Jan 30 22:42:27 np0005603435 kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Jan 30 22:42:27 np0005603435 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 30 22:42:27 np0005603435 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 30 22:42:27 np0005603435 kernel: BIOS-provided physical RAM map:
Jan 30 22:42:27 np0005603435 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 22:42:27 np0005603435 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 22:42:27 np0005603435 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 22:42:27 np0005603435 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 30 22:42:27 np0005603435 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 30 22:42:27 np0005603435 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 22:42:27 np0005603435 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 22:42:27 np0005603435 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 30 22:42:27 np0005603435 kernel: NX (Execute Disable) protection: active
Jan 30 22:42:27 np0005603435 kernel: APIC: Static calls initialized
Jan 30 22:42:27 np0005603435 kernel: SMBIOS 2.8 present.
Jan 30 22:42:27 np0005603435 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 30 22:42:27 np0005603435 kernel: Hypervisor detected: KVM
Jan 30 22:42:27 np0005603435 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 22:42:27 np0005603435 kernel: kvm-clock: using sched offset of 4364673630 cycles
Jan 30 22:42:27 np0005603435 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 22:42:27 np0005603435 kernel: tsc: Detected 2800.000 MHz processor
Jan 30 22:42:27 np0005603435 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 30 22:42:27 np0005603435 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 22:42:27 np0005603435 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 30 22:42:27 np0005603435 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 30 22:42:27 np0005603435 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 30 22:42:27 np0005603435 kernel: Using GB pages for direct mapping
Jan 30 22:42:27 np0005603435 kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Jan 30 22:42:27 np0005603435 kernel: ACPI: Early table checksum verification disabled
Jan 30 22:42:27 np0005603435 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 30 22:42:27 np0005603435 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 30 22:42:27 np0005603435 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 30 22:42:27 np0005603435 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 30 22:42:27 np0005603435 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 30 22:42:27 np0005603435 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 30 22:42:27 np0005603435 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 30 22:42:27 np0005603435 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 30 22:42:27 np0005603435 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 30 22:42:27 np0005603435 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 30 22:42:27 np0005603435 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 30 22:42:27 np0005603435 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 30 22:42:27 np0005603435 kernel: No NUMA configuration found
Jan 30 22:42:27 np0005603435 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 30 22:42:27 np0005603435 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 30 22:42:27 np0005603435 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 30 22:42:27 np0005603435 kernel: Zone ranges:
Jan 30 22:42:27 np0005603435 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 22:42:27 np0005603435 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 22:42:27 np0005603435 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 30 22:42:27 np0005603435 kernel:  Device   empty
Jan 30 22:42:27 np0005603435 kernel: Movable zone start for each node
Jan 30 22:42:27 np0005603435 kernel: Early memory node ranges
Jan 30 22:42:27 np0005603435 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 22:42:27 np0005603435 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 30 22:42:27 np0005603435 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 30 22:42:27 np0005603435 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 30 22:42:27 np0005603435 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 22:42:27 np0005603435 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 22:42:27 np0005603435 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 30 22:42:27 np0005603435 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 22:42:27 np0005603435 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 22:42:27 np0005603435 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 22:42:27 np0005603435 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 22:42:27 np0005603435 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 22:42:27 np0005603435 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 22:42:27 np0005603435 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 22:42:27 np0005603435 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 22:42:27 np0005603435 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 22:42:27 np0005603435 kernel: TSC deadline timer available
Jan 30 22:42:27 np0005603435 kernel: CPU topo: Max. logical packages:   8
Jan 30 22:42:27 np0005603435 kernel: CPU topo: Max. logical dies:       8
Jan 30 22:42:27 np0005603435 kernel: CPU topo: Max. dies per package:   1
Jan 30 22:42:27 np0005603435 kernel: CPU topo: Max. threads per core:   1
Jan 30 22:42:27 np0005603435 kernel: CPU topo: Num. cores per package:     1
Jan 30 22:42:27 np0005603435 kernel: CPU topo: Num. threads per package:   1
Jan 30 22:42:27 np0005603435 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 30 22:42:27 np0005603435 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 22:42:27 np0005603435 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 30 22:42:27 np0005603435 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 30 22:42:27 np0005603435 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 30 22:42:27 np0005603435 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 30 22:42:27 np0005603435 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 30 22:42:27 np0005603435 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 30 22:42:27 np0005603435 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 30 22:42:27 np0005603435 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 30 22:42:27 np0005603435 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 30 22:42:27 np0005603435 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 30 22:42:27 np0005603435 kernel: Booting paravirtualized kernel on KVM
Jan 30 22:42:27 np0005603435 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 22:42:27 np0005603435 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 30 22:42:27 np0005603435 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 30 22:42:27 np0005603435 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 22:42:27 np0005603435 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 30 22:42:27 np0005603435 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Jan 30 22:42:27 np0005603435 kernel: random: crng init done
Jan 30 22:42:27 np0005603435 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 30 22:42:27 np0005603435 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 22:42:27 np0005603435 kernel: Fallback order for Node 0: 0 
Jan 30 22:42:27 np0005603435 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 30 22:42:27 np0005603435 kernel: Policy zone: Normal
Jan 30 22:42:27 np0005603435 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 22:42:27 np0005603435 kernel: software IO TLB: area num 8.
Jan 30 22:42:27 np0005603435 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 30 22:42:27 np0005603435 kernel: ftrace: allocating 49438 entries in 194 pages
Jan 30 22:42:27 np0005603435 kernel: ftrace: allocated 194 pages with 3 groups
Jan 30 22:42:27 np0005603435 kernel: Dynamic Preempt: voluntary
Jan 30 22:42:27 np0005603435 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 22:42:27 np0005603435 kernel: rcu: 	RCU event tracing is enabled.
Jan 30 22:42:27 np0005603435 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 30 22:42:27 np0005603435 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 30 22:42:27 np0005603435 kernel: 	Rude variant of Tasks RCU enabled.
Jan 30 22:42:27 np0005603435 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 30 22:42:27 np0005603435 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 22:42:27 np0005603435 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 30 22:42:27 np0005603435 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 30 22:42:27 np0005603435 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 30 22:42:27 np0005603435 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 30 22:42:27 np0005603435 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 30 22:42:27 np0005603435 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 22:42:27 np0005603435 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 30 22:42:27 np0005603435 kernel: Console: colour VGA+ 80x25
Jan 30 22:42:27 np0005603435 kernel: printk: console [ttyS0] enabled
Jan 30 22:42:27 np0005603435 kernel: ACPI: Core revision 20230331
Jan 30 22:42:27 np0005603435 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 22:42:27 np0005603435 kernel: x2apic enabled
Jan 30 22:42:27 np0005603435 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 22:42:27 np0005603435 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 22:42:27 np0005603435 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 30 22:42:27 np0005603435 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 22:42:27 np0005603435 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 22:42:27 np0005603435 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 22:42:27 np0005603435 kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Jan 30 22:42:27 np0005603435 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 22:42:27 np0005603435 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 22:42:27 np0005603435 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 22:42:27 np0005603435 kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Jan 30 22:42:27 np0005603435 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 22:42:27 np0005603435 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 30 22:42:27 np0005603435 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 22:42:27 np0005603435 kernel: active return thunk: retbleed_return_thunk
Jan 30 22:42:27 np0005603435 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 22:42:27 np0005603435 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 22:42:27 np0005603435 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 22:42:27 np0005603435 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 22:42:27 np0005603435 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 30 22:42:27 np0005603435 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 22:42:27 np0005603435 kernel: Freeing SMP alternatives memory: 40K
Jan 30 22:42:27 np0005603435 kernel: pid_max: default: 32768 minimum: 301
Jan 30 22:42:27 np0005603435 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 30 22:42:27 np0005603435 kernel: landlock: Up and running.
Jan 30 22:42:27 np0005603435 kernel: Yama: becoming mindful.
Jan 30 22:42:27 np0005603435 kernel: SELinux:  Initializing.
Jan 30 22:42:27 np0005603435 kernel: LSM support for eBPF active
Jan 30 22:42:27 np0005603435 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 22:42:27 np0005603435 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 22:42:27 np0005603435 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 22:42:27 np0005603435 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 22:42:27 np0005603435 kernel: ... version:                0
Jan 30 22:42:27 np0005603435 kernel: ... bit width:              48
Jan 30 22:42:27 np0005603435 kernel: ... generic registers:      6
Jan 30 22:42:27 np0005603435 kernel: ... value mask:             0000ffffffffffff
Jan 30 22:42:27 np0005603435 kernel: ... max period:             00007fffffffffff
Jan 30 22:42:27 np0005603435 kernel: ... fixed-purpose events:   0
Jan 30 22:42:27 np0005603435 kernel: ... event mask:             000000000000003f
Jan 30 22:42:27 np0005603435 kernel: signal: max sigframe size: 1776
Jan 30 22:42:27 np0005603435 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 22:42:27 np0005603435 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 30 22:42:27 np0005603435 kernel: smp: Bringing up secondary CPUs ...
Jan 30 22:42:27 np0005603435 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 22:42:27 np0005603435 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 30 22:42:27 np0005603435 kernel: smp: Brought up 1 node, 8 CPUs
Jan 30 22:42:27 np0005603435 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 30 22:42:27 np0005603435 kernel: node 0 deferred pages initialised in 10ms
Jan 30 22:42:27 np0005603435 kernel: Memory: 7763476K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618408K reserved, 0K cma-reserved)
Jan 30 22:42:27 np0005603435 kernel: devtmpfs: initialized
Jan 30 22:42:27 np0005603435 kernel: x86/mm: Memory block size: 128MB
Jan 30 22:42:27 np0005603435 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 22:42:27 np0005603435 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 30 22:42:27 np0005603435 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 22:42:27 np0005603435 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 22:42:27 np0005603435 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 30 22:42:27 np0005603435 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 22:42:27 np0005603435 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 22:42:27 np0005603435 kernel: audit: initializing netlink subsys (disabled)
Jan 30 22:42:27 np0005603435 kernel: audit: type=2000 audit(1769830946.647:1): state=initialized audit_enabled=0 res=1
Jan 30 22:42:27 np0005603435 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 30 22:42:27 np0005603435 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 22:42:27 np0005603435 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 22:42:27 np0005603435 kernel: cpuidle: using governor menu
Jan 30 22:42:27 np0005603435 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 22:42:27 np0005603435 kernel: PCI: Using configuration type 1 for base access
Jan 30 22:42:27 np0005603435 kernel: PCI: Using configuration type 1 for extended access
Jan 30 22:42:27 np0005603435 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 22:42:27 np0005603435 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 22:42:27 np0005603435 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 22:42:27 np0005603435 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 22:42:27 np0005603435 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 22:42:27 np0005603435 kernel: Demotion targets for Node 0: null
Jan 30 22:42:27 np0005603435 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 22:42:27 np0005603435 kernel: ACPI: Added _OSI(Module Device)
Jan 30 22:42:27 np0005603435 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 22:42:27 np0005603435 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 22:42:27 np0005603435 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 22:42:27 np0005603435 kernel: ACPI: Interpreter enabled
Jan 30 22:42:27 np0005603435 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 30 22:42:27 np0005603435 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 22:42:27 np0005603435 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 22:42:27 np0005603435 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 22:42:27 np0005603435 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 30 22:42:27 np0005603435 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 22:42:27 np0005603435 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [3] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [4] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [5] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [6] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [7] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [8] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [9] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [10] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [11] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [12] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [13] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [14] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [15] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [16] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [17] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [18] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [19] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [20] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [21] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [22] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [23] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [24] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [25] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [26] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [27] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [28] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [29] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [30] registered
Jan 30 22:42:27 np0005603435 kernel: acpiphp: Slot [31] registered
Jan 30 22:42:27 np0005603435 kernel: PCI host bridge to bus 0000:00
Jan 30 22:42:27 np0005603435 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 30 22:42:27 np0005603435 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 30 22:42:27 np0005603435 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 22:42:27 np0005603435 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 22:42:27 np0005603435 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 30 22:42:27 np0005603435 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 30 22:42:27 np0005603435 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 22:42:27 np0005603435 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 22:42:27 np0005603435 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 22:42:27 np0005603435 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 22:42:27 np0005603435 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 22:42:27 np0005603435 kernel: iommu: Default domain type: Translated
Jan 30 22:42:27 np0005603435 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 22:42:27 np0005603435 kernel: SCSI subsystem initialized
Jan 30 22:42:27 np0005603435 kernel: ACPI: bus type USB registered
Jan 30 22:42:27 np0005603435 kernel: usbcore: registered new interface driver usbfs
Jan 30 22:42:27 np0005603435 kernel: usbcore: registered new interface driver hub
Jan 30 22:42:27 np0005603435 kernel: usbcore: registered new device driver usb
Jan 30 22:42:27 np0005603435 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 30 22:42:27 np0005603435 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 30 22:42:27 np0005603435 kernel: PTP clock support registered
Jan 30 22:42:27 np0005603435 kernel: EDAC MC: Ver: 3.0.0
Jan 30 22:42:27 np0005603435 kernel: NetLabel: Initializing
Jan 30 22:42:27 np0005603435 kernel: NetLabel:  domain hash size = 128
Jan 30 22:42:27 np0005603435 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 30 22:42:27 np0005603435 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 30 22:42:27 np0005603435 kernel: PCI: Using ACPI for IRQ routing
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 22:42:27 np0005603435 kernel: vgaarb: loaded
Jan 30 22:42:27 np0005603435 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 22:42:27 np0005603435 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 22:42:27 np0005603435 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 22:42:27 np0005603435 kernel: pnp: PnP ACPI init
Jan 30 22:42:27 np0005603435 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 22:42:27 np0005603435 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 22:42:27 np0005603435 kernel: NET: Registered PF_INET protocol family
Jan 30 22:42:27 np0005603435 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 22:42:27 np0005603435 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 30 22:42:27 np0005603435 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 22:42:27 np0005603435 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 22:42:27 np0005603435 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 30 22:42:27 np0005603435 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 30 22:42:27 np0005603435 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 30 22:42:27 np0005603435 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 22:42:27 np0005603435 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 22:42:27 np0005603435 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 22:42:27 np0005603435 kernel: NET: Registered PF_XDP protocol family
Jan 30 22:42:27 np0005603435 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 30 22:42:27 np0005603435 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 30 22:42:27 np0005603435 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 22:42:27 np0005603435 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 30 22:42:27 np0005603435 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 22:42:27 np0005603435 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 22:42:27 np0005603435 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 31990 usecs
Jan 30 22:42:27 np0005603435 kernel: PCI: CLS 0 bytes, default 64
Jan 30 22:42:27 np0005603435 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 30 22:42:27 np0005603435 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 30 22:42:27 np0005603435 kernel: ACPI: bus type thunderbolt registered
Jan 30 22:42:27 np0005603435 kernel: Trying to unpack rootfs image as initramfs...
Jan 30 22:42:27 np0005603435 kernel: Initialise system trusted keyrings
Jan 30 22:42:27 np0005603435 kernel: Key type blacklist registered
Jan 30 22:42:27 np0005603435 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 30 22:42:27 np0005603435 kernel: zbud: loaded
Jan 30 22:42:27 np0005603435 kernel: integrity: Platform Keyring initialized
Jan 30 22:42:27 np0005603435 kernel: integrity: Machine keyring initialized
Jan 30 22:42:27 np0005603435 kernel: Freeing initrd memory: 88000K
Jan 30 22:42:27 np0005603435 kernel: NET: Registered PF_ALG protocol family
Jan 30 22:42:27 np0005603435 kernel: xor: automatically using best checksumming function   avx       
Jan 30 22:42:27 np0005603435 kernel: Key type asymmetric registered
Jan 30 22:42:27 np0005603435 kernel: Asymmetric key parser 'x509' registered
Jan 30 22:42:27 np0005603435 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 30 22:42:27 np0005603435 kernel: io scheduler mq-deadline registered
Jan 30 22:42:27 np0005603435 kernel: io scheduler kyber registered
Jan 30 22:42:27 np0005603435 kernel: io scheduler bfq registered
Jan 30 22:42:27 np0005603435 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 30 22:42:27 np0005603435 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 30 22:42:27 np0005603435 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 30 22:42:27 np0005603435 kernel: ACPI: button: Power Button [PWRF]
Jan 30 22:42:27 np0005603435 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 30 22:42:27 np0005603435 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 22:42:27 np0005603435 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 22:42:27 np0005603435 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 22:42:27 np0005603435 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 22:42:27 np0005603435 kernel: Non-volatile memory driver v1.3
Jan 30 22:42:27 np0005603435 kernel: rdac: device handler registered
Jan 30 22:42:27 np0005603435 kernel: hp_sw: device handler registered
Jan 30 22:42:27 np0005603435 kernel: emc: device handler registered
Jan 30 22:42:27 np0005603435 kernel: alua: device handler registered
Jan 30 22:42:27 np0005603435 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 30 22:42:27 np0005603435 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 30 22:42:27 np0005603435 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 30 22:42:27 np0005603435 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 30 22:42:27 np0005603435 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 30 22:42:27 np0005603435 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 30 22:42:27 np0005603435 kernel: usb usb1: Product: UHCI Host Controller
Jan 30 22:42:27 np0005603435 kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Jan 30 22:42:27 np0005603435 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 30 22:42:27 np0005603435 kernel: hub 1-0:1.0: USB hub found
Jan 30 22:42:27 np0005603435 kernel: hub 1-0:1.0: 2 ports detected
Jan 30 22:42:27 np0005603435 kernel: usbcore: registered new interface driver usbserial_generic
Jan 30 22:42:27 np0005603435 kernel: usbserial: USB Serial support registered for generic
Jan 30 22:42:27 np0005603435 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 22:42:27 np0005603435 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 22:42:27 np0005603435 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 22:42:27 np0005603435 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 22:42:27 np0005603435 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 30 22:42:27 np0005603435 kernel: rtc_cmos 00:04: registered as rtc0
Jan 30 22:42:27 np0005603435 kernel: rtc_cmos 00:04: setting system clock to 2026-01-31T03:42:26 UTC (1769830946)
Jan 30 22:42:27 np0005603435 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 30 22:42:27 np0005603435 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 30 22:42:27 np0005603435 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 22:42:27 np0005603435 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 22:42:27 np0005603435 kernel: usbcore: registered new interface driver usbhid
Jan 30 22:42:27 np0005603435 kernel: usbhid: USB HID core driver
Jan 30 22:42:27 np0005603435 kernel: drop_monitor: Initializing network drop monitor service
Jan 30 22:42:27 np0005603435 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 30 22:42:27 np0005603435 kernel: Initializing XFRM netlink socket
Jan 30 22:42:27 np0005603435 kernel: NET: Registered PF_INET6 protocol family
Jan 30 22:42:27 np0005603435 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 30 22:42:27 np0005603435 kernel: Segment Routing with IPv6
Jan 30 22:42:27 np0005603435 kernel: NET: Registered PF_PACKET protocol family
Jan 30 22:42:27 np0005603435 kernel: mpls_gso: MPLS GSO support
Jan 30 22:42:27 np0005603435 kernel: IPI shorthand broadcast: enabled
Jan 30 22:42:27 np0005603435 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 22:42:27 np0005603435 kernel: AES CTR mode by8 optimization enabled
Jan 30 22:42:27 np0005603435 kernel: sched_clock: Marking stable (990002400, 140097630)->(1254901260, -124801230)
Jan 30 22:42:27 np0005603435 kernel: registered taskstats version 1
Jan 30 22:42:27 np0005603435 kernel: Loading compiled-in X.509 certificates
Jan 30 22:42:27 np0005603435 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 30 22:42:27 np0005603435 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 30 22:42:27 np0005603435 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 30 22:42:27 np0005603435 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 30 22:42:27 np0005603435 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 30 22:42:27 np0005603435 kernel: Demotion targets for Node 0: null
Jan 30 22:42:27 np0005603435 kernel: page_owner is disabled
Jan 30 22:42:27 np0005603435 kernel: Key type .fscrypt registered
Jan 30 22:42:27 np0005603435 kernel: Key type fscrypt-provisioning registered
Jan 30 22:42:27 np0005603435 kernel: Key type big_key registered
Jan 30 22:42:27 np0005603435 kernel: Key type encrypted registered
Jan 30 22:42:27 np0005603435 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 22:42:27 np0005603435 kernel: Loading compiled-in module X.509 certificates
Jan 30 22:42:27 np0005603435 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 30 22:42:27 np0005603435 kernel: ima: Allocated hash algorithm: sha256
Jan 30 22:42:27 np0005603435 kernel: ima: No architecture policies found
Jan 30 22:42:27 np0005603435 kernel: evm: Initialising EVM extended attributes:
Jan 30 22:42:27 np0005603435 kernel: evm: security.selinux
Jan 30 22:42:27 np0005603435 kernel: evm: security.SMACK64 (disabled)
Jan 30 22:42:27 np0005603435 kernel: evm: security.SMACK64EXEC (disabled)
Jan 30 22:42:27 np0005603435 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 30 22:42:27 np0005603435 kernel: evm: security.SMACK64MMAP (disabled)
Jan 30 22:42:27 np0005603435 kernel: evm: security.apparmor (disabled)
Jan 30 22:42:27 np0005603435 kernel: evm: security.ima
Jan 30 22:42:27 np0005603435 kernel: evm: security.capability
Jan 30 22:42:27 np0005603435 kernel: evm: HMAC attrs: 0x1
Jan 30 22:42:27 np0005603435 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 30 22:42:27 np0005603435 kernel: Running certificate verification RSA selftest
Jan 30 22:42:27 np0005603435 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 30 22:42:27 np0005603435 kernel: Running certificate verification ECDSA selftest
Jan 30 22:42:27 np0005603435 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 30 22:42:27 np0005603435 kernel: clk: Disabling unused clocks
Jan 30 22:42:27 np0005603435 kernel: Freeing unused decrypted memory: 2028K
Jan 30 22:42:27 np0005603435 kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 30 22:42:27 np0005603435 kernel: Write protecting the kernel read-only data: 30720k
Jan 30 22:42:27 np0005603435 kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Jan 30 22:42:27 np0005603435 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 30 22:42:27 np0005603435 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 30 22:42:27 np0005603435 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 30 22:42:27 np0005603435 kernel: usb 1-1: Manufacturer: QEMU
Jan 30 22:42:27 np0005603435 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 30 22:42:27 np0005603435 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 30 22:42:27 np0005603435 kernel: Run /init as init process
Jan 30 22:42:27 np0005603435 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 30 22:42:27 np0005603435 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 30 22:42:27 np0005603435 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 30 22:42:27 np0005603435 systemd: Detected virtualization kvm.
Jan 30 22:42:27 np0005603435 systemd: Detected architecture x86-64.
Jan 30 22:42:27 np0005603435 systemd: Running in initrd.
Jan 30 22:42:27 np0005603435 systemd: No hostname configured, using default hostname.
Jan 30 22:42:27 np0005603435 systemd: Hostname set to <localhost>.
Jan 30 22:42:27 np0005603435 systemd: Initializing machine ID from VM UUID.
Jan 30 22:42:27 np0005603435 systemd: Queued start job for default target Initrd Default Target.
Jan 30 22:42:27 np0005603435 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 30 22:42:27 np0005603435 systemd: Reached target Local Encrypted Volumes.
Jan 30 22:42:27 np0005603435 systemd: Reached target Initrd /usr File System.
Jan 30 22:42:27 np0005603435 systemd: Reached target Local File Systems.
Jan 30 22:42:27 np0005603435 systemd: Reached target Path Units.
Jan 30 22:42:27 np0005603435 systemd: Reached target Slice Units.
Jan 30 22:42:27 np0005603435 systemd: Reached target Swaps.
Jan 30 22:42:27 np0005603435 systemd: Reached target Timer Units.
Jan 30 22:42:27 np0005603435 systemd: Listening on D-Bus System Message Bus Socket.
Jan 30 22:42:27 np0005603435 systemd: Listening on Journal Socket (/dev/log).
Jan 30 22:42:27 np0005603435 systemd: Listening on Journal Socket.
Jan 30 22:42:27 np0005603435 systemd: Listening on udev Control Socket.
Jan 30 22:42:27 np0005603435 systemd: Listening on udev Kernel Socket.
Jan 30 22:42:27 np0005603435 systemd: Reached target Socket Units.
Jan 30 22:42:27 np0005603435 systemd: Starting Create List of Static Device Nodes...
Jan 30 22:42:27 np0005603435 systemd: Starting Journal Service...
Jan 30 22:42:27 np0005603435 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 30 22:42:27 np0005603435 systemd: Starting Apply Kernel Variables...
Jan 30 22:42:27 np0005603435 systemd: Starting Create System Users...
Jan 30 22:42:27 np0005603435 systemd: Starting Setup Virtual Console...
Jan 30 22:42:27 np0005603435 systemd: Finished Create List of Static Device Nodes.
Jan 30 22:42:27 np0005603435 systemd: Finished Apply Kernel Variables.
Jan 30 22:42:27 np0005603435 systemd: Finished Create System Users.
Jan 30 22:42:27 np0005603435 systemd-journald[307]: Journal started
Jan 30 22:42:27 np0005603435 systemd-journald[307]: Runtime Journal (/run/log/journal/e56e1981badb4c56a12dc458e4e6bca8) is 8.0M, max 153.6M, 145.6M free.
Jan 30 22:42:27 np0005603435 systemd-sysusers[312]: Creating group 'users' with GID 100.
Jan 30 22:42:27 np0005603435 systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Jan 30 22:42:27 np0005603435 systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 30 22:42:27 np0005603435 systemd: Started Journal Service.
Jan 30 22:42:27 np0005603435 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 30 22:42:27 np0005603435 systemd[1]: Starting Create Volatile Files and Directories...
Jan 30 22:42:27 np0005603435 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 30 22:42:27 np0005603435 systemd[1]: Finished Create Volatile Files and Directories.
Jan 30 22:42:27 np0005603435 systemd[1]: Finished Setup Virtual Console.
Jan 30 22:42:27 np0005603435 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 30 22:42:27 np0005603435 systemd[1]: Starting dracut cmdline hook...
Jan 30 22:42:27 np0005603435 dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Jan 30 22:42:27 np0005603435 dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 30 22:42:27 np0005603435 systemd[1]: Finished dracut cmdline hook.
Jan 30 22:42:27 np0005603435 systemd[1]: Starting dracut pre-udev hook...
Jan 30 22:42:27 np0005603435 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 22:42:27 np0005603435 kernel: device-mapper: uevent: version 1.0.3
Jan 30 22:42:27 np0005603435 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 30 22:42:27 np0005603435 kernel: RPC: Registered named UNIX socket transport module.
Jan 30 22:42:27 np0005603435 kernel: RPC: Registered udp transport module.
Jan 30 22:42:27 np0005603435 kernel: RPC: Registered tcp transport module.
Jan 30 22:42:27 np0005603435 kernel: RPC: Registered tcp-with-tls transport module.
Jan 30 22:42:27 np0005603435 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 30 22:42:27 np0005603435 rpc.statd[441]: Version 2.5.4 starting
Jan 30 22:42:27 np0005603435 rpc.statd[441]: Initializing NSM state
Jan 30 22:42:27 np0005603435 rpc.idmapd[446]: Setting log level to 0
Jan 30 22:42:27 np0005603435 systemd[1]: Finished dracut pre-udev hook.
Jan 30 22:42:27 np0005603435 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 30 22:42:27 np0005603435 systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Jan 30 22:42:27 np0005603435 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 30 22:42:27 np0005603435 systemd[1]: Starting dracut pre-trigger hook...
Jan 30 22:42:27 np0005603435 systemd[1]: Finished dracut pre-trigger hook.
Jan 30 22:42:27 np0005603435 systemd[1]: Starting Coldplug All udev Devices...
Jan 30 22:42:28 np0005603435 systemd[1]: Created slice Slice /system/modprobe.
Jan 30 22:42:28 np0005603435 systemd[1]: Starting Load Kernel Module configfs...
Jan 30 22:42:28 np0005603435 systemd[1]: Finished Coldplug All udev Devices.
Jan 30 22:42:28 np0005603435 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 22:42:28 np0005603435 systemd[1]: Finished Load Kernel Module configfs.
Jan 30 22:42:28 np0005603435 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 30 22:42:28 np0005603435 systemd[1]: Reached target Network.
Jan 30 22:42:28 np0005603435 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 30 22:42:28 np0005603435 systemd[1]: Starting dracut initqueue hook...
Jan 30 22:42:28 np0005603435 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 30 22:42:28 np0005603435 kernel: scsi host0: ata_piix
Jan 30 22:42:28 np0005603435 kernel: scsi host1: ata_piix
Jan 30 22:42:28 np0005603435 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 30 22:42:28 np0005603435 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 30 22:42:28 np0005603435 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 30 22:42:28 np0005603435 kernel: vda: vda1
Jan 30 22:42:28 np0005603435 systemd[1]: Mounting Kernel Configuration File System...
Jan 30 22:42:28 np0005603435 systemd[1]: Mounted Kernel Configuration File System.
Jan 30 22:42:28 np0005603435 systemd[1]: Reached target System Initialization.
Jan 30 22:42:28 np0005603435 systemd[1]: Reached target Basic System.
Jan 30 22:42:28 np0005603435 kernel: ata1: found unknown device (class 0)
Jan 30 22:42:28 np0005603435 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 30 22:42:28 np0005603435 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 30 22:42:28 np0005603435 systemd-udevd[480]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 22:42:28 np0005603435 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 30 22:42:28 np0005603435 systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 30 22:42:28 np0005603435 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 30 22:42:28 np0005603435 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 22:42:28 np0005603435 systemd[1]: Reached target Initrd Root Device.
Jan 30 22:42:28 np0005603435 systemd[1]: Finished dracut initqueue hook.
Jan 30 22:42:28 np0005603435 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 30 22:42:28 np0005603435 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 30 22:42:28 np0005603435 systemd[1]: Reached target Remote File Systems.
Jan 30 22:42:28 np0005603435 systemd[1]: Starting dracut pre-mount hook...
Jan 30 22:42:28 np0005603435 systemd[1]: Finished dracut pre-mount hook.
Jan 30 22:42:28 np0005603435 systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Jan 30 22:42:28 np0005603435 systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Jan 30 22:42:28 np0005603435 systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 30 22:42:28 np0005603435 systemd[1]: Mounting /sysroot...
Jan 30 22:42:29 np0005603435 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 30 22:42:29 np0005603435 kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Jan 30 22:42:29 np0005603435 kernel: XFS (vda1): Ending clean mount
Jan 30 22:42:29 np0005603435 systemd[1]: Mounted /sysroot.
Jan 30 22:42:29 np0005603435 systemd[1]: Reached target Initrd Root File System.
Jan 30 22:42:29 np0005603435 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 30 22:42:29 np0005603435 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 30 22:42:29 np0005603435 systemd[1]: Reached target Initrd File Systems.
Jan 30 22:42:29 np0005603435 systemd[1]: Reached target Initrd Default Target.
Jan 30 22:42:29 np0005603435 systemd[1]: Starting dracut mount hook...
Jan 30 22:42:29 np0005603435 systemd[1]: Finished dracut mount hook.
Jan 30 22:42:29 np0005603435 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 30 22:42:29 np0005603435 rpc.idmapd[446]: exiting on signal 15
Jan 30 22:42:29 np0005603435 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 30 22:42:29 np0005603435 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped target Network.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped target Timer Units.
Jan 30 22:42:29 np0005603435 systemd[1]: dbus.socket: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 30 22:42:29 np0005603435 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped target Initrd Default Target.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped target Basic System.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped target Initrd Root Device.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped target Initrd /usr File System.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped target Path Units.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped target Remote File Systems.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped target Slice Units.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped target Socket Units.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped target System Initialization.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped target Local File Systems.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped target Swaps.
Jan 30 22:42:29 np0005603435 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped dracut mount hook.
Jan 30 22:42:29 np0005603435 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped dracut pre-mount hook.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 30 22:42:29 np0005603435 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 30 22:42:29 np0005603435 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped dracut initqueue hook.
Jan 30 22:42:29 np0005603435 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped Apply Kernel Variables.
Jan 30 22:42:29 np0005603435 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 30 22:42:29 np0005603435 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped Coldplug All udev Devices.
Jan 30 22:42:29 np0005603435 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped dracut pre-trigger hook.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 30 22:42:29 np0005603435 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped Setup Virtual Console.
Jan 30 22:42:29 np0005603435 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 30 22:42:29 np0005603435 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Closed udev Control Socket.
Jan 30 22:42:29 np0005603435 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Closed udev Kernel Socket.
Jan 30 22:42:29 np0005603435 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped dracut pre-udev hook.
Jan 30 22:42:29 np0005603435 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped dracut cmdline hook.
Jan 30 22:42:29 np0005603435 systemd[1]: Starting Cleanup udev Database...
Jan 30 22:42:29 np0005603435 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 30 22:42:29 np0005603435 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 30 22:42:29 np0005603435 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Stopped Create System Users.
Jan 30 22:42:29 np0005603435 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 30 22:42:29 np0005603435 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 22:42:29 np0005603435 systemd[1]: Finished Cleanup udev Database.
Jan 30 22:42:29 np0005603435 systemd[1]: Reached target Switch Root.
Jan 30 22:42:29 np0005603435 systemd[1]: Starting Switch Root...
Jan 30 22:42:29 np0005603435 systemd[1]: Switching root.
Jan 30 22:42:29 np0005603435 systemd-journald[307]: Journal stopped
Jan 30 22:42:30 np0005603435 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 30 22:42:30 np0005603435 kernel: audit: type=1404 audit(1769830949.575:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 30 22:42:30 np0005603435 kernel: SELinux:  policy capability network_peer_controls=1
Jan 30 22:42:30 np0005603435 kernel: SELinux:  policy capability open_perms=1
Jan 30 22:42:30 np0005603435 kernel: SELinux:  policy capability extended_socket_class=1
Jan 30 22:42:30 np0005603435 kernel: SELinux:  policy capability always_check_network=0
Jan 30 22:42:30 np0005603435 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 30 22:42:30 np0005603435 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 30 22:42:30 np0005603435 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 30 22:42:30 np0005603435 kernel: audit: type=1403 audit(1769830949.709:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 22:42:30 np0005603435 systemd: Successfully loaded SELinux policy in 138.318ms.
Jan 30 22:42:30 np0005603435 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 33.512ms.
Jan 30 22:42:30 np0005603435 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 30 22:42:30 np0005603435 systemd: Detected virtualization kvm.
Jan 30 22:42:30 np0005603435 systemd: Detected architecture x86-64.
Jan 30 22:42:30 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 22:42:30 np0005603435 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 30 22:42:30 np0005603435 systemd: Stopped Switch Root.
Jan 30 22:42:30 np0005603435 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 22:42:30 np0005603435 systemd: Created slice Slice /system/getty.
Jan 30 22:42:30 np0005603435 systemd: Created slice Slice /system/serial-getty.
Jan 30 22:42:30 np0005603435 systemd: Created slice Slice /system/sshd-keygen.
Jan 30 22:42:30 np0005603435 systemd: Created slice User and Session Slice.
Jan 30 22:42:30 np0005603435 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 30 22:42:30 np0005603435 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 30 22:42:30 np0005603435 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 30 22:42:30 np0005603435 systemd: Reached target Local Encrypted Volumes.
Jan 30 22:42:30 np0005603435 systemd: Stopped target Switch Root.
Jan 30 22:42:30 np0005603435 systemd: Stopped target Initrd File Systems.
Jan 30 22:42:30 np0005603435 systemd: Stopped target Initrd Root File System.
Jan 30 22:42:30 np0005603435 systemd: Reached target Local Integrity Protected Volumes.
Jan 30 22:42:30 np0005603435 systemd: Reached target Path Units.
Jan 30 22:42:30 np0005603435 systemd: Reached target rpc_pipefs.target.
Jan 30 22:42:30 np0005603435 systemd: Reached target Slice Units.
Jan 30 22:42:30 np0005603435 systemd: Reached target Swaps.
Jan 30 22:42:30 np0005603435 systemd: Reached target Local Verity Protected Volumes.
Jan 30 22:42:30 np0005603435 systemd: Listening on RPCbind Server Activation Socket.
Jan 30 22:42:30 np0005603435 systemd: Reached target RPC Port Mapper.
Jan 30 22:42:30 np0005603435 systemd: Listening on Process Core Dump Socket.
Jan 30 22:42:30 np0005603435 systemd: Listening on initctl Compatibility Named Pipe.
Jan 30 22:42:30 np0005603435 systemd: Listening on udev Control Socket.
Jan 30 22:42:30 np0005603435 systemd: Listening on udev Kernel Socket.
Jan 30 22:42:30 np0005603435 systemd: Mounting Huge Pages File System...
Jan 30 22:42:30 np0005603435 systemd: Mounting POSIX Message Queue File System...
Jan 30 22:42:30 np0005603435 systemd: Mounting Kernel Debug File System...
Jan 30 22:42:30 np0005603435 systemd: Mounting Kernel Trace File System...
Jan 30 22:42:30 np0005603435 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 30 22:42:30 np0005603435 systemd: Starting Create List of Static Device Nodes...
Jan 30 22:42:30 np0005603435 systemd: Starting Load Kernel Module configfs...
Jan 30 22:42:30 np0005603435 systemd: Starting Load Kernel Module drm...
Jan 30 22:42:30 np0005603435 systemd: Starting Load Kernel Module efi_pstore...
Jan 30 22:42:30 np0005603435 systemd: Starting Load Kernel Module fuse...
Jan 30 22:42:30 np0005603435 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 30 22:42:30 np0005603435 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 30 22:42:30 np0005603435 systemd: Stopped File System Check on Root Device.
Jan 30 22:42:30 np0005603435 systemd: Stopped Journal Service.
Jan 30 22:42:30 np0005603435 systemd: Starting Journal Service...
Jan 30 22:42:30 np0005603435 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 30 22:42:30 np0005603435 systemd: Starting Generate network units from Kernel command line...
Jan 30 22:42:30 np0005603435 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 30 22:42:30 np0005603435 systemd: Starting Remount Root and Kernel File Systems...
Jan 30 22:42:30 np0005603435 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 22:42:30 np0005603435 systemd: Starting Apply Kernel Variables...
Jan 30 22:42:30 np0005603435 systemd: Starting Coldplug All udev Devices...
Jan 30 22:42:30 np0005603435 kernel: fuse: init (API version 7.37)
Jan 30 22:42:30 np0005603435 systemd: Mounted Huge Pages File System.
Jan 30 22:42:30 np0005603435 systemd: Mounted POSIX Message Queue File System.
Jan 30 22:42:30 np0005603435 systemd: Mounted Kernel Debug File System.
Jan 30 22:42:30 np0005603435 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 30 22:42:30 np0005603435 systemd: Mounted Kernel Trace File System.
Jan 30 22:42:30 np0005603435 systemd-journald[679]: Journal started
Jan 30 22:42:30 np0005603435 systemd-journald[679]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 30 22:42:30 np0005603435 systemd[1]: Queued start job for default target Multi-User System.
Jan 30 22:42:30 np0005603435 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 22:42:30 np0005603435 systemd: Started Journal Service.
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Create List of Static Device Nodes.
Jan 30 22:42:30 np0005603435 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Load Kernel Module configfs.
Jan 30 22:42:30 np0005603435 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 30 22:42:30 np0005603435 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Load Kernel Module fuse.
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Generate network units from Kernel command line.
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Apply Kernel Variables.
Jan 30 22:42:30 np0005603435 kernel: ACPI: bus type drm_connector registered
Jan 30 22:42:30 np0005603435 systemd[1]: Mounting FUSE Control File System...
Jan 30 22:42:30 np0005603435 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 30 22:42:30 np0005603435 systemd[1]: Starting Rebuild Hardware Database...
Jan 30 22:42:30 np0005603435 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 30 22:42:30 np0005603435 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 22:42:30 np0005603435 systemd[1]: Starting Load/Save OS Random Seed...
Jan 30 22:42:30 np0005603435 systemd[1]: Starting Create System Users...
Jan 30 22:42:30 np0005603435 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Load Kernel Module drm.
Jan 30 22:42:30 np0005603435 systemd-journald[679]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 30 22:42:30 np0005603435 systemd-journald[679]: Received client request to flush runtime journal.
Jan 30 22:42:30 np0005603435 systemd[1]: Mounted FUSE Control File System.
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Load/Save OS Random Seed.
Jan 30 22:42:30 np0005603435 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Coldplug All udev Devices.
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Create System Users.
Jan 30 22:42:30 np0005603435 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 30 22:42:30 np0005603435 systemd[1]: Reached target Preparation for Local File Systems.
Jan 30 22:42:30 np0005603435 systemd[1]: Reached target Local File Systems.
Jan 30 22:42:30 np0005603435 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 30 22:42:30 np0005603435 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 30 22:42:30 np0005603435 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 22:42:30 np0005603435 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 30 22:42:30 np0005603435 systemd[1]: Starting Automatic Boot Loader Update...
Jan 30 22:42:30 np0005603435 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 30 22:42:30 np0005603435 systemd[1]: Starting Create Volatile Files and Directories...
Jan 30 22:42:30 np0005603435 bootctl[698]: Couldn't find EFI system partition, skipping.
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Automatic Boot Loader Update.
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Create Volatile Files and Directories.
Jan 30 22:42:30 np0005603435 systemd[1]: Starting Security Auditing Service...
Jan 30 22:42:30 np0005603435 systemd[1]: Starting RPC Bind...
Jan 30 22:42:30 np0005603435 systemd[1]: Starting Rebuild Journal Catalog...
Jan 30 22:42:30 np0005603435 auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 30 22:42:30 np0005603435 auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 30 22:42:30 np0005603435 systemd[1]: Started RPC Bind.
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Rebuild Journal Catalog.
Jan 30 22:42:30 np0005603435 augenrules[709]: /sbin/augenrules: No change
Jan 30 22:42:30 np0005603435 augenrules[725]: No rules
Jan 30 22:42:30 np0005603435 augenrules[725]: enabled 1
Jan 30 22:42:30 np0005603435 augenrules[725]: failure 1
Jan 30 22:42:30 np0005603435 augenrules[725]: pid 704
Jan 30 22:42:30 np0005603435 augenrules[725]: rate_limit 0
Jan 30 22:42:30 np0005603435 augenrules[725]: backlog_limit 8192
Jan 30 22:42:30 np0005603435 augenrules[725]: lost 0
Jan 30 22:42:30 np0005603435 augenrules[725]: backlog 0
Jan 30 22:42:30 np0005603435 augenrules[725]: backlog_wait_time 60000
Jan 30 22:42:30 np0005603435 augenrules[725]: backlog_wait_time_actual 0
Jan 30 22:42:30 np0005603435 augenrules[725]: enabled 1
Jan 30 22:42:30 np0005603435 augenrules[725]: failure 1
Jan 30 22:42:30 np0005603435 augenrules[725]: pid 704
Jan 30 22:42:30 np0005603435 augenrules[725]: rate_limit 0
Jan 30 22:42:30 np0005603435 augenrules[725]: backlog_limit 8192
Jan 30 22:42:30 np0005603435 augenrules[725]: lost 0
Jan 30 22:42:30 np0005603435 augenrules[725]: backlog 3
Jan 30 22:42:30 np0005603435 augenrules[725]: backlog_wait_time 60000
Jan 30 22:42:30 np0005603435 augenrules[725]: backlog_wait_time_actual 0
Jan 30 22:42:30 np0005603435 augenrules[725]: enabled 1
Jan 30 22:42:30 np0005603435 augenrules[725]: failure 1
Jan 30 22:42:30 np0005603435 augenrules[725]: pid 704
Jan 30 22:42:30 np0005603435 augenrules[725]: rate_limit 0
Jan 30 22:42:30 np0005603435 augenrules[725]: backlog_limit 8192
Jan 30 22:42:30 np0005603435 augenrules[725]: lost 0
Jan 30 22:42:30 np0005603435 augenrules[725]: backlog 3
Jan 30 22:42:30 np0005603435 augenrules[725]: backlog_wait_time 60000
Jan 30 22:42:30 np0005603435 augenrules[725]: backlog_wait_time_actual 0
Jan 30 22:42:30 np0005603435 systemd[1]: Started Security Auditing Service.
Jan 30 22:42:30 np0005603435 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 30 22:42:30 np0005603435 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 30 22:42:31 np0005603435 systemd[1]: Finished Rebuild Hardware Database.
Jan 30 22:42:31 np0005603435 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 30 22:42:31 np0005603435 systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Jan 30 22:42:31 np0005603435 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 30 22:42:31 np0005603435 systemd[1]: Starting Load Kernel Module configfs...
Jan 30 22:42:31 np0005603435 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 30 22:42:31 np0005603435 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 30 22:42:31 np0005603435 systemd[1]: Starting Update is Completed...
Jan 30 22:42:31 np0005603435 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 22:42:31 np0005603435 systemd[1]: Finished Load Kernel Module configfs.
Jan 30 22:42:31 np0005603435 systemd[1]: Finished Update is Completed.
Jan 30 22:42:31 np0005603435 systemd[1]: Reached target System Initialization.
Jan 30 22:42:31 np0005603435 systemd[1]: Started dnf makecache --timer.
Jan 30 22:42:31 np0005603435 systemd[1]: Started Daily rotation of log files.
Jan 30 22:42:31 np0005603435 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 30 22:42:31 np0005603435 systemd[1]: Reached target Timer Units.
Jan 30 22:42:31 np0005603435 systemd-udevd[745]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 22:42:31 np0005603435 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 30 22:42:31 np0005603435 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 30 22:42:31 np0005603435 systemd[1]: Reached target Socket Units.
Jan 30 22:42:31 np0005603435 systemd[1]: Starting D-Bus System Message Bus...
Jan 30 22:42:31 np0005603435 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 30 22:42:31 np0005603435 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 30 22:42:31 np0005603435 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 30 22:42:31 np0005603435 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 30 22:42:31 np0005603435 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 30 22:42:31 np0005603435 systemd[1]: Started D-Bus System Message Bus.
Jan 30 22:42:31 np0005603435 systemd[1]: Reached target Basic System.
Jan 30 22:42:31 np0005603435 dbus-broker-lau[774]: Ready
Jan 30 22:42:31 np0005603435 systemd[1]: Starting NTP client/server...
Jan 30 22:42:31 np0005603435 kernel: kvm_amd: TSC scaling supported
Jan 30 22:42:31 np0005603435 kernel: kvm_amd: Nested Virtualization enabled
Jan 30 22:42:31 np0005603435 kernel: kvm_amd: Nested Paging enabled
Jan 30 22:42:31 np0005603435 kernel: kvm_amd: LBR virtualization supported
Jan 30 22:42:31 np0005603435 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 30 22:42:31 np0005603435 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 30 22:42:31 np0005603435 kernel: Console: switching to colour dummy device 80x25
Jan 30 22:42:31 np0005603435 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 30 22:42:31 np0005603435 kernel: [drm] features: -context_init
Jan 30 22:42:31 np0005603435 kernel: [drm] number of scanouts: 1
Jan 30 22:42:31 np0005603435 kernel: [drm] number of cap sets: 0
Jan 30 22:42:31 np0005603435 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 30 22:42:31 np0005603435 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 30 22:42:31 np0005603435 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 22:42:31 np0005603435 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 30 22:42:31 np0005603435 chronyd[794]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 30 22:42:31 np0005603435 chronyd[794]: Loaded 0 symmetric keys
Jan 30 22:42:31 np0005603435 chronyd[794]: Using right/UTC timezone to obtain leap second data
Jan 30 22:42:31 np0005603435 chronyd[794]: Loaded seccomp filter (level 2)
Jan 30 22:42:31 np0005603435 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 30 22:42:31 np0005603435 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 30 22:42:31 np0005603435 systemd[1]: Starting IPv4 firewall with iptables...
Jan 30 22:42:31 np0005603435 systemd[1]: Started irqbalance daemon.
Jan 30 22:42:31 np0005603435 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 30 22:42:31 np0005603435 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 30 22:42:31 np0005603435 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 30 22:42:31 np0005603435 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 30 22:42:31 np0005603435 systemd[1]: Reached target sshd-keygen.target.
Jan 30 22:42:31 np0005603435 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 30 22:42:31 np0005603435 systemd[1]: Reached target User and Group Name Lookups.
Jan 30 22:42:31 np0005603435 systemd[1]: Starting User Login Management...
Jan 30 22:42:31 np0005603435 systemd[1]: Started NTP client/server.
Jan 30 22:42:31 np0005603435 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 30 22:42:31 np0005603435 systemd-logind[816]: New seat seat0.
Jan 30 22:42:31 np0005603435 systemd-logind[816]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 30 22:42:31 np0005603435 systemd-logind[816]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 30 22:42:31 np0005603435 systemd[1]: Started User Login Management.
Jan 30 22:42:31 np0005603435 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 30 22:42:31 np0005603435 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 30 22:42:31 np0005603435 iptables.init[800]: iptables: Applying firewall rules: [  OK  ]
Jan 30 22:42:31 np0005603435 systemd[1]: Finished IPv4 firewall with iptables.
Jan 30 22:42:32 np0005603435 cloud-init[841]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sat, 31 Jan 2026 03:42:32 +0000. Up 6.52 seconds.
Jan 30 22:42:32 np0005603435 systemd[1]: run-cloud\x2dinit-tmp-tmpjnpwtctd.mount: Deactivated successfully.
Jan 30 22:42:32 np0005603435 systemd[1]: Starting Hostname Service...
Jan 30 22:42:32 np0005603435 systemd[1]: Started Hostname Service.
Jan 30 22:42:32 np0005603435 systemd-hostnamed[855]: Hostname set to <np0005603435.novalocal> (static)
Jan 30 22:42:32 np0005603435 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 30 22:42:32 np0005603435 systemd[1]: Reached target Preparation for Network.
Jan 30 22:42:32 np0005603435 systemd[1]: Starting Network Manager...
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.6798] NetworkManager (version 1.54.3-2.el9) is starting... (boot:c3572006-004a-46ef-8549-18148190fe4e)
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.6805] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.6995] manager[0x5632df150000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7049] hostname: hostname: using hostnamed
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7050] hostname: static hostname changed from (none) to "np0005603435.novalocal"
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7056] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7206] manager[0x5632df150000]: rfkill: Wi-Fi hardware radio set enabled
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7207] manager[0x5632df150000]: rfkill: WWAN hardware radio set enabled
Jan 30 22:42:32 np0005603435 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7339] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7340] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7341] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7342] manager: Networking is enabled by state file
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7346] settings: Loaded settings plugin: keyfile (internal)
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7387] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7424] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7445] dhcp: init: Using DHCP client 'internal'
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7451] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7476] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7495] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7513] device (lo): Activation: starting connection 'lo' (8771c680-8833-490d-9239-e3b4dbbb566f)
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7526] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7532] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7574] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7584] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 30 22:42:32 np0005603435 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7621] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 30 22:42:32 np0005603435 systemd[1]: Started Network Manager.
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7645] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 30 22:42:32 np0005603435 systemd[1]: Reached target Network.
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7673] device (eth0): carrier: link connected
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7681] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7692] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7702] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 30 22:42:32 np0005603435 systemd[1]: Starting Network Manager Wait Online...
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7710] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7711] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7716] manager: NetworkManager state is now CONNECTING
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7721] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7737] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7743] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 30 22:42:32 np0005603435 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 30 22:42:32 np0005603435 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7804] dhcp4 (eth0): state changed new lease, address=38.102.83.94
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7815] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7847] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7877] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7881] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7889] device (lo): Activation: successful, device activated.
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7920] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7923] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7929] manager: NetworkManager state is now CONNECTED_SITE
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7934] device (eth0): Activation: successful, device activated.
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7944] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 30 22:42:32 np0005603435 NetworkManager[859]: <info>  [1769830952.7950] manager: startup complete
Jan 30 22:42:32 np0005603435 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 30 22:42:32 np0005603435 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 30 22:42:32 np0005603435 systemd[1]: Reached target NFS client services.
Jan 30 22:42:32 np0005603435 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 30 22:42:32 np0005603435 systemd[1]: Reached target Remote File Systems.
Jan 30 22:42:32 np0005603435 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 30 22:42:32 np0005603435 systemd[1]: Finished Network Manager Wait Online.
Jan 30 22:42:32 np0005603435 systemd[1]: Starting Cloud-init: Network Stage...
Jan 30 22:42:33 np0005603435 cloud-init[924]: Cloud-init v. 24.4-8.el9 running 'init' at Sat, 31 Jan 2026 03:42:33 +0000. Up 7.55 seconds.
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: |  eth0  | True |         38.102.83.94         | 255.255.255.0 | global | fa:16:3e:10:af:9b |
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fe10:af9b/64 |       .       |  link  | fa:16:3e:10:af:9b |
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 30 22:42:33 np0005603435 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 30 22:42:34 np0005603435 cloud-init[924]: Generating public/private rsa key pair.
Jan 30 22:42:34 np0005603435 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 30 22:42:34 np0005603435 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 30 22:42:34 np0005603435 cloud-init[924]: The key fingerprint is:
Jan 30 22:42:34 np0005603435 cloud-init[924]: SHA256:jSdMpV2mIUEflj59SZ8uUyW/WbSQPpnO/lZkn3bzhbI root@np0005603435.novalocal
Jan 30 22:42:34 np0005603435 cloud-init[924]: The key's randomart image is:
Jan 30 22:42:34 np0005603435 cloud-init[924]: +---[RSA 3072]----+
Jan 30 22:42:34 np0005603435 cloud-init[924]: |       .+.=.o .  |
Jan 30 22:42:34 np0005603435 cloud-init[924]: |         B.* oo o|
Jan 30 22:42:34 np0005603435 cloud-init[924]: |        o.+...+*+|
Jan 30 22:42:34 np0005603435 cloud-init[924]: |       o oo .=o+*|
Jan 30 22:42:34 np0005603435 cloud-init[924]: |        S o.o.++*|
Jan 30 22:42:34 np0005603435 cloud-init[924]: |         o  .=.B=|
Jan 30 22:42:34 np0005603435 cloud-init[924]: |            .o+ *|
Jan 30 22:42:34 np0005603435 cloud-init[924]: |            E. ..|
Jan 30 22:42:34 np0005603435 cloud-init[924]: |              o. |
Jan 30 22:42:34 np0005603435 cloud-init[924]: +----[SHA256]-----+
Jan 30 22:42:34 np0005603435 cloud-init[924]: Generating public/private ecdsa key pair.
Jan 30 22:42:34 np0005603435 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 30 22:42:34 np0005603435 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 30 22:42:34 np0005603435 cloud-init[924]: The key fingerprint is:
Jan 30 22:42:34 np0005603435 cloud-init[924]: SHA256:lt/Cwd7A13nSWFjf2QVjMNZHBD/uZxlBzIy1MwTD3oQ root@np0005603435.novalocal
Jan 30 22:42:34 np0005603435 cloud-init[924]: The key's randomart image is:
Jan 30 22:42:34 np0005603435 cloud-init[924]: +---[ECDSA 256]---+
Jan 30 22:42:34 np0005603435 cloud-init[924]: |            ==&O=|
Jan 30 22:42:34 np0005603435 cloud-init[924]: |           . E=XO|
Jan 30 22:42:34 np0005603435 cloud-init[924]: |            . +O*|
Jan 30 22:42:34 np0005603435 cloud-init[924]: |         +   o.*=|
Jan 30 22:42:34 np0005603435 cloud-init[924]: |        S = . =oo|
Jan 30 22:42:34 np0005603435 cloud-init[924]: |       . + *  .oo|
Jan 30 22:42:34 np0005603435 cloud-init[924]: |          = o  oo|
Jan 30 22:42:34 np0005603435 cloud-init[924]: |           .   ..|
Jan 30 22:42:34 np0005603435 cloud-init[924]: |                 |
Jan 30 22:42:34 np0005603435 cloud-init[924]: +----[SHA256]-----+
Jan 30 22:42:34 np0005603435 cloud-init[924]: Generating public/private ed25519 key pair.
Jan 30 22:42:34 np0005603435 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 30 22:42:34 np0005603435 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 30 22:42:34 np0005603435 cloud-init[924]: The key fingerprint is:
Jan 30 22:42:34 np0005603435 cloud-init[924]: SHA256:mcCjYLu9lV/bD/h+2thCAmrivRKG3Qj3L/2pYc9hTjQ root@np0005603435.novalocal
Jan 30 22:42:34 np0005603435 cloud-init[924]: The key's randomart image is:
Jan 30 22:42:34 np0005603435 cloud-init[924]: +--[ED25519 256]--+
Jan 30 22:42:34 np0005603435 cloud-init[924]: |                 |
Jan 30 22:42:34 np0005603435 cloud-init[924]: |     .           |
Jan 30 22:42:34 np0005603435 cloud-init[924]: |  o   +          |
Jan 30 22:42:34 np0005603435 cloud-init[924]: | ..o.. o.o       |
Jan 30 22:42:34 np0005603435 cloud-init[924]: |  .=.+ .S.E      |
Jan 30 22:42:34 np0005603435 cloud-init[924]: |  .o* =. ..o.    |
Jan 30 22:42:34 np0005603435 cloud-init[924]: |  .o.=ooo *o.    |
Jan 30 22:42:34 np0005603435 cloud-init[924]: |    oooooO *.=.  |
Jan 30 22:42:34 np0005603435 cloud-init[924]: |    ...oooB.*=+  |
Jan 30 22:42:34 np0005603435 cloud-init[924]: +----[SHA256]-----+
Jan 30 22:42:34 np0005603435 systemd[1]: Finished Cloud-init: Network Stage.
Jan 30 22:42:34 np0005603435 systemd[1]: Reached target Cloud-config availability.
Jan 30 22:42:34 np0005603435 sm-notify[1006]: Version 2.5.4 starting
Jan 30 22:42:34 np0005603435 systemd[1]: Reached target Network is Online.
Jan 30 22:42:34 np0005603435 systemd[1]: Starting Cloud-init: Config Stage...
Jan 30 22:42:34 np0005603435 systemd[1]: Starting Crash recovery kernel arming...
Jan 30 22:42:34 np0005603435 systemd[1]: Starting Notify NFS peers of a restart...
Jan 30 22:42:34 np0005603435 systemd[1]: Starting System Logging Service...
Jan 30 22:42:34 np0005603435 systemd[1]: Starting OpenSSH server daemon...
Jan 30 22:42:34 np0005603435 systemd[1]: Starting Permit User Sessions...
Jan 30 22:42:34 np0005603435 systemd[1]: Started Notify NFS peers of a restart.
Jan 30 22:42:34 np0005603435 systemd[1]: Finished Permit User Sessions.
Jan 30 22:42:34 np0005603435 systemd[1]: Started Command Scheduler.
Jan 30 22:42:34 np0005603435 systemd[1]: Started Getty on tty1.
Jan 30 22:42:34 np0005603435 systemd[1]: Started Serial Getty on ttyS0.
Jan 30 22:42:34 np0005603435 systemd[1]: Reached target Login Prompts.
Jan 30 22:42:34 np0005603435 systemd[1]: Started OpenSSH server daemon.
Jan 30 22:42:34 np0005603435 rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Jan 30 22:42:34 np0005603435 systemd[1]: Started System Logging Service.
Jan 30 22:42:34 np0005603435 rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 30 22:42:34 np0005603435 systemd[1]: Reached target Multi-User System.
Jan 30 22:42:34 np0005603435 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 30 22:42:34 np0005603435 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 30 22:42:34 np0005603435 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 30 22:42:34 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 30 22:42:34 np0005603435 kdumpctl[1019]: kdump: No kdump initial ramdisk found.
Jan 30 22:42:34 np0005603435 kdumpctl[1019]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Jan 30 22:42:34 np0005603435 cloud-init[1109]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sat, 31 Jan 2026 03:42:34 +0000. Up 9.39 seconds.
Jan 30 22:42:35 np0005603435 systemd[1]: Finished Cloud-init: Config Stage.
Jan 30 22:42:35 np0005603435 systemd[1]: Starting Cloud-init: Final Stage...
Jan 30 22:42:35 np0005603435 dracut[1267]: dracut-057-102.git20250818.el9
Jan 30 22:42:35 np0005603435 cloud-init[1287]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sat, 31 Jan 2026 03:42:35 +0000. Up 9.77 seconds.
Jan 30 22:42:35 np0005603435 dracut[1269]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Jan 30 22:42:35 np0005603435 cloud-init[1316]: #############################################################
Jan 30 22:42:35 np0005603435 cloud-init[1320]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 30 22:42:35 np0005603435 cloud-init[1330]: 256 SHA256:lt/Cwd7A13nSWFjf2QVjMNZHBD/uZxlBzIy1MwTD3oQ root@np0005603435.novalocal (ECDSA)
Jan 30 22:42:35 np0005603435 cloud-init[1337]: 256 SHA256:mcCjYLu9lV/bD/h+2thCAmrivRKG3Qj3L/2pYc9hTjQ root@np0005603435.novalocal (ED25519)
Jan 30 22:42:35 np0005603435 cloud-init[1342]: 3072 SHA256:jSdMpV2mIUEflj59SZ8uUyW/WbSQPpnO/lZkn3bzhbI root@np0005603435.novalocal (RSA)
Jan 30 22:42:35 np0005603435 cloud-init[1343]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 30 22:42:35 np0005603435 cloud-init[1347]: #############################################################
Jan 30 22:42:35 np0005603435 cloud-init[1287]: Cloud-init v. 24.4-8.el9 finished at Sat, 31 Jan 2026 03:42:35 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.94 seconds
Jan 30 22:42:35 np0005603435 systemd[1]: Finished Cloud-init: Final Stage.
Jan 30 22:42:35 np0005603435 systemd[1]: Reached target Cloud-init target.
Jan 30 22:42:35 np0005603435 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 30 22:42:35 np0005603435 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 30 22:42:35 np0005603435 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 30 22:42:35 np0005603435 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 30 22:42:35 np0005603435 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 30 22:42:35 np0005603435 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 30 22:42:35 np0005603435 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 30 22:42:35 np0005603435 dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: memstrack is not available
Jan 30 22:42:36 np0005603435 dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 30 22:42:36 np0005603435 dracut[1269]: memstrack is not available
Jan 30 22:42:36 np0005603435 dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 30 22:42:37 np0005603435 dracut[1269]: *** Including module: systemd ***
Jan 30 22:42:37 np0005603435 dracut[1269]: *** Including module: fips ***
Jan 30 22:42:37 np0005603435 dracut[1269]: *** Including module: systemd-initrd ***
Jan 30 22:42:37 np0005603435 dracut[1269]: *** Including module: i18n ***
Jan 30 22:42:37 np0005603435 chronyd[794]: Selected source 206.108.0.132 (2.centos.pool.ntp.org)
Jan 30 22:42:37 np0005603435 chronyd[794]: System clock TAI offset set to 37 seconds
Jan 30 22:42:37 np0005603435 dracut[1269]: *** Including module: drm ***
Jan 30 22:42:38 np0005603435 dracut[1269]: *** Including module: prefixdevname ***
Jan 30 22:42:38 np0005603435 dracut[1269]: *** Including module: kernel-modules ***
Jan 30 22:42:38 np0005603435 kernel: block vda: the capability attribute has been deprecated.
Jan 30 22:42:38 np0005603435 dracut[1269]: *** Including module: kernel-modules-extra ***
Jan 30 22:42:38 np0005603435 dracut[1269]: *** Including module: qemu ***
Jan 30 22:42:38 np0005603435 dracut[1269]: *** Including module: fstab-sys ***
Jan 30 22:42:38 np0005603435 dracut[1269]: *** Including module: rootfs-block ***
Jan 30 22:42:38 np0005603435 dracut[1269]: *** Including module: terminfo ***
Jan 30 22:42:38 np0005603435 dracut[1269]: *** Including module: udev-rules ***
Jan 30 22:42:39 np0005603435 dracut[1269]: Skipping udev rule: 91-permissions.rules
Jan 30 22:42:39 np0005603435 dracut[1269]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 30 22:42:39 np0005603435 dracut[1269]: *** Including module: virtiofs ***
Jan 30 22:42:39 np0005603435 dracut[1269]: *** Including module: dracut-systemd ***
Jan 30 22:42:39 np0005603435 dracut[1269]: *** Including module: usrmount ***
Jan 30 22:42:39 np0005603435 dracut[1269]: *** Including module: base ***
Jan 30 22:42:39 np0005603435 dracut[1269]: *** Including module: fs-lib ***
Jan 30 22:42:39 np0005603435 dracut[1269]: *** Including module: kdumpbase ***
Jan 30 22:42:40 np0005603435 dracut[1269]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 30 22:42:40 np0005603435 dracut[1269]:  microcode_ctl module: mangling fw_dir
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: configuration "intel" is ignored
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 30 22:42:40 np0005603435 dracut[1269]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 30 22:42:40 np0005603435 dracut[1269]: *** Including module: openssl ***
Jan 30 22:42:40 np0005603435 dracut[1269]: *** Including module: shutdown ***
Jan 30 22:42:40 np0005603435 dracut[1269]: *** Including module: squash ***
Jan 30 22:42:40 np0005603435 dracut[1269]: *** Including modules done ***
Jan 30 22:42:40 np0005603435 dracut[1269]: *** Installing kernel module dependencies ***
Jan 30 22:42:41 np0005603435 dracut[1269]: *** Installing kernel module dependencies done ***
Jan 30 22:42:41 np0005603435 dracut[1269]: *** Resolving executable dependencies ***
Jan 30 22:42:41 np0005603435 irqbalance[801]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 30 22:42:41 np0005603435 irqbalance[801]: IRQ 25 affinity is now unmanaged
Jan 30 22:42:41 np0005603435 irqbalance[801]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 30 22:42:41 np0005603435 irqbalance[801]: IRQ 31 affinity is now unmanaged
Jan 30 22:42:41 np0005603435 irqbalance[801]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 30 22:42:41 np0005603435 irqbalance[801]: IRQ 28 affinity is now unmanaged
Jan 30 22:42:41 np0005603435 irqbalance[801]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 30 22:42:41 np0005603435 irqbalance[801]: IRQ 32 affinity is now unmanaged
Jan 30 22:42:41 np0005603435 irqbalance[801]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 30 22:42:41 np0005603435 irqbalance[801]: IRQ 30 affinity is now unmanaged
Jan 30 22:42:41 np0005603435 irqbalance[801]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 30 22:42:41 np0005603435 irqbalance[801]: IRQ 29 affinity is now unmanaged
Jan 30 22:42:42 np0005603435 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 30 22:42:43 np0005603435 dracut[1269]: *** Resolving executable dependencies done ***
Jan 30 22:42:43 np0005603435 dracut[1269]: *** Generating early-microcode cpio image ***
Jan 30 22:42:43 np0005603435 dracut[1269]: *** Store current command line parameters ***
Jan 30 22:42:43 np0005603435 dracut[1269]: Stored kernel commandline:
Jan 30 22:42:43 np0005603435 dracut[1269]: No dracut internal kernel commandline stored in the initramfs
Jan 30 22:42:43 np0005603435 dracut[1269]: *** Install squash loader ***
Jan 30 22:42:44 np0005603435 dracut[1269]: *** Squashing the files inside the initramfs ***
Jan 30 22:42:45 np0005603435 dracut[1269]: *** Squashing the files inside the initramfs done ***
Jan 30 22:42:45 np0005603435 dracut[1269]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Jan 30 22:42:45 np0005603435 dracut[1269]: *** Hardlinking files ***
Jan 30 22:42:45 np0005603435 dracut[1269]: *** Hardlinking files done ***
Jan 30 22:42:45 np0005603435 dracut[1269]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Jan 30 22:42:46 np0005603435 kdumpctl[1019]: kdump: kexec: loaded kdump kernel
Jan 30 22:42:46 np0005603435 kdumpctl[1019]: kdump: Starting kdump: [OK]
Jan 30 22:42:46 np0005603435 systemd[1]: Finished Crash recovery kernel arming.
Jan 30 22:42:46 np0005603435 systemd[1]: Startup finished in 1.359s (kernel) + 2.658s (initrd) + 16.517s (userspace) = 20.535s.
Jan 30 22:42:57 np0005603435 systemd[1]: Created slice User Slice of UID 1000.
Jan 30 22:42:57 np0005603435 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 30 22:42:57 np0005603435 systemd-logind[816]: New session 1 of user zuul.
Jan 30 22:42:57 np0005603435 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 30 22:42:57 np0005603435 systemd[1]: Starting User Manager for UID 1000...
Jan 30 22:42:57 np0005603435 systemd[4311]: Queued start job for default target Main User Target.
Jan 30 22:42:57 np0005603435 systemd[4311]: Created slice User Application Slice.
Jan 30 22:42:57 np0005603435 systemd[4311]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 30 22:42:57 np0005603435 systemd[4311]: Started Daily Cleanup of User's Temporary Directories.
Jan 30 22:42:57 np0005603435 systemd[4311]: Reached target Paths.
Jan 30 22:42:57 np0005603435 systemd[4311]: Reached target Timers.
Jan 30 22:42:57 np0005603435 systemd[4311]: Starting D-Bus User Message Bus Socket...
Jan 30 22:42:57 np0005603435 systemd[4311]: Starting Create User's Volatile Files and Directories...
Jan 30 22:42:57 np0005603435 systemd[4311]: Finished Create User's Volatile Files and Directories.
Jan 30 22:42:57 np0005603435 systemd[4311]: Listening on D-Bus User Message Bus Socket.
Jan 30 22:42:57 np0005603435 systemd[4311]: Reached target Sockets.
Jan 30 22:42:57 np0005603435 systemd[4311]: Reached target Basic System.
Jan 30 22:42:57 np0005603435 systemd[1]: Started User Manager for UID 1000.
Jan 30 22:42:57 np0005603435 systemd[4311]: Reached target Main User Target.
Jan 30 22:42:57 np0005603435 systemd[4311]: Startup finished in 140ms.
Jan 30 22:42:57 np0005603435 systemd[1]: Started Session 1 of User zuul.
Jan 30 22:42:58 np0005603435 python3[4393]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 22:43:00 np0005603435 python3[4421]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 22:43:02 np0005603435 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 30 22:43:07 np0005603435 python3[4481]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 22:43:07 np0005603435 python3[4521]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 30 22:43:09 np0005603435 python3[4547]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+HzzOCY1XJYcq7v/zmrHU5p2YireCBW0qpNiom9+TcYxBnCz5G1M3Qqi3BnLsUDdYNMG3D6qrZP9BpNqgonXhHAUwON9kqHHpcLd5028BHHZvo2VyiDoC6ZygGMzw4dLIN7flOkjlZn4UAupn58Je70Lz0dcb8jyQInsmSReFrcWUkQpyV+AoyLdDwDNogIIamT1IFSJWrJaBB//FsOeVyLk9cpptW/wcKY/Ef2BuU4pJAkWR8HDQ/J1omazdV4N9bpqsBFip9fxYvYns+EBaLGXBgj8UAcmd5PZEysE1BraFr236b1rLAtMVRILu66t5K1eKv+CXMx5DGpgV+OctJz0cH9uecIm0T2P5PDcX0otq+eBYamGYjZikjgJPCAbUG1bl+TvCvJ4oFguzBCSfg+3jjYJ798Pqmj3y4ZNm6DBoKMTR0OWXaHWfLL1R9SJNnqR56h5oEjz4mO9FziNUo3oMAdMFc0sHOg1d1nfsXx5tBaeWaBMimL1DBiBmJA8= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:10 np0005603435 python3[4571]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:10 np0005603435 python3[4670]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:43:11 np0005603435 python3[4741]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769830990.4976668-207-93341291652994/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=361ad19aa1ea4bffacf9dc9260686291_id_rsa follow=False checksum=131590d5662aa8a2f04e1148978bd1c924957356 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:11 np0005603435 python3[4864]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:43:12 np0005603435 python3[4935]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769830991.4918969-240-20595414042470/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=361ad19aa1ea4bffacf9dc9260686291_id_rsa.pub follow=False checksum=f63eefd4be27b62463bcee6f37ad9e643fe61b65 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:13 np0005603435 python3[4983]: ansible-ping Invoked with data=pong
Jan 30 22:43:14 np0005603435 python3[5007]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 22:43:16 np0005603435 python3[5065]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 30 22:43:17 np0005603435 python3[5097]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:17 np0005603435 python3[5121]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:17 np0005603435 python3[5145]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:18 np0005603435 python3[5169]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:18 np0005603435 python3[5193]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:18 np0005603435 python3[5217]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:20 np0005603435 python3[5243]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:20 np0005603435 python3[5321]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:43:21 np0005603435 python3[5394]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769831000.4284127-21-135636067793539/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:21 np0005603435 python3[5442]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:22 np0005603435 python3[5466]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:22 np0005603435 python3[5490]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:22 np0005603435 python3[5514]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:22 np0005603435 python3[5538]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:23 np0005603435 python3[5562]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:23 np0005603435 python3[5586]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:23 np0005603435 python3[5610]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:24 np0005603435 python3[5634]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:24 np0005603435 python3[5658]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:24 np0005603435 python3[5682]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:24 np0005603435 python3[5706]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:25 np0005603435 python3[5730]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:25 np0005603435 python3[5754]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:25 np0005603435 python3[5778]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:25 np0005603435 python3[5802]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:26 np0005603435 python3[5826]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:26 np0005603435 python3[5850]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:26 np0005603435 python3[5874]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:27 np0005603435 python3[5898]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:27 np0005603435 python3[5922]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:27 np0005603435 python3[5946]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:27 np0005603435 python3[5970]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:28 np0005603435 python3[5994]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:28 np0005603435 python3[6018]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:28 np0005603435 python3[6042]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:43:31 np0005603435 irqbalance[801]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 30 22:43:31 np0005603435 irqbalance[801]: IRQ 26 affinity is now unmanaged
Jan 30 22:43:31 np0005603435 python3[6068]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 30 22:43:31 np0005603435 systemd[1]: Starting Time & Date Service...
Jan 30 22:43:31 np0005603435 systemd[1]: Started Time & Date Service.
Jan 30 22:43:31 np0005603435 systemd-timedated[6070]: Changed time zone to 'UTC' (UTC).
Jan 30 22:43:32 np0005603435 python3[6099]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:32 np0005603435 python3[6175]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:43:33 np0005603435 python3[6246]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769831012.5111322-153-91262838724346/source _original_basename=tmp6_ry9tam follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:33 np0005603435 python3[6346]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:43:34 np0005603435 python3[6417]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769831013.4791152-183-43389218245436/source _original_basename=tmp6gqa1_pw follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:34 np0005603435 python3[6519]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:43:35 np0005603435 python3[6592]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769831014.6613455-231-82957618866969/source _original_basename=tmpx5ifqb5m follow=False checksum=de28d19618025176a7a65eba0e40c742fe7af9f4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:35 np0005603435 python3[6640]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 22:43:36 np0005603435 python3[6666]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 22:43:36 np0005603435 python3[6746]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:43:37 np0005603435 python3[6819]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769831016.4821463-273-178670523700327/source _original_basename=tmp0eqraowv follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:37 np0005603435 python3[6870]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-b4e6-e9fc-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 22:43:38 np0005603435 python3[6898]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-b4e6-e9fc-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 30 22:43:39 np0005603435 python3[6927]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:43:57 np0005603435 python3[6953]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:44:01 np0005603435 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 30 22:44:32 np0005603435 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 30 22:44:32 np0005603435 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 30 22:44:32 np0005603435 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 30 22:44:32 np0005603435 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 30 22:44:32 np0005603435 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 30 22:44:32 np0005603435 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 30 22:44:32 np0005603435 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 30 22:44:32 np0005603435 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 30 22:44:32 np0005603435 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 30 22:44:32 np0005603435 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 30 22:44:32 np0005603435 NetworkManager[859]: <info>  [1769831072.3059] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 30 22:44:32 np0005603435 systemd-udevd[6956]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 22:44:32 np0005603435 NetworkManager[859]: <info>  [1769831072.3282] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 22:44:32 np0005603435 NetworkManager[859]: <info>  [1769831072.3307] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 30 22:44:32 np0005603435 NetworkManager[859]: <info>  [1769831072.3309] device (eth1): carrier: link connected
Jan 30 22:44:32 np0005603435 NetworkManager[859]: <info>  [1769831072.3311] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 30 22:44:32 np0005603435 NetworkManager[859]: <info>  [1769831072.3316] policy: auto-activating connection 'Wired connection 1' (38dcc20a-970f-3b4f-84d1-230174caf167)
Jan 30 22:44:32 np0005603435 NetworkManager[859]: <info>  [1769831072.3321] device (eth1): Activation: starting connection 'Wired connection 1' (38dcc20a-970f-3b4f-84d1-230174caf167)
Jan 30 22:44:32 np0005603435 NetworkManager[859]: <info>  [1769831072.3322] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 30 22:44:32 np0005603435 NetworkManager[859]: <info>  [1769831072.3325] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 30 22:44:32 np0005603435 NetworkManager[859]: <info>  [1769831072.3329] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 30 22:44:32 np0005603435 NetworkManager[859]: <info>  [1769831072.3333] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 30 22:44:32 np0005603435 python3[6983]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-318f-93e3-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 22:44:42 np0005603435 python3[7063]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:44:43 np0005603435 python3[7136]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769831082.6633353-102-61648863206295/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=42f1cdaa45bcffc09060ecdb500c1b950acc5216 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:44:44 np0005603435 python3[7186]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 22:44:44 np0005603435 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 30 22:44:44 np0005603435 systemd[1]: Stopped Network Manager Wait Online.
Jan 30 22:44:44 np0005603435 systemd[1]: Stopping Network Manager Wait Online...
Jan 30 22:44:44 np0005603435 NetworkManager[859]: <info>  [1769831084.2117] caught SIGTERM, shutting down normally.
Jan 30 22:44:44 np0005603435 systemd[1]: Stopping Network Manager...
Jan 30 22:44:44 np0005603435 NetworkManager[859]: <info>  [1769831084.2127] dhcp4 (eth0): canceled DHCP transaction
Jan 30 22:44:44 np0005603435 NetworkManager[859]: <info>  [1769831084.2127] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 30 22:44:44 np0005603435 NetworkManager[859]: <info>  [1769831084.2127] dhcp4 (eth0): state changed no lease
Jan 30 22:44:44 np0005603435 NetworkManager[859]: <info>  [1769831084.2130] manager: NetworkManager state is now CONNECTING
Jan 30 22:44:44 np0005603435 NetworkManager[859]: <info>  [1769831084.2322] dhcp4 (eth1): canceled DHCP transaction
Jan 30 22:44:44 np0005603435 NetworkManager[859]: <info>  [1769831084.2323] dhcp4 (eth1): state changed no lease
Jan 30 22:44:44 np0005603435 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 30 22:44:44 np0005603435 NetworkManager[859]: <info>  [1769831084.2369] exiting (success)
Jan 30 22:44:44 np0005603435 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 30 22:44:44 np0005603435 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 30 22:44:44 np0005603435 systemd[1]: Stopped Network Manager.
Jan 30 22:44:44 np0005603435 systemd[1]: NetworkManager.service: Consumed 1.051s CPU time, 10.0M memory peak.
Jan 30 22:44:44 np0005603435 systemd[1]: Starting Network Manager...
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.3006] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:c3572006-004a-46ef-8549-18148190fe4e)
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.3010] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.3078] manager[0x55c5a0c84000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 30 22:44:44 np0005603435 systemd[1]: Starting Hostname Service...
Jan 30 22:44:44 np0005603435 systemd[1]: Started Hostname Service.
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.3981] hostname: hostname: using hostnamed
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.3985] hostname: static hostname changed from (none) to "np0005603435.novalocal"
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.3992] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.3999] manager[0x55c5a0c84000]: rfkill: Wi-Fi hardware radio set enabled
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4000] manager[0x55c5a0c84000]: rfkill: WWAN hardware radio set enabled
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4044] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4045] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4046] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4047] manager: Networking is enabled by state file
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4051] settings: Loaded settings plugin: keyfile (internal)
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4057] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4095] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4110] dhcp: init: Using DHCP client 'internal'
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4115] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4122] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4132] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4146] device (lo): Activation: starting connection 'lo' (8771c680-8833-490d-9239-e3b4dbbb566f)
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4158] device (eth0): carrier: link connected
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4165] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4175] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4176] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4188] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4200] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4210] device (eth1): carrier: link connected
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4218] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4227] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (38dcc20a-970f-3b4f-84d1-230174caf167) (indicated)
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4229] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4237] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4249] device (eth1): Activation: starting connection 'Wired connection 1' (38dcc20a-970f-3b4f-84d1-230174caf167)
Jan 30 22:44:44 np0005603435 systemd[1]: Started Network Manager.
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4259] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4267] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4273] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4277] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4282] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4287] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4293] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4298] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4304] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4317] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4323] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4338] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4344] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4363] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4368] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4375] device (lo): Activation: successful, device activated.
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4387] dhcp4 (eth0): state changed new lease, address=38.102.83.94
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4399] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 30 22:44:44 np0005603435 systemd[1]: Starting Network Manager Wait Online...
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4469] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4501] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4503] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4507] manager: NetworkManager state is now CONNECTED_SITE
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4510] device (eth0): Activation: successful, device activated.
Jan 30 22:44:44 np0005603435 NetworkManager[7198]: <info>  [1769831084.4518] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 30 22:44:44 np0005603435 python3[7270]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-318f-93e3-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 22:44:54 np0005603435 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 30 22:45:14 np0005603435 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 30 22:45:18 np0005603435 systemd[4311]: Starting Mark boot as successful...
Jan 30 22:45:18 np0005603435 systemd[4311]: Finished Mark boot as successful.
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.5443] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 30 22:45:29 np0005603435 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 30 22:45:29 np0005603435 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.5783] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.5787] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.5794] device (eth1): Activation: successful, device activated.
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.5804] manager: startup complete
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.5807] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <warn>  [1769831129.5814] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.5827] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 30 22:45:29 np0005603435 systemd[1]: Finished Network Manager Wait Online.
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.5983] dhcp4 (eth1): canceled DHCP transaction
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.5983] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.5983] dhcp4 (eth1): state changed no lease
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.6004] policy: auto-activating connection 'ci-private-network' (037fc6a4-a42b-56fb-be9d-3251f9098a4b)
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.6011] device (eth1): Activation: starting connection 'ci-private-network' (037fc6a4-a42b-56fb-be9d-3251f9098a4b)
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.6013] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.6016] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.6024] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.6035] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.6076] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.6079] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 30 22:45:29 np0005603435 NetworkManager[7198]: <info>  [1769831129.6087] device (eth1): Activation: successful, device activated.
Jan 30 22:45:39 np0005603435 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 30 22:45:44 np0005603435 systemd-logind[816]: Session 1 logged out. Waiting for processes to exit.
Jan 30 22:45:47 np0005603435 systemd-logind[816]: New session 3 of user zuul.
Jan 30 22:45:47 np0005603435 systemd[1]: Started Session 3 of User zuul.
Jan 30 22:45:47 np0005603435 python3[7380]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:45:47 np0005603435 python3[7453]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769831147.125491-267-176786268032963/source _original_basename=tmp32vm8x9l follow=False checksum=a2c54071fc3a3520bccba0886e46b5285312a04f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:45:49 np0005603435 systemd[1]: session-3.scope: Deactivated successfully.
Jan 30 22:45:49 np0005603435 systemd-logind[816]: Session 3 logged out. Waiting for processes to exit.
Jan 30 22:45:49 np0005603435 systemd-logind[816]: Removed session 3.
Jan 30 22:48:18 np0005603435 systemd[4311]: Created slice User Background Tasks Slice.
Jan 30 22:48:18 np0005603435 systemd[4311]: Starting Cleanup of User's Temporary Files and Directories...
Jan 30 22:48:18 np0005603435 systemd[4311]: Finished Cleanup of User's Temporary Files and Directories.
Jan 30 22:53:19 np0005603435 systemd-logind[816]: New session 4 of user zuul.
Jan 30 22:53:19 np0005603435 systemd[1]: Started Session 4 of User zuul.
Jan 30 22:53:19 np0005603435 python3[7516]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-879a-e558-00000000216d-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 22:53:20 np0005603435 python3[7545]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:53:20 np0005603435 python3[7571]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:53:20 np0005603435 python3[7597]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:53:20 np0005603435 python3[7623]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:53:21 np0005603435 python3[7649]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:53:21 np0005603435 python3[7727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:53:22 np0005603435 python3[7800]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769831601.7368362-498-85328202344278/source _original_basename=tmpqc00ik7n follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:53:23 np0005603435 python3[7850]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 30 22:53:23 np0005603435 systemd[1]: Reloading.
Jan 30 22:53:23 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 22:53:24 np0005603435 python3[7905]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 30 22:53:25 np0005603435 python3[7931]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 22:53:25 np0005603435 python3[7959]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 22:53:25 np0005603435 python3[7987]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 22:53:26 np0005603435 python3[8015]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 22:53:26 np0005603435 python3[8042]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-879a-e558-000000002174-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 22:53:27 np0005603435 python3[8072]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 30 22:53:29 np0005603435 systemd[1]: session-4.scope: Deactivated successfully.
Jan 30 22:53:29 np0005603435 systemd[1]: session-4.scope: Consumed 3.686s CPU time.
Jan 30 22:53:29 np0005603435 systemd-logind[816]: Session 4 logged out. Waiting for processes to exit.
Jan 30 22:53:29 np0005603435 systemd-logind[816]: Removed session 4.
Jan 30 22:53:30 np0005603435 systemd-logind[816]: New session 5 of user zuul.
Jan 30 22:53:30 np0005603435 systemd[1]: Started Session 5 of User zuul.
Jan 30 22:53:31 np0005603435 python3[8107]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 30 22:53:39 np0005603435 setsebool[8151]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 30 22:53:39 np0005603435 setsebool[8151]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 30 22:53:51 np0005603435 kernel: SELinux:  Converting 385 SID table entries...
Jan 30 22:53:51 np0005603435 kernel: SELinux:  policy capability network_peer_controls=1
Jan 30 22:53:51 np0005603435 kernel: SELinux:  policy capability open_perms=1
Jan 30 22:53:51 np0005603435 kernel: SELinux:  policy capability extended_socket_class=1
Jan 30 22:53:51 np0005603435 kernel: SELinux:  policy capability always_check_network=0
Jan 30 22:53:51 np0005603435 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 30 22:53:51 np0005603435 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 30 22:53:51 np0005603435 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 30 22:54:00 np0005603435 kernel: SELinux:  Converting 388 SID table entries...
Jan 30 22:54:00 np0005603435 kernel: SELinux:  policy capability network_peer_controls=1
Jan 30 22:54:00 np0005603435 kernel: SELinux:  policy capability open_perms=1
Jan 30 22:54:00 np0005603435 kernel: SELinux:  policy capability extended_socket_class=1
Jan 30 22:54:00 np0005603435 kernel: SELinux:  policy capability always_check_network=0
Jan 30 22:54:00 np0005603435 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 30 22:54:00 np0005603435 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 30 22:54:00 np0005603435 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 30 22:54:17 np0005603435 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 30 22:54:18 np0005603435 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 30 22:54:18 np0005603435 systemd[1]: Starting man-db-cache-update.service...
Jan 30 22:54:18 np0005603435 systemd[1]: Reloading.
Jan 30 22:54:18 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 22:54:18 np0005603435 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 30 22:54:19 np0005603435 python3[10336]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-5e8b-4782-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 22:54:20 np0005603435 kernel: evm: overlay not supported
Jan 30 22:54:20 np0005603435 systemd[4311]: Starting D-Bus User Message Bus...
Jan 30 22:54:20 np0005603435 dbus-broker-launch[11566]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 30 22:54:20 np0005603435 dbus-broker-launch[11566]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 30 22:54:20 np0005603435 systemd[4311]: Started D-Bus User Message Bus.
Jan 30 22:54:20 np0005603435 dbus-broker-lau[11566]: Ready
Jan 30 22:54:20 np0005603435 systemd[4311]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 30 22:54:20 np0005603435 systemd[4311]: Created slice Slice /user.
Jan 30 22:54:20 np0005603435 systemd[4311]: podman-11424.scope: unit configures an IP firewall, but not running as root.
Jan 30 22:54:20 np0005603435 systemd[4311]: (This warning is only shown for the first unit using IP firewalling.)
Jan 30 22:54:20 np0005603435 systemd[4311]: Started podman-11424.scope.
Jan 30 22:54:20 np0005603435 systemd[4311]: Started podman-pause-5b92d2c2.scope.
Jan 30 22:54:21 np0005603435 python3[12126]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.36:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.36:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:54:21 np0005603435 python3[12126]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 30 22:54:21 np0005603435 systemd[1]: session-5.scope: Deactivated successfully.
Jan 30 22:54:21 np0005603435 systemd[1]: session-5.scope: Consumed 43.315s CPU time.
Jan 30 22:54:21 np0005603435 systemd-logind[816]: Session 5 logged out. Waiting for processes to exit.
Jan 30 22:54:21 np0005603435 systemd-logind[816]: Removed session 5.
Jan 30 22:54:41 np0005603435 irqbalance[801]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 30 22:54:41 np0005603435 irqbalance[801]: IRQ 27 affinity is now unmanaged
Jan 30 22:54:44 np0005603435 systemd-logind[816]: New session 6 of user zuul.
Jan 30 22:54:44 np0005603435 systemd[1]: Started Session 6 of User zuul.
Jan 30 22:54:44 np0005603435 python3[22048]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPjcvrdk1Q1eDS+8ouKYDB4pP3ri49nhGsbTbVEpFCgZN1Z/nS1m9maMYNeHWKyR9JSkl9GGyeyzK06kxSNaorI= zuul@np0005603434.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:54:44 np0005603435 python3[22204]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPjcvrdk1Q1eDS+8ouKYDB4pP3ri49nhGsbTbVEpFCgZN1Z/nS1m9maMYNeHWKyR9JSkl9GGyeyzK06kxSNaorI= zuul@np0005603434.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:54:45 np0005603435 python3[22533]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005603435.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 30 22:54:46 np0005603435 python3[22734]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPjcvrdk1Q1eDS+8ouKYDB4pP3ri49nhGsbTbVEpFCgZN1Z/nS1m9maMYNeHWKyR9JSkl9GGyeyzK06kxSNaorI= zuul@np0005603434.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 30 22:54:46 np0005603435 python3[23025]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:54:46 np0005603435 python3[23258]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769831686.17855-135-18577606867917/source _original_basename=tmpgo9gn6sx follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:54:47 np0005603435 python3[23522]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 30 22:54:47 np0005603435 systemd[1]: Starting Hostname Service...
Jan 30 22:54:47 np0005603435 systemd[1]: Started Hostname Service.
Jan 30 22:54:47 np0005603435 systemd-hostnamed[23621]: Changed pretty hostname to 'compute-0'
Jan 30 22:54:47 np0005603435 systemd-hostnamed[23621]: Hostname set to <compute-0> (static)
Jan 30 22:54:47 np0005603435 NetworkManager[7198]: <info>  [1769831687.8695] hostname: static hostname changed from "np0005603435.novalocal" to "compute-0"
Jan 30 22:54:47 np0005603435 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 30 22:54:47 np0005603435 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 30 22:54:48 np0005603435 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 22:54:48 np0005603435 systemd[1]: session-6.scope: Consumed 2.384s CPU time.
Jan 30 22:54:48 np0005603435 systemd-logind[816]: Session 6 logged out. Waiting for processes to exit.
Jan 30 22:54:48 np0005603435 systemd-logind[816]: Removed session 6.
Jan 30 22:54:57 np0005603435 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 30 22:55:04 np0005603435 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 30 22:55:04 np0005603435 systemd[1]: Finished man-db-cache-update.service.
Jan 30 22:55:04 np0005603435 systemd[1]: man-db-cache-update.service: Consumed 55.179s CPU time.
Jan 30 22:55:04 np0005603435 systemd[1]: run-rc45034bbe27c4b2499f792ab78c0bc75.service: Deactivated successfully.
Jan 30 22:55:17 np0005603435 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 30 22:58:05 np0005603435 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 30 22:58:05 np0005603435 systemd-logind[816]: New session 7 of user zuul.
Jan 30 22:58:05 np0005603435 systemd[1]: Started Session 7 of User zuul.
Jan 30 22:58:05 np0005603435 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 30 22:58:05 np0005603435 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 30 22:58:05 np0005603435 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 30 22:58:05 np0005603435 python3[30077]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 22:58:06 np0005603435 python3[30193]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:58:07 np0005603435 python3[30266]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769831886.724648-33802-95104867582533/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:58:07 np0005603435 python3[30292]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:58:08 np0005603435 python3[30365]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769831886.724648-33802-95104867582533/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:58:08 np0005603435 python3[30391]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:58:08 np0005603435 python3[30464]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769831886.724648-33802-95104867582533/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:58:08 np0005603435 python3[30490]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:58:09 np0005603435 python3[30563]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769831886.724648-33802-95104867582533/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:58:09 np0005603435 python3[30589]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:58:09 np0005603435 python3[30662]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769831886.724648-33802-95104867582533/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:58:10 np0005603435 python3[30688]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:58:10 np0005603435 python3[30761]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769831886.724648-33802-95104867582533/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:58:10 np0005603435 python3[30787]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 22:58:11 np0005603435 python3[30860]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769831886.724648-33802-95104867582533/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 22:58:21 np0005603435 python3[30918]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:03:18 np0005603435 systemd[1]: Starting dnf makecache...
Jan 30 23:03:18 np0005603435 dnf[30941]: Failed determining last makecache time.
Jan 30 23:03:18 np0005603435 dnf[30941]: delorean-openstack-barbican-42b4c41831408a8e323 232 kB/s |  13 kB     00:00
Jan 30 23:03:18 np0005603435 dnf[30941]: delorean-python-glean-642fffe0203a8ffcc2443db52 1.7 MB/s |  65 kB     00:00
Jan 30 23:03:18 np0005603435 dnf[30941]: delorean-openstack-cinder-1c00d6490d88e436f26ef 972 kB/s |  32 kB     00:00
Jan 30 23:03:18 np0005603435 dnf[30941]: delorean-python-stevedore-c4acc5639fd2329372142 3.8 MB/s | 131 kB     00:00
Jan 30 23:03:18 np0005603435 dnf[30941]: delorean-python-cloudkitty-tests-tempest-783703 661 kB/s |  32 kB     00:00
Jan 30 23:03:18 np0005603435 dnf[30941]: delorean-diskimage-builder-61b717cc45660834fe9a 8.5 MB/s | 349 kB     00:00
Jan 30 23:03:18 np0005603435 dnf[30941]: delorean-openstack-nova-eaa65f0b85123a4ee343246 1.1 MB/s |  42 kB     00:00
Jan 30 23:03:18 np0005603435 dnf[30941]: delorean-python-designate-tests-tempest-347fdbc 679 kB/s |  18 kB     00:00
Jan 30 23:03:18 np0005603435 dnf[30941]: delorean-openstack-glance-1fd12c29b339f30fe823e 682 kB/s |  18 kB     00:00
Jan 30 23:03:19 np0005603435 dnf[30941]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 735 kB/s |  29 kB     00:00
Jan 30 23:03:19 np0005603435 dnf[30941]: delorean-openstack-manila-d783d10e75495b73866db 630 kB/s |  25 kB     00:00
Jan 30 23:03:19 np0005603435 dnf[30941]: delorean-openstack-neutron-95cadbd379667c8520c8 4.5 MB/s | 154 kB     00:00
Jan 30 23:03:19 np0005603435 dnf[30941]: delorean-openstack-octavia-5975097dd4b021385178 783 kB/s |  26 kB     00:00
Jan 30 23:03:19 np0005603435 dnf[30941]: delorean-openstack-watcher-c014f81a8647287f6dcc 708 kB/s |  16 kB     00:00
Jan 30 23:03:19 np0005603435 dnf[30941]: delorean-python-tcib-78032d201b02cee27e8e644c61 295 kB/s | 7.4 kB     00:00
Jan 30 23:03:19 np0005603435 dnf[30941]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 5.3 MB/s | 144 kB     00:00
Jan 30 23:03:19 np0005603435 dnf[30941]: delorean-openstack-swift-dc98a8463506ac520c469a 548 kB/s |  14 kB     00:00
Jan 30 23:03:19 np0005603435 dnf[30941]: delorean-python-tempestconf-8515371b7cceebd4282 2.2 MB/s |  53 kB     00:00
Jan 30 23:03:19 np0005603435 dnf[30941]: delorean-openstack-heat-ui-013accbfd179753bc3f0 4.2 MB/s |  96 kB     00:00
Jan 30 23:03:19 np0005603435 dnf[30941]: CentOS Stream 9 - BaseOS                         60 kB/s | 6.4 kB     00:00
Jan 30 23:03:19 np0005603435 dnf[30941]: CentOS Stream 9 - AppStream                      62 kB/s | 6.5 kB     00:00
Jan 30 23:03:20 np0005603435 dnf[30941]: CentOS Stream 9 - CRB                            26 kB/s | 6.3 kB     00:00
Jan 30 23:03:20 np0005603435 dnf[30941]: CentOS Stream 9 - Extras packages                32 kB/s | 7.3 kB     00:00
Jan 30 23:03:20 np0005603435 dnf[30941]: dlrn-antelope-testing                            24 MB/s | 1.1 MB     00:00
Jan 30 23:03:20 np0005603435 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 23:03:20 np0005603435 systemd[1]: session-7.scope: Consumed 4.963s CPU time.
Jan 30 23:03:20 np0005603435 systemd-logind[816]: Session 7 logged out. Waiting for processes to exit.
Jan 30 23:03:20 np0005603435 systemd-logind[816]: Removed session 7.
Jan 30 23:03:20 np0005603435 dnf[30941]: dlrn-antelope-build-deps                         15 MB/s | 461 kB     00:00
Jan 30 23:03:21 np0005603435 dnf[30941]: centos9-rabbitmq                                400 kB/s | 123 kB     00:00
Jan 30 23:03:21 np0005603435 dnf[30941]: centos9-storage                                  19 MB/s | 415 kB     00:00
Jan 30 23:03:21 np0005603435 dnf[30941]: centos9-opstools                                4.7 MB/s |  51 kB     00:00
Jan 30 23:03:21 np0005603435 dnf[30941]: NFV SIG OpenvSwitch                              23 MB/s | 461 kB     00:00
Jan 30 23:03:22 np0005603435 dnf[30941]: repo-setup-centos-appstream                      91 MB/s |  26 MB     00:00
Jan 30 23:03:27 np0005603435 dnf[30941]: repo-setup-centos-baseos                         81 MB/s | 8.9 MB     00:00
Jan 30 23:03:29 np0005603435 dnf[30941]: repo-setup-centos-highavailability               29 MB/s | 744 kB     00:00
Jan 30 23:03:29 np0005603435 dnf[30941]: repo-setup-centos-powertools                     82 MB/s | 7.6 MB     00:00
Jan 30 23:03:32 np0005603435 dnf[30941]: Extra Packages for Enterprise Linux 9 - x86_64   16 MB/s |  20 MB     00:01
Jan 30 23:03:44 np0005603435 dnf[30941]: Metadata cache created.
Jan 30 23:03:44 np0005603435 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 30 23:03:44 np0005603435 systemd[1]: Finished dnf makecache.
Jan 30 23:03:44 np0005603435 systemd[1]: dnf-makecache.service: Consumed 23.585s CPU time.
Jan 30 23:09:42 np0005603435 systemd-logind[816]: New session 8 of user zuul.
Jan 30 23:09:42 np0005603435 systemd[1]: Started Session 8 of User zuul.
Jan 30 23:09:43 np0005603435 python3.9[31204]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:09:44 np0005603435 python3.9[31385]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:09:51 np0005603435 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 23:09:51 np0005603435 systemd[1]: session-8.scope: Consumed 7.604s CPU time.
Jan 30 23:09:51 np0005603435 systemd-logind[816]: Session 8 logged out. Waiting for processes to exit.
Jan 30 23:09:51 np0005603435 systemd-logind[816]: Removed session 8.
Jan 30 23:10:07 np0005603435 systemd-logind[816]: New session 9 of user zuul.
Jan 30 23:10:07 np0005603435 systemd[1]: Started Session 9 of User zuul.
Jan 30 23:10:08 np0005603435 python3.9[31595]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 30 23:10:09 np0005603435 python3.9[31769]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:10:10 np0005603435 python3.9[31921]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:10:11 np0005603435 python3.9[32074]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:10:11 np0005603435 python3.9[32226]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:10:12 np0005603435 python3.9[32378]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:10:13 np0005603435 python3.9[32501]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769832612.0894198-68-84104117276437/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:10:14 np0005603435 python3.9[32653]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:10:14 np0005603435 python3.9[32809]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:10:15 np0005603435 python3.9[32961]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:10:16 np0005603435 python3.9[33111]: ansible-ansible.builtin.service_facts Invoked
Jan 30 23:10:19 np0005603435 python3.9[33364]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:10:20 np0005603435 python3.9[33514]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:10:21 np0005603435 python3.9[33668]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:10:22 np0005603435 python3.9[33826]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:10:23 np0005603435 python3.9[33910]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:11:06 np0005603435 systemd[1]: Reloading.
Jan 30 23:11:06 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:11:06 np0005603435 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 30 23:11:06 np0005603435 systemd[1]: Reloading.
Jan 30 23:11:06 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:11:06 np0005603435 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 30 23:11:06 np0005603435 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 30 23:11:06 np0005603435 systemd[1]: Reloading.
Jan 30 23:11:07 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:11:07 np0005603435 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 30 23:11:07 np0005603435 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Jan 30 23:11:07 np0005603435 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Jan 30 23:11:07 np0005603435 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Jan 30 23:12:10 np0005603435 kernel: SELinux:  Converting 2728 SID table entries...
Jan 30 23:12:10 np0005603435 kernel: SELinux:  policy capability network_peer_controls=1
Jan 30 23:12:10 np0005603435 kernel: SELinux:  policy capability open_perms=1
Jan 30 23:12:10 np0005603435 kernel: SELinux:  policy capability extended_socket_class=1
Jan 30 23:12:10 np0005603435 kernel: SELinux:  policy capability always_check_network=0
Jan 30 23:12:10 np0005603435 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 30 23:12:10 np0005603435 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 30 23:12:10 np0005603435 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 30 23:12:10 np0005603435 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 30 23:12:10 np0005603435 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 30 23:12:10 np0005603435 systemd[1]: Starting man-db-cache-update.service...
Jan 30 23:12:10 np0005603435 systemd[1]: Reloading.
Jan 30 23:12:10 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:12:10 np0005603435 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 30 23:12:11 np0005603435 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 30 23:12:11 np0005603435 systemd[1]: Finished man-db-cache-update.service.
Jan 30 23:12:11 np0005603435 systemd[1]: run-r092554b75fe74eea9ce18f919ef458c9.service: Deactivated successfully.
Jan 30 23:12:11 np0005603435 python3.9[35429]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:12:13 np0005603435 python3.9[35710]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 30 23:12:14 np0005603435 python3.9[35862]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 30 23:12:17 np0005603435 python3.9[36015]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:12:18 np0005603435 python3.9[36167]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 30 23:12:19 np0005603435 python3.9[36319]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:12:20 np0005603435 python3.9[36471]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:12:20 np0005603435 python3.9[36594]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769832739.6481686-231-159101279877058/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0b7af9532dee36953ea3073b7d033057885ae476 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:12:24 np0005603435 python3.9[36746]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:12:25 np0005603435 python3.9[36901]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:12:25 np0005603435 python3.9[37054]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:12:26 np0005603435 python3.9[37206]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 30 23:12:26 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 30 23:12:27 np0005603435 python3.9[37360]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 30 23:12:29 np0005603435 python3.9[37518]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 30 23:12:29 np0005603435 python3.9[37678]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 30 23:12:30 np0005603435 python3.9[37831]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 30 23:12:31 np0005603435 python3.9[37989]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 30 23:12:32 np0005603435 python3.9[38141]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:12:34 np0005603435 python3.9[38294]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:12:35 np0005603435 python3.9[38446]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:12:35 np0005603435 python3.9[38569]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769832754.6430645-350-136156206951081/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:12:36 np0005603435 python3.9[38721]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:12:36 np0005603435 systemd[1]: Starting Load Kernel Modules...
Jan 30 23:12:36 np0005603435 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 23:12:36 np0005603435 systemd-modules-load[38725]: Inserted module 'br_netfilter'
Jan 30 23:12:36 np0005603435 kernel: Bridge firewalling registered
Jan 30 23:12:36 np0005603435 systemd[1]: Finished Load Kernel Modules.
Jan 30 23:12:37 np0005603435 python3.9[38880]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:12:37 np0005603435 python3.9[39003]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769832756.8637593-373-204131373850724/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:12:38 np0005603435 python3.9[39155]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:12:45 np0005603435 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Jan 30 23:12:45 np0005603435 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Jan 30 23:12:46 np0005603435 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 30 23:12:46 np0005603435 systemd[1]: Starting man-db-cache-update.service...
Jan 30 23:12:46 np0005603435 systemd[1]: Reloading.
Jan 30 23:12:46 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:12:46 np0005603435 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 30 23:12:49 np0005603435 python3.9[40883]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:12:50 np0005603435 python3.9[42024]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 30 23:12:50 np0005603435 python3.9[42874]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:12:51 np0005603435 python3.9[43278]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:12:51 np0005603435 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 30 23:12:52 np0005603435 systemd[1]: Starting Authorization Manager...
Jan 30 23:12:52 np0005603435 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 30 23:12:52 np0005603435 polkitd[43583]: Started polkitd version 0.117
Jan 30 23:12:52 np0005603435 systemd[1]: Started Authorization Manager.
Jan 30 23:12:53 np0005603435 python3.9[43753]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:12:53 np0005603435 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 30 23:12:53 np0005603435 systemd[1]: tuned.service: Deactivated successfully.
Jan 30 23:12:53 np0005603435 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 30 23:12:53 np0005603435 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 30 23:12:53 np0005603435 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 30 23:12:53 np0005603435 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 30 23:12:53 np0005603435 systemd[1]: Finished man-db-cache-update.service.
Jan 30 23:12:53 np0005603435 systemd[1]: man-db-cache-update.service: Consumed 3.954s CPU time.
Jan 30 23:12:53 np0005603435 systemd[1]: run-r17242df2d28a4d148addf8d640315dc9.service: Deactivated successfully.
Jan 30 23:12:54 np0005603435 python3.9[43916]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 30 23:12:56 np0005603435 python3.9[44069]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:12:56 np0005603435 systemd[1]: Reloading.
Jan 30 23:12:56 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:12:57 np0005603435 python3.9[44258]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:12:57 np0005603435 systemd[1]: Reloading.
Jan 30 23:12:57 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:12:57 np0005603435 python3.9[44447]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:12:58 np0005603435 python3.9[44600]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:12:58 np0005603435 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 30 23:12:59 np0005603435 python3.9[44753]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:13:00 np0005603435 python3.9[44915]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:13:01 np0005603435 python3.9[45068]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:13:01 np0005603435 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 23:13:01 np0005603435 systemd[1]: Stopped Apply Kernel Variables.
Jan 30 23:13:01 np0005603435 systemd[1]: Stopping Apply Kernel Variables...
Jan 30 23:13:01 np0005603435 systemd[1]: Starting Apply Kernel Variables...
Jan 30 23:13:01 np0005603435 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 30 23:13:01 np0005603435 systemd[1]: Finished Apply Kernel Variables.
Jan 30 23:13:02 np0005603435 systemd-logind[816]: Session 9 logged out. Waiting for processes to exit.
Jan 30 23:13:02 np0005603435 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 23:13:02 np0005603435 systemd[1]: session-9.scope: Consumed 2min 4.279s CPU time.
Jan 30 23:13:02 np0005603435 systemd-logind[816]: Removed session 9.
Jan 30 23:13:07 np0005603435 systemd-logind[816]: New session 10 of user zuul.
Jan 30 23:13:07 np0005603435 systemd[1]: Started Session 10 of User zuul.
Jan 30 23:13:08 np0005603435 python3.9[45251]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:13:09 np0005603435 python3.9[45407]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 30 23:13:10 np0005603435 python3.9[45560]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 30 23:13:11 np0005603435 python3.9[45718]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 30 23:13:12 np0005603435 python3.9[45878]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:13:13 np0005603435 python3.9[45962]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 30 23:13:16 np0005603435 python3.9[46126]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:13:26 np0005603435 kernel: SELinux:  Converting 2740 SID table entries...
Jan 30 23:13:26 np0005603435 kernel: SELinux:  policy capability network_peer_controls=1
Jan 30 23:13:26 np0005603435 kernel: SELinux:  policy capability open_perms=1
Jan 30 23:13:26 np0005603435 kernel: SELinux:  policy capability extended_socket_class=1
Jan 30 23:13:26 np0005603435 kernel: SELinux:  policy capability always_check_network=0
Jan 30 23:13:26 np0005603435 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 30 23:13:26 np0005603435 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 30 23:13:26 np0005603435 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 30 23:13:26 np0005603435 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 30 23:13:26 np0005603435 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 30 23:13:27 np0005603435 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 30 23:13:27 np0005603435 systemd[1]: Starting man-db-cache-update.service...
Jan 30 23:13:28 np0005603435 systemd[1]: Reloading.
Jan 30 23:13:28 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:13:28 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:13:28 np0005603435 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 30 23:13:28 np0005603435 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 30 23:13:28 np0005603435 systemd[1]: Finished man-db-cache-update.service.
Jan 30 23:13:28 np0005603435 systemd[1]: run-r5373efe9e8bd44ad91baaafadd160285.service: Deactivated successfully.
Jan 30 23:13:29 np0005603435 python3.9[47224]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 30 23:13:29 np0005603435 systemd[1]: Reloading.
Jan 30 23:13:29 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:13:29 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:13:29 np0005603435 systemd[1]: Starting Open vSwitch Database Unit...
Jan 30 23:13:29 np0005603435 chown[47266]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 30 23:13:30 np0005603435 ovs-ctl[47271]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 30 23:13:30 np0005603435 ovs-ctl[47271]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 30 23:13:30 np0005603435 ovs-ctl[47271]: Starting ovsdb-server [  OK  ]
Jan 30 23:13:30 np0005603435 ovs-vsctl[47320]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 30 23:13:30 np0005603435 ovs-vsctl[47341]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"8e8c9464-4b9f-4423-88e0-e5889c10f4ca\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 30 23:13:30 np0005603435 ovs-ctl[47271]: Configuring Open vSwitch system IDs [  OK  ]
Jan 30 23:13:30 np0005603435 ovs-vsctl[47347]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 30 23:13:30 np0005603435 ovs-ctl[47271]: Enabling remote OVSDB managers [  OK  ]
Jan 30 23:13:30 np0005603435 systemd[1]: Started Open vSwitch Database Unit.
Jan 30 23:13:30 np0005603435 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 30 23:13:30 np0005603435 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 30 23:13:30 np0005603435 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 30 23:13:30 np0005603435 kernel: openvswitch: Open vSwitch switching datapath
Jan 30 23:13:30 np0005603435 ovs-ctl[47393]: Inserting openvswitch module [  OK  ]
Jan 30 23:13:30 np0005603435 ovs-ctl[47362]: Starting ovs-vswitchd [  OK  ]
Jan 30 23:13:30 np0005603435 ovs-vsctl[47411]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 30 23:13:30 np0005603435 ovs-ctl[47362]: Enabling remote OVSDB managers [  OK  ]
Jan 30 23:13:30 np0005603435 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 30 23:13:30 np0005603435 systemd[1]: Starting Open vSwitch...
Jan 30 23:13:30 np0005603435 systemd[1]: Finished Open vSwitch.
Jan 30 23:13:31 np0005603435 python3.9[47562]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:13:32 np0005603435 python3.9[47714]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 30 23:13:33 np0005603435 kernel: SELinux:  Converting 2754 SID table entries...
Jan 30 23:13:33 np0005603435 kernel: SELinux:  policy capability network_peer_controls=1
Jan 30 23:13:33 np0005603435 kernel: SELinux:  policy capability open_perms=1
Jan 30 23:13:33 np0005603435 kernel: SELinux:  policy capability extended_socket_class=1
Jan 30 23:13:33 np0005603435 kernel: SELinux:  policy capability always_check_network=0
Jan 30 23:13:33 np0005603435 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 30 23:13:33 np0005603435 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 30 23:13:33 np0005603435 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 30 23:13:34 np0005603435 python3.9[47869]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:13:35 np0005603435 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 30 23:13:35 np0005603435 python3.9[48027]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:13:37 np0005603435 python3.9[48180]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:13:38 np0005603435 python3.9[48467]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 30 23:13:39 np0005603435 python3.9[48617]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:13:40 np0005603435 python3.9[48771]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:13:42 np0005603435 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 30 23:13:42 np0005603435 systemd[1]: Starting man-db-cache-update.service...
Jan 30 23:13:42 np0005603435 systemd[1]: Reloading.
Jan 30 23:13:42 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:13:42 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:13:42 np0005603435 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 30 23:13:43 np0005603435 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 30 23:13:43 np0005603435 systemd[1]: Finished man-db-cache-update.service.
Jan 30 23:13:43 np0005603435 systemd[1]: run-rcf7cc1b1081842b3a45d4e804245e9ed.service: Deactivated successfully.
Jan 30 23:13:43 np0005603435 python3.9[49088]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:13:43 np0005603435 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 30 23:13:43 np0005603435 systemd[1]: Stopped Network Manager Wait Online.
Jan 30 23:13:43 np0005603435 systemd[1]: Stopping Network Manager Wait Online...
Jan 30 23:13:43 np0005603435 systemd[1]: Stopping Network Manager...
Jan 30 23:13:43 np0005603435 NetworkManager[7198]: <info>  [1769832823.9444] caught SIGTERM, shutting down normally.
Jan 30 23:13:43 np0005603435 NetworkManager[7198]: <info>  [1769832823.9459] dhcp4 (eth0): canceled DHCP transaction
Jan 30 23:13:43 np0005603435 NetworkManager[7198]: <info>  [1769832823.9460] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 30 23:13:43 np0005603435 NetworkManager[7198]: <info>  [1769832823.9460] dhcp4 (eth0): state changed no lease
Jan 30 23:13:43 np0005603435 NetworkManager[7198]: <info>  [1769832823.9462] manager: NetworkManager state is now CONNECTED_SITE
Jan 30 23:13:43 np0005603435 NetworkManager[7198]: <info>  [1769832823.9522] exiting (success)
Jan 30 23:13:43 np0005603435 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 30 23:13:43 np0005603435 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 30 23:13:43 np0005603435 systemd[1]: Stopped Network Manager.
Jan 30 23:13:43 np0005603435 systemd[1]: NetworkManager.service: Consumed 12.577s CPU time, 4.1M memory peak, read 0B from disk, written 11.0K to disk.
Jan 30 23:13:43 np0005603435 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 30 23:13:43 np0005603435 systemd[1]: Starting Network Manager...
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.0036] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:c3572006-004a-46ef-8549-18148190fe4e)
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.0039] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.0086] manager[0x55f84ee6b000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 30 23:13:44 np0005603435 systemd[1]: Starting Hostname Service...
Jan 30 23:13:44 np0005603435 systemd[1]: Started Hostname Service.
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1033] hostname: hostname: using hostnamed
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1036] hostname: static hostname changed from (none) to "compute-0"
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1044] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1049] manager[0x55f84ee6b000]: rfkill: Wi-Fi hardware radio set enabled
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1050] manager[0x55f84ee6b000]: rfkill: WWAN hardware radio set enabled
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1083] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1097] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1098] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1099] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1100] manager: Networking is enabled by state file
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1103] settings: Loaded settings plugin: keyfile (internal)
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1108] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1146] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1159] dhcp: init: Using DHCP client 'internal'
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1164] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1172] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1179] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1190] device (lo): Activation: starting connection 'lo' (8771c680-8833-490d-9239-e3b4dbbb566f)
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1199] device (eth0): carrier: link connected
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1206] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1216] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1217] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1228] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1239] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1247] device (eth1): carrier: link connected
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1253] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1260] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (037fc6a4-a42b-56fb-be9d-3251f9098a4b) (indicated)
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1261] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1271] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1283] device (eth1): Activation: starting connection 'ci-private-network' (037fc6a4-a42b-56fb-be9d-3251f9098a4b)
Jan 30 23:13:44 np0005603435 systemd[1]: Started Network Manager.
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1299] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1312] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1318] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1322] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1326] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1331] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1336] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1341] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1347] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1362] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1369] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1384] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1410] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 30 23:13:44 np0005603435 systemd[1]: Starting Network Manager Wait Online...
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1430] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1435] dhcp4 (eth0): state changed new lease, address=38.102.83.94
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1441] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1453] device (lo): Activation: successful, device activated.
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1472] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1583] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1596] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1599] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1604] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1608] device (eth1): Activation: successful, device activated.
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1673] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1675] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1680] manager: NetworkManager state is now CONNECTED_SITE
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1684] device (eth0): Activation: successful, device activated.
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1692] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 30 23:13:44 np0005603435 NetworkManager[49097]: <info>  [1769832824.1735] manager: startup complete
Jan 30 23:13:44 np0005603435 systemd[1]: Finished Network Manager Wait Online.
Jan 30 23:13:44 np0005603435 python3.9[49314]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:13:49 np0005603435 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 30 23:13:49 np0005603435 systemd[1]: Starting man-db-cache-update.service...
Jan 30 23:13:49 np0005603435 systemd[1]: Reloading.
Jan 30 23:13:49 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:13:49 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:13:49 np0005603435 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 30 23:13:50 np0005603435 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 30 23:13:50 np0005603435 systemd[1]: Finished man-db-cache-update.service.
Jan 30 23:13:50 np0005603435 systemd[1]: run-ra0e9fd80939d4e81b9cefc9d6889b7e4.service: Deactivated successfully.
Jan 30 23:13:51 np0005603435 python3.9[49775]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:13:52 np0005603435 python3.9[49927]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:13:52 np0005603435 python3.9[50081]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:13:53 np0005603435 python3.9[50233]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:13:54 np0005603435 python3.9[50385]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:13:54 np0005603435 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 30 23:13:54 np0005603435 python3.9[50537]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:13:55 np0005603435 python3.9[50689]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:13:56 np0005603435 python3.9[50812]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769832834.918184-224-279564051163391/.source _original_basename=.6oa31iaq follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:13:56 np0005603435 python3.9[50964]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:13:57 np0005603435 python3.9[51116]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 30 23:13:58 np0005603435 python3.9[51268]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:14:00 np0005603435 python3.9[51695]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 30 23:14:01 np0005603435 ansible-async_wrapper.py[51870]: Invoked with j766363219418 300 /home/zuul/.ansible/tmp/ansible-tmp-1769832840.7352333-290-202733354879594/AnsiballZ_edpm_os_net_config.py _
Jan 30 23:14:01 np0005603435 ansible-async_wrapper.py[51873]: Starting module and watcher
Jan 30 23:14:01 np0005603435 ansible-async_wrapper.py[51873]: Start watching 51874 (300)
Jan 30 23:14:01 np0005603435 ansible-async_wrapper.py[51874]: Start module (51874)
Jan 30 23:14:01 np0005603435 ansible-async_wrapper.py[51870]: Return async_wrapper task started.
Jan 30 23:14:01 np0005603435 python3.9[51875]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 30 23:14:02 np0005603435 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 30 23:14:02 np0005603435 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 30 23:14:02 np0005603435 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 30 23:14:02 np0005603435 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 30 23:14:02 np0005603435 kernel: cfg80211: failed to load regulatory.db
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7151] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7175] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7728] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7730] audit: op="connection-add" uuid="080c7285-9831-4685-8b78-5c99c7f118ff" name="br-ex-br" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7746] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7747] audit: op="connection-add" uuid="a780cd63-2637-4a84-b114-afe4fc78f3fe" name="br-ex-port" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7759] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7760] audit: op="connection-add" uuid="121cad16-16b8-4925-888a-034bd781583f" name="eth1-port" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7771] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7773] audit: op="connection-add" uuid="434a8739-ddf8-491e-8727-6f2931fc3da9" name="vlan20-port" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7784] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7785] audit: op="connection-add" uuid="fb15616a-1a3b-43cc-9778-5976d2b0c858" name="vlan21-port" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7796] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7797] audit: op="connection-add" uuid="46b26581-d5d0-4a1a-829a-84af3921b624" name="vlan22-port" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7812] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7813] audit: op="connection-add" uuid="4ef0ad1a-7ad8-4e34-a663-3e9c32d7056d" name="vlan23-port" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7832] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7849] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.7850] audit: op="connection-add" uuid="e56ae812-bb5c-4864-8ab8-c548911bd947" name="br-ex-if" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8059] audit: op="connection-update" uuid="037fc6a4-a42b-56fb-be9d-3251f9098a4b" name="ci-private-network" args="ovs-external-ids.data,ipv6.dns,ipv6.routing-rules,ipv6.method,ipv6.routes,ipv6.addr-gen-mode,ipv6.addresses,connection.controller,connection.timestamp,connection.port-type,connection.master,connection.slave-type,ipv4.dns,ipv4.routing-rules,ipv4.method,ipv4.never-default,ipv4.routes,ipv4.addresses,ovs-interface.type" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8077] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8078] audit: op="connection-add" uuid="dd7e9d1a-d770-4baa-82ba-de0eaffd36d2" name="vlan20-if" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8098] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8099] audit: op="connection-add" uuid="af83a553-da1d-4881-b835-9b0d5c5ba18f" name="vlan21-if" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8116] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8118] audit: op="connection-add" uuid="bc5c245b-5327-4535-ae16-2b6d30b452aa" name="vlan22-if" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8134] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8136] audit: op="connection-add" uuid="e6abfa74-f68a-4442-92bd-554a59c198fe" name="vlan23-if" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8149] audit: op="connection-delete" uuid="38dcc20a-970f-3b4f-84d1-230174caf167" name="Wired connection 1" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8161] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <warn>  [1769832843.8164] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8170] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8174] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (080c7285-9831-4685-8b78-5c99c7f118ff)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8174] audit: op="connection-activate" uuid="080c7285-9831-4685-8b78-5c99c7f118ff" name="br-ex-br" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8176] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <warn>  [1769832843.8177] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8181] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8185] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (a780cd63-2637-4a84-b114-afe4fc78f3fe)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8187] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <warn>  [1769832843.8188] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8192] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8195] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (121cad16-16b8-4925-888a-034bd781583f)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8197] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <warn>  [1769832843.8198] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8202] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8206] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (434a8739-ddf8-491e-8727-6f2931fc3da9)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8208] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <warn>  [1769832843.8208] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8213] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8217] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (fb15616a-1a3b-43cc-9778-5976d2b0c858)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8218] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <warn>  [1769832843.8219] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8223] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8226] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (46b26581-d5d0-4a1a-829a-84af3921b624)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8228] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <warn>  [1769832843.8228] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8232] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8236] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (4ef0ad1a-7ad8-4e34-a663-3e9c32d7056d)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8236] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8238] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8240] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8245] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <warn>  [1769832843.8245] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8248] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8251] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (e56ae812-bb5c-4864-8ab8-c548911bd947)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8252] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8255] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8256] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8257] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8258] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8267] device (eth1): disconnecting for new activation request.
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8268] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8270] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8272] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8273] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8275] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <warn>  [1769832843.8276] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8279] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8283] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (dd7e9d1a-d770-4baa-82ba-de0eaffd36d2)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8283] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8286] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8288] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8289] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8291] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <warn>  [1769832843.8293] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8295] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8299] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (af83a553-da1d-4881-b835-9b0d5c5ba18f)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8300] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8303] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8304] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8306] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8309] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <warn>  [1769832843.8309] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8313] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8317] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (bc5c245b-5327-4535-ae16-2b6d30b452aa)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8318] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8321] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8323] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8324] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8326] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <warn>  [1769832843.8327] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8329] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8334] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (e6abfa74-f68a-4442-92bd-554a59c198fe)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8334] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8337] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8338] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8339] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8340] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8351] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv6.method,ipv6.addr-gen-mode,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51876 uid=0 result="success"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8353] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8355] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8357] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8362] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8365] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8369] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8372] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8374] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8378] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8381] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 kernel: ovs-system: entered promiscuous mode
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8384] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8386] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8389] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8391] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8393] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8394] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8398] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8403] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8406] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8408] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 kernel: Timeout policy base is empty
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8413] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8416] dhcp4 (eth0): canceled DHCP transaction
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8416] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8416] dhcp4 (eth0): state changed no lease
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8417] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 30 23:14:03 np0005603435 systemd-udevd[51881]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8426] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8434] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51876 uid=0 result="fail" reason="Device is not activated"
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8437] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 30 23:14:03 np0005603435 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 30 23:14:03 np0005603435 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 30 23:14:03 np0005603435 kernel: br-ex: entered promiscuous mode
Jan 30 23:14:03 np0005603435 kernel: vlan20: entered promiscuous mode
Jan 30 23:14:03 np0005603435 systemd-udevd[51882]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:14:03 np0005603435 kernel: vlan21: entered promiscuous mode
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8969] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8982] dhcp4 (eth0): state changed new lease, address=38.102.83.94
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.8996] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.9002] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 30 23:14:03 np0005603435 NetworkManager[49097]: <info>  [1769832843.9011] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 30 23:14:03 np0005603435 kernel: vlan22: entered promiscuous mode
Jan 30 23:14:03 np0005603435 kernel: vlan23: entered promiscuous mode
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0095] device (eth1): disconnecting for new activation request.
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0096] audit: op="connection-activate" uuid="037fc6a4-a42b-56fb-be9d-3251f9098a4b" name="ci-private-network" pid=51876 uid=0 result="success"
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0096] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0220] device (eth1): Activation: starting connection 'ci-private-network' (037fc6a4-a42b-56fb-be9d-3251f9098a4b)
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0225] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0229] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0230] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0232] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0234] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0236] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0238] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0262] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0274] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0282] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0284] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0289] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0299] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0307] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0315] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0319] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0322] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0327] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0335] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0341] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0346] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0351] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0356] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0362] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0366] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0372] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0380] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0380] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51876 uid=0 result="success"
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0396] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0407] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0434] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0437] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0468] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0476] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0490] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0509] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0517] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0524] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0527] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0536] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0537] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0538] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0541] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0553] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0565] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0574] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0583] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0589] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0602] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0609] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0615] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0623] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0635] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 30 23:14:04 np0005603435 NetworkManager[49097]: <info>  [1769832844.0642] device (eth1): Activation: successful, device activated.
Jan 30 23:14:05 np0005603435 NetworkManager[49097]: <info>  [1769832845.2484] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51876 uid=0 result="success"
Jan 30 23:14:05 np0005603435 python3.9[52233]: ansible-ansible.legacy.async_status Invoked with jid=j766363219418.51870 mode=status _async_dir=/root/.ansible_async
Jan 30 23:14:05 np0005603435 NetworkManager[49097]: <info>  [1769832845.4913] checkpoint[0x55f84ee40950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 30 23:14:05 np0005603435 NetworkManager[49097]: <info>  [1769832845.4916] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51876 uid=0 result="success"
Jan 30 23:14:06 np0005603435 NetworkManager[49097]: <info>  [1769832846.0925] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51876 uid=0 result="success"
Jan 30 23:14:06 np0005603435 NetworkManager[49097]: <info>  [1769832846.0946] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51876 uid=0 result="success"
Jan 30 23:14:06 np0005603435 NetworkManager[49097]: <info>  [1769832846.4817] audit: op="networking-control" arg="global-dns-configuration" pid=51876 uid=0 result="success"
Jan 30 23:14:06 np0005603435 NetworkManager[49097]: <info>  [1769832846.4863] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 30 23:14:06 np0005603435 NetworkManager[49097]: <info>  [1769832846.4911] audit: op="networking-control" arg="global-dns-configuration" pid=51876 uid=0 result="success"
Jan 30 23:14:06 np0005603435 NetworkManager[49097]: <info>  [1769832846.4950] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51876 uid=0 result="success"
Jan 30 23:14:06 np0005603435 ansible-async_wrapper.py[51873]: 51874 still running (300)
Jan 30 23:14:06 np0005603435 NetworkManager[49097]: <info>  [1769832846.7061] checkpoint[0x55f84ee40a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 30 23:14:06 np0005603435 NetworkManager[49097]: <info>  [1769832846.7071] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51876 uid=0 result="success"
Jan 30 23:14:06 np0005603435 ansible-async_wrapper.py[51874]: Module complete (51874)
Jan 30 23:14:08 np0005603435 python3.9[52339]: ansible-ansible.legacy.async_status Invoked with jid=j766363219418.51870 mode=status _async_dir=/root/.ansible_async
Jan 30 23:14:09 np0005603435 python3.9[52439]: ansible-ansible.legacy.async_status Invoked with jid=j766363219418.51870 mode=cleanup _async_dir=/root/.ansible_async
Jan 30 23:14:10 np0005603435 python3.9[52591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:14:10 np0005603435 python3.9[52714]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769832849.780288-317-184196661056965/.source.returncode _original_basename=.wxx8a_u_ follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:14:11 np0005603435 python3.9[52866]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:14:11 np0005603435 ansible-async_wrapper.py[51873]: Done in kid B.
Jan 30 23:14:12 np0005603435 python3.9[52989]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769832851.0620952-333-123938358300948/.source.cfg _original_basename=.72q2zcy6 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:14:12 np0005603435 python3.9[53142]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:14:13 np0005603435 systemd[1]: Reloading Network Manager...
Jan 30 23:14:13 np0005603435 NetworkManager[49097]: <info>  [1769832853.0527] audit: op="reload" arg="0" pid=53146 uid=0 result="success"
Jan 30 23:14:13 np0005603435 NetworkManager[49097]: <info>  [1769832853.0540] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 30 23:14:13 np0005603435 systemd[1]: Reloaded Network Manager.
Jan 30 23:14:13 np0005603435 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 23:14:13 np0005603435 systemd[1]: session-10.scope: Consumed 46.334s CPU time.
Jan 30 23:14:13 np0005603435 systemd-logind[816]: Session 10 logged out. Waiting for processes to exit.
Jan 30 23:14:13 np0005603435 systemd-logind[816]: Removed session 10.
Jan 30 23:14:14 np0005603435 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 30 23:14:19 np0005603435 systemd-logind[816]: New session 11 of user zuul.
Jan 30 23:14:19 np0005603435 systemd[1]: Started Session 11 of User zuul.
Jan 30 23:14:20 np0005603435 python3.9[53332]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:14:21 np0005603435 python3.9[53486]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:14:22 np0005603435 python3.9[53680]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:14:22 np0005603435 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 23:14:22 np0005603435 systemd[1]: session-11.scope: Consumed 2.130s CPU time.
Jan 30 23:14:22 np0005603435 systemd-logind[816]: Session 11 logged out. Waiting for processes to exit.
Jan 30 23:14:22 np0005603435 systemd-logind[816]: Removed session 11.
Jan 30 23:14:23 np0005603435 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 30 23:14:28 np0005603435 systemd-logind[816]: New session 12 of user zuul.
Jan 30 23:14:28 np0005603435 systemd[1]: Started Session 12 of User zuul.
Jan 30 23:14:29 np0005603435 python3.9[53862]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:14:30 np0005603435 python3.9[54016]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:14:31 np0005603435 python3.9[54172]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:14:32 np0005603435 python3.9[54257]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:14:34 np0005603435 python3.9[54410]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:14:35 np0005603435 python3.9[54606]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:14:35 np0005603435 python3.9[54758]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:14:35 np0005603435 systemd[1]: var-lib-containers-storage-overlay-compat1955905876-merged.mount: Deactivated successfully.
Jan 30 23:14:36 np0005603435 podman[54759]: 2026-01-31 04:14:36.008096547 +0000 UTC m=+0.053528911 system refresh
Jan 30 23:14:36 np0005603435 python3.9[54922]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:14:36 np0005603435 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 30 23:14:37 np0005603435 python3.9[55045]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769832876.205244-74-263145826280647/.source.json follow=False _original_basename=podman_network_config.j2 checksum=dddd8ea8184e3e034066516739798fb7a43ee183 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:14:38 np0005603435 python3.9[55197]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:14:38 np0005603435 python3.9[55320]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769832877.8214242-89-266216916423156/.source.conf follow=False _original_basename=registries.conf.j2 checksum=ead0efa6afadd34d101c04e43c51eff468b95c8b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:14:39 np0005603435 python3.9[55472]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:14:39 np0005603435 python3.9[55624]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:14:40 np0005603435 python3.9[55776]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:14:41 np0005603435 python3.9[55928]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:14:41 np0005603435 python3.9[56080]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:14:43 np0005603435 python3.9[56233]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:14:44 np0005603435 python3.9[56387]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:14:45 np0005603435 python3.9[56539]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:14:45 np0005603435 python3.9[56691]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:14:46 np0005603435 python3.9[56844]: ansible-service_facts Invoked
Jan 30 23:14:46 np0005603435 network[56861]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 30 23:14:46 np0005603435 network[56862]: 'network-scripts' will be removed from distribution in near future.
Jan 30 23:14:46 np0005603435 network[56863]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 30 23:14:51 np0005603435 python3.9[57315]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:14:54 np0005603435 python3.9[57468]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 30 23:14:55 np0005603435 python3.9[57620]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:14:55 np0005603435 python3.9[57745]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769832894.7509592-233-137296284290079/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:14:56 np0005603435 python3.9[57899]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:14:57 np0005603435 python3.9[58024]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769832895.9857347-248-184410208146623/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:14:58 np0005603435 python3.9[58178]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:14:59 np0005603435 python3.9[58332]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:15:00 np0005603435 python3.9[58416]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:15:01 np0005603435 python3.9[58570]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:15:01 np0005603435 python3.9[58654]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:15:01 np0005603435 chronyd[794]: chronyd exiting
Jan 30 23:15:01 np0005603435 systemd[1]: Stopping NTP client/server...
Jan 30 23:15:01 np0005603435 systemd[1]: chronyd.service: Deactivated successfully.
Jan 30 23:15:01 np0005603435 systemd[1]: Stopped NTP client/server.
Jan 30 23:15:01 np0005603435 systemd[1]: Starting NTP client/server...
Jan 30 23:15:01 np0005603435 chronyd[58662]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 30 23:15:01 np0005603435 chronyd[58662]: Frequency -24.480 +/- 0.084 ppm read from /var/lib/chrony/drift
Jan 30 23:15:01 np0005603435 chronyd[58662]: Loaded seccomp filter (level 2)
Jan 30 23:15:01 np0005603435 systemd[1]: Started NTP client/server.
Jan 30 23:15:02 np0005603435 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 23:15:02 np0005603435 systemd[1]: session-12.scope: Consumed 22.950s CPU time.
Jan 30 23:15:02 np0005603435 systemd-logind[816]: Session 12 logged out. Waiting for processes to exit.
Jan 30 23:15:02 np0005603435 systemd-logind[816]: Removed session 12.
Jan 30 23:15:07 np0005603435 systemd-logind[816]: New session 13 of user zuul.
Jan 30 23:15:07 np0005603435 systemd[1]: Started Session 13 of User zuul.
Jan 30 23:15:08 np0005603435 python3.9[58843]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:09 np0005603435 python3.9[58995]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:09 np0005603435 python3.9[59118]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769832908.7165349-29-51885378974732/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:10 np0005603435 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 23:15:10 np0005603435 systemd[1]: session-13.scope: Consumed 1.560s CPU time.
Jan 30 23:15:10 np0005603435 systemd-logind[816]: Session 13 logged out. Waiting for processes to exit.
Jan 30 23:15:10 np0005603435 systemd-logind[816]: Removed session 13.
Jan 30 23:15:16 np0005603435 systemd-logind[816]: New session 14 of user zuul.
Jan 30 23:15:16 np0005603435 systemd[1]: Started Session 14 of User zuul.
Jan 30 23:15:17 np0005603435 python3.9[59296]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:15:19 np0005603435 python3.9[59452]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:19 np0005603435 python3.9[59627]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:20 np0005603435 python3.9[59750]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769832919.2198515-36-9428796752112/.source.json _original_basename=.rinbc0l5 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:21 np0005603435 python3.9[59902]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:22 np0005603435 python3.9[60025]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769832921.01327-59-8144405057104/.source _original_basename=.d1hccn1r follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:22 np0005603435 python3.9[60177]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:15:23 np0005603435 python3.9[60329]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:23 np0005603435 python3.9[60452]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769832922.863202-83-117026136363576/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:15:24 np0005603435 python3.9[60604]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:25 np0005603435 python3.9[60727]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769832924.0136385-83-137800113342441/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:15:25 np0005603435 python3.9[60879]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:26 np0005603435 python3.9[61031]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:26 np0005603435 python3.9[61154]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769832925.7368402-120-255574667708055/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:27 np0005603435 python3.9[61306]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:27 np0005603435 python3.9[61429]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769832926.8400128-135-90431841277257/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:28 np0005603435 python3.9[61581]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:15:28 np0005603435 systemd[1]: Reloading.
Jan 30 23:15:28 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:15:28 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:15:29 np0005603435 systemd[1]: Reloading.
Jan 30 23:15:29 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:15:29 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:15:29 np0005603435 systemd[1]: Starting EDPM Container Shutdown...
Jan 30 23:15:29 np0005603435 systemd[1]: Finished EDPM Container Shutdown.
Jan 30 23:15:29 np0005603435 python3.9[61808]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:30 np0005603435 python3.9[61931]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769832929.3348393-158-26808474239461/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:30 np0005603435 python3.9[62083]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:31 np0005603435 python3.9[62206]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769832930.2714462-173-152572165940845/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:31 np0005603435 python3.9[62358]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:15:31 np0005603435 systemd[1]: Reloading.
Jan 30 23:15:31 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:15:31 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:15:32 np0005603435 systemd[1]: Reloading.
Jan 30 23:15:32 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:15:32 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:15:32 np0005603435 systemd[1]: Starting Create netns directory...
Jan 30 23:15:32 np0005603435 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 30 23:15:32 np0005603435 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 30 23:15:32 np0005603435 systemd[1]: Finished Create netns directory.
Jan 30 23:15:33 np0005603435 python3.9[62584]: ansible-ansible.builtin.service_facts Invoked
Jan 30 23:15:33 np0005603435 network[62601]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 30 23:15:33 np0005603435 network[62602]: 'network-scripts' will be removed from distribution in near future.
Jan 30 23:15:33 np0005603435 network[62603]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 30 23:15:36 np0005603435 python3.9[62865]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:15:36 np0005603435 systemd[1]: Reloading.
Jan 30 23:15:36 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:15:36 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:15:36 np0005603435 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 30 23:15:36 np0005603435 iptables.init[62904]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 30 23:15:36 np0005603435 iptables.init[62904]: iptables: Flushing firewall rules: [  OK  ]
Jan 30 23:15:36 np0005603435 systemd[1]: iptables.service: Deactivated successfully.
Jan 30 23:15:36 np0005603435 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 30 23:15:37 np0005603435 python3.9[63101]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:15:38 np0005603435 python3.9[63255]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:15:38 np0005603435 systemd[1]: Reloading.
Jan 30 23:15:38 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:15:38 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:15:38 np0005603435 systemd[1]: Starting Netfilter Tables...
Jan 30 23:15:38 np0005603435 systemd[1]: Finished Netfilter Tables.
Jan 30 23:15:39 np0005603435 python3.9[63448]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:15:40 np0005603435 python3.9[63601]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:40 np0005603435 python3.9[63726]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769832939.9124084-242-187572402107257/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:41 np0005603435 python3.9[63879]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:15:41 np0005603435 systemd[1]: Reloading OpenSSH server daemon...
Jan 30 23:15:41 np0005603435 systemd[1]: Reloaded OpenSSH server daemon.
Jan 30 23:15:42 np0005603435 python3.9[64035]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:42 np0005603435 python3.9[64187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:43 np0005603435 python3.9[64310]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769832942.3603728-273-92735027007007/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:44 np0005603435 python3.9[64464]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 30 23:15:44 np0005603435 systemd[1]: Starting Time & Date Service...
Jan 30 23:15:44 np0005603435 systemd[1]: Started Time & Date Service.
Jan 30 23:15:45 np0005603435 python3.9[64620]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:45 np0005603435 python3.9[64772]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:46 np0005603435 python3.9[64895]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769832945.4313369-308-72684456949172/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:47 np0005603435 python3.9[65047]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:47 np0005603435 python3.9[65170]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769832946.5219405-323-121435723796756/.source.yaml _original_basename=.r6v7_i6n follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:48 np0005603435 python3.9[65322]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:48 np0005603435 python3.9[65445]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769832947.722147-338-185058853395924/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:49 np0005603435 python3.9[65597]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:15:49 np0005603435 python3.9[65750]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:15:50 np0005603435 python3[65903]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 30 23:15:51 np0005603435 python3.9[66055]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:51 np0005603435 python3.9[66178]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769832950.71671-377-248801902749544/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:52 np0005603435 python3.9[66330]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:52 np0005603435 python3.9[66453]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769832951.9119878-392-37942472326198/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:53 np0005603435 python3.9[66605]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:54 np0005603435 python3.9[66728]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769832953.1519842-407-152272641943317/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:54 np0005603435 python3.9[66880]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:55 np0005603435 python3.9[67003]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769832954.403109-422-174807964329910/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:56 np0005603435 python3.9[67155]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:15:56 np0005603435 python3.9[67278]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769832955.6088681-437-6205336703333/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:57 np0005603435 python3.9[67430]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:57 np0005603435 python3.9[67582]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:15:58 np0005603435 python3.9[67741]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:15:59 np0005603435 python3.9[67894]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:16:00 np0005603435 python3.9[68046]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:16:00 np0005603435 python3.9[68198]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 30 23:16:01 np0005603435 python3.9[68351]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 30 23:16:01 np0005603435 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 23:16:01 np0005603435 systemd[1]: session-14.scope: Consumed 30.817s CPU time.
Jan 30 23:16:01 np0005603435 systemd-logind[816]: Session 14 logged out. Waiting for processes to exit.
Jan 30 23:16:01 np0005603435 systemd-logind[816]: Removed session 14.
Jan 30 23:16:07 np0005603435 systemd-logind[816]: New session 15 of user zuul.
Jan 30 23:16:07 np0005603435 systemd[1]: Started Session 15 of User zuul.
Jan 30 23:16:08 np0005603435 python3.9[68532]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 30 23:16:09 np0005603435 python3.9[68684]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:16:10 np0005603435 python3.9[68836]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:16:11 np0005603435 python3.9[68988]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGQxtpB4QkMB44gxnODjQJf9hqMcT11PupnYsKJqkL8KIwvW19mR2t3GusmAh3ls8s+Uvrf90eL7UCOkPryyfFZVoca6HEM751NZGlOPXAbYwd9N7xdXlNQcNKL6/NhkELWoQEY6FbeJtIGFuztlL8BBujH35ykR+nU2f8LJ6n4H9iFBiUKmR3cL27BiShT4M5XoWXWk6WQUKtfLJyHDlO22e3wM2s46EdwlHCjO9G31+ZC5Syyo+J9j5kKEF/Ni6bf85LP9LNXQA/fF0L4pParenf2GP5UbqidnkBelmmZTKPHmP/7gqCiVeDUd9TSxDHaRzCBlpZMVF5Q+Ymd7yJm0762FpwIxJmXKLNn6d/feS78rtrJ6ddNsUiNL81zuzG+vG+2rXKBk1iBhgqH3emnKhu6K3zNjHI37M45ZRECiP3d+MScncE7gVh4yH/DuENZqBQbnyg5659pVzK7cmo2PzlorptrAUOTerzpH/1GouAvw2KU7VUwYZ1eM17k+U=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMpBzDyT3QqEMyHu/pbcKb4cYXF9Jqh9RqwzOHUt0qjr#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNGxfOuKWJoWAkU0LFOcNkfeFOZ36yy4OzL9FzbJ3Q0W0SWhgpdh4a7FHRJ8jpW4ccTddKCeMEgfFAyomIrJU4Q=#012 create=True mode=0644 path=/tmp/ansible.ym1ggnpx state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:16:11 np0005603435 python3.9[69140]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.ym1ggnpx' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:16:12 np0005603435 python3.9[69294]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.ym1ggnpx state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:16:12 np0005603435 systemd-logind[816]: Session 15 logged out. Waiting for processes to exit.
Jan 30 23:16:12 np0005603435 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 23:16:12 np0005603435 systemd[1]: session-15.scope: Consumed 2.995s CPU time.
Jan 30 23:16:12 np0005603435 systemd-logind[816]: Removed session 15.
Jan 30 23:16:14 np0005603435 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 30 23:16:18 np0005603435 systemd-logind[816]: New session 16 of user zuul.
Jan 30 23:16:18 np0005603435 systemd[1]: Started Session 16 of User zuul.
Jan 30 23:16:19 np0005603435 python3.9[69474]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:16:20 np0005603435 python3.9[69630]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 30 23:16:21 np0005603435 python3.9[69784]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:16:22 np0005603435 python3.9[69937]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:16:23 np0005603435 python3.9[70090]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:16:23 np0005603435 python3.9[70244]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:16:24 np0005603435 python3.9[70400]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:16:25 np0005603435 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 23:16:25 np0005603435 systemd[1]: session-16.scope: Consumed 4.086s CPU time.
Jan 30 23:16:25 np0005603435 systemd-logind[816]: Session 16 logged out. Waiting for processes to exit.
Jan 30 23:16:25 np0005603435 systemd-logind[816]: Removed session 16.
Jan 30 23:16:30 np0005603435 systemd-logind[816]: New session 17 of user zuul.
Jan 30 23:16:30 np0005603435 systemd[1]: Started Session 17 of User zuul.
Jan 30 23:16:31 np0005603435 python3.9[70578]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:16:32 np0005603435 python3.9[70734]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:16:33 np0005603435 python3.9[70818]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 30 23:16:35 np0005603435 python3.9[70969]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:16:36 np0005603435 python3.9[71120]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 30 23:16:37 np0005603435 python3.9[71270]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:16:37 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 30 23:16:38 np0005603435 python3.9[71421]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:16:38 np0005603435 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 23:16:38 np0005603435 systemd[1]: session-17.scope: Consumed 5.549s CPU time.
Jan 30 23:16:38 np0005603435 systemd-logind[816]: Session 17 logged out. Waiting for processes to exit.
Jan 30 23:16:38 np0005603435 systemd-logind[816]: Removed session 17.
Jan 30 23:16:46 np0005603435 systemd-logind[816]: New session 18 of user zuul.
Jan 30 23:16:46 np0005603435 systemd[1]: Started Session 18 of User zuul.
Jan 30 23:16:51 np0005603435 python3[72187]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:16:53 np0005603435 python3[72282]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 30 23:16:54 np0005603435 python3[72309]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 30 23:16:54 np0005603435 python3[72335]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:16:54 np0005603435 kernel: loop: module loaded
Jan 30 23:16:54 np0005603435 kernel: loop3: detected capacity change from 0 to 41943040
Jan 30 23:16:55 np0005603435 python3[72370]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:16:55 np0005603435 lvm[72373]: PV /dev/loop3 not used.
Jan 30 23:16:55 np0005603435 lvm[72375]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:16:55 np0005603435 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 30 23:16:55 np0005603435 lvm[72378]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 30 23:16:55 np0005603435 lvm[72385]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:16:55 np0005603435 lvm[72385]: VG ceph_vg0 finished
Jan 30 23:16:55 np0005603435 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 30 23:16:56 np0005603435 python3[72463]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 23:16:56 np0005603435 python3[72536]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769833015.7113397-36439-151950831189024/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:16:56 np0005603435 python3[72586]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:16:57 np0005603435 systemd[1]: Reloading.
Jan 30 23:16:57 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:16:57 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:16:57 np0005603435 systemd[1]: Starting Ceph OSD losetup...
Jan 30 23:16:57 np0005603435 bash[72627]: /dev/loop3: [64513]:4194935 (/var/lib/ceph-osd-0.img)
Jan 30 23:16:57 np0005603435 systemd[1]: Finished Ceph OSD losetup.
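The `bash` line's `/dev/loop3: [64513]:4194935 (...)` is losetup's classic status format: the bracketed number appears to be the device ID (`st_dev`) of the filesystem holding the backing file, and the second number the backing file's inode. The same pair can be read back with `os.stat`; the temporary file here is only an illustration, not the image from the log:

```python
import os
import tempfile

# Sketch: reproduce losetup's "[<dev>]:<inode> (<file>)" fields via stat().
with tempfile.NamedTemporaryFile() as f:
    st = os.stat(f.name)
    print(f"{f.name}: [{st.st_dev}]:{st.st_ino}")
```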
Jan 30 23:16:57 np0005603435 lvm[72628]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:16:57 np0005603435 lvm[72628]: VG ceph_vg0 finished
Jan 30 23:16:57 np0005603435 python3[72654]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 30 23:16:59 np0005603435 python3[72681]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 30 23:16:59 np0005603435 python3[72707]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:16:59 np0005603435 kernel: loop4: detected capacity change from 0 to 41943040
Jan 30 23:16:59 np0005603435 python3[72739]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:17:00 np0005603435 lvm[72742]: PV /dev/loop4 not used.
Jan 30 23:17:00 np0005603435 lvm[72744]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:17:00 np0005603435 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Jan 30 23:17:00 np0005603435 lvm[72754]:  1 logical volume(s) in volume group "ceph_vg1" now active
Jan 30 23:17:00 np0005603435 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Jan 30 23:17:00 np0005603435 python3[72832]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 23:17:00 np0005603435 python3[72905]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769833020.3130805-36466-219604081126834/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:17:01 np0005603435 python3[72955]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:17:01 np0005603435 systemd[1]: Reloading.
Jan 30 23:17:01 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:17:01 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:17:01 np0005603435 systemd[1]: Starting Ceph OSD losetup...
Jan 30 23:17:01 np0005603435 bash[72994]: /dev/loop4: [64513]:4329562 (/var/lib/ceph-osd-1.img)
Jan 30 23:17:01 np0005603435 systemd[1]: Finished Ceph OSD losetup.
Jan 30 23:17:01 np0005603435 lvm[72995]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:17:01 np0005603435 lvm[72995]: VG ceph_vg1 finished
Jan 30 23:17:02 np0005603435 python3[73021]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 30 23:17:03 np0005603435 python3[73048]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 30 23:17:03 np0005603435 python3[73074]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:17:03 np0005603435 kernel: loop5: detected capacity change from 0 to 41943040
Jan 30 23:17:04 np0005603435 python3[73106]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:17:04 np0005603435 lvm[73109]: PV /dev/loop5 not used.
Jan 30 23:17:04 np0005603435 lvm[73111]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:17:04 np0005603435 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Jan 30 23:17:04 np0005603435 lvm[73115]:  1 logical volume(s) in volume group "ceph_vg2" now active
Jan 30 23:17:04 np0005603435 lvm[73121]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:17:04 np0005603435 lvm[73121]: VG ceph_vg2 finished
Jan 30 23:17:04 np0005603435 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Jan 30 23:17:04 np0005603435 python3[73199]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 23:17:05 np0005603435 python3[73272]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769833024.6429632-36493-250788348219872/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:17:05 np0005603435 python3[73322]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:17:05 np0005603435 systemd[1]: Reloading.
Jan 30 23:17:06 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:17:06 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:17:06 np0005603435 systemd[1]: Starting Ceph OSD losetup...
Jan 30 23:17:06 np0005603435 bash[73362]: /dev/loop5: [64513]:4355754 (/var/lib/ceph-osd-2.img)
Jan 30 23:17:06 np0005603435 systemd[1]: Finished Ceph OSD losetup.
Jan 30 23:17:06 np0005603435 lvm[73363]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:17:06 np0005603435 lvm[73363]: VG ceph_vg2 finished
Jan 30 23:17:08 np0005603435 python3[73387]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:17:10 np0005603435 python3[73480]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 30 23:17:12 np0005603435 chronyd[58662]: Selected source 216.232.132.102 (pool.ntp.org)
Jan 30 23:17:12 np0005603435 python3[73537]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 30 23:17:15 np0005603435 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 30 23:17:15 np0005603435 systemd[1]: Starting man-db-cache-update.service...
Jan 30 23:17:16 np0005603435 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 30 23:17:16 np0005603435 systemd[1]: Finished man-db-cache-update.service.
Jan 30 23:17:16 np0005603435 systemd[1]: run-r63a3b55749b644b981f21546e8bc3c21.service: Deactivated successfully.
Jan 30 23:17:16 np0005603435 python3[73655]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 30 23:17:16 np0005603435 python3[73684]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:17:16 np0005603435 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 30 23:17:17 np0005603435 python3[73722]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:17:17 np0005603435 python3[73748]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:17:18 np0005603435 python3[73826]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 23:17:18 np0005603435 python3[73899]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769833038.0679471-36643-261968709109649/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:17:19 np0005603435 python3[74001]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 23:17:19 np0005603435 python3[74074]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769833039.0281434-36661-155938200093805/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:17:19 np0005603435 python3[74124]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 30 23:17:20 np0005603435 python3[74152]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 30 23:17:20 np0005603435 python3[74180]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 30 23:17:20 np0005603435 python3[74206]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 30 23:17:21 np0005603435 python3[74232]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
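The stray `\--single-host-defaults` / `\--skip-monitoring-stack` backslashes look like line-continuation characters carried through from the playbook template, but they are harmless: the task runs with `_uses_shell=True`, and a POSIX shell treats backslash as an escape, so `\--flag` collapses to `--flag`. `shlex` mimics that quoting, which makes the effect easy to verify on a hypothetical excerpt of the command:

```python
import shlex

# Hypothetical excerpt of the bootstrap invocation, backslashes preserved
# as they appear in the log:
cmd = (r"/usr/sbin/cephadm bootstrap --skip-firewalld "
       r"\--single-host-defaults \--skip-monitoring-stack "
       r"--mon-ip 192.168.122.100")
print(shlex.split(cmd))  # the backslashes vanish during tokenization
```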
Jan 30 23:17:21 np0005603435 systemd[1]: Created slice User Slice of UID 42477.
Jan 30 23:17:21 np0005603435 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 30 23:17:21 np0005603435 systemd-logind[816]: New session 19 of user ceph-admin.
Jan 30 23:17:21 np0005603435 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 30 23:17:21 np0005603435 systemd[1]: Starting User Manager for UID 42477...
Jan 30 23:17:21 np0005603435 systemd[74240]: Queued start job for default target Main User Target.
Jan 30 23:17:21 np0005603435 systemd[74240]: Created slice User Application Slice.
Jan 30 23:17:21 np0005603435 systemd[74240]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 30 23:17:21 np0005603435 systemd[74240]: Started Daily Cleanup of User's Temporary Directories.
Jan 30 23:17:21 np0005603435 systemd[74240]: Reached target Paths.
Jan 30 23:17:21 np0005603435 systemd[74240]: Reached target Timers.
Jan 30 23:17:21 np0005603435 systemd[74240]: Starting D-Bus User Message Bus Socket...
Jan 30 23:17:21 np0005603435 systemd[74240]: Starting Create User's Volatile Files and Directories...
Jan 30 23:17:21 np0005603435 systemd[74240]: Finished Create User's Volatile Files and Directories.
Jan 30 23:17:21 np0005603435 systemd[74240]: Listening on D-Bus User Message Bus Socket.
Jan 30 23:17:21 np0005603435 systemd[74240]: Reached target Sockets.
Jan 30 23:17:21 np0005603435 systemd[74240]: Reached target Basic System.
Jan 30 23:17:21 np0005603435 systemd[74240]: Reached target Main User Target.
Jan 30 23:17:21 np0005603435 systemd[74240]: Startup finished in 100ms.
Jan 30 23:17:21 np0005603435 systemd[1]: Started User Manager for UID 42477.
Jan 30 23:17:21 np0005603435 systemd[1]: Started Session 19 of User ceph-admin.
Jan 30 23:17:21 np0005603435 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 23:17:21 np0005603435 systemd-logind[816]: Session 19 logged out. Waiting for processes to exit.
Jan 30 23:17:21 np0005603435 systemd-logind[816]: Removed session 19.
Jan 30 23:17:21 np0005603435 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 30 23:17:21 np0005603435 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 30 23:17:24 np0005603435 systemd[1]: var-lib-containers-storage-overlay-compat2803221180-lower\x2dmapped.mount: Deactivated successfully.
Jan 30 23:17:31 np0005603435 systemd[1]: Stopping User Manager for UID 42477...
Jan 30 23:17:31 np0005603435 systemd[74240]: Activating special unit Exit the Session...
Jan 30 23:17:31 np0005603435 systemd[74240]: Stopped target Main User Target.
Jan 30 23:17:31 np0005603435 systemd[74240]: Stopped target Basic System.
Jan 30 23:17:31 np0005603435 systemd[74240]: Stopped target Paths.
Jan 30 23:17:31 np0005603435 systemd[74240]: Stopped target Sockets.
Jan 30 23:17:31 np0005603435 systemd[74240]: Stopped target Timers.
Jan 30 23:17:31 np0005603435 systemd[74240]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 30 23:17:31 np0005603435 systemd[74240]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 30 23:17:31 np0005603435 systemd[74240]: Closed D-Bus User Message Bus Socket.
Jan 30 23:17:31 np0005603435 systemd[74240]: Stopped Create User's Volatile Files and Directories.
Jan 30 23:17:31 np0005603435 systemd[74240]: Removed slice User Application Slice.
Jan 30 23:17:31 np0005603435 systemd[74240]: Reached target Shutdown.
Jan 30 23:17:31 np0005603435 systemd[74240]: Finished Exit the Session.
Jan 30 23:17:31 np0005603435 systemd[74240]: Reached target Exit the Session.
Jan 30 23:17:31 np0005603435 systemd[1]: user@42477.service: Deactivated successfully.
Jan 30 23:17:31 np0005603435 systemd[1]: Stopped User Manager for UID 42477.
Jan 30 23:17:31 np0005603435 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 30 23:17:31 np0005603435 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 30 23:17:31 np0005603435 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 30 23:17:31 np0005603435 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 30 23:17:31 np0005603435 systemd[1]: Removed slice User Slice of UID 42477.
Jan 30 23:17:40 np0005603435 podman[74334]: 2026-01-31 04:17:40.810558479 +0000 UTC m=+18.879151372 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
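Note the two clocks in this line: journald stamps entries in host local time (Jan 30 23:17:40) while podman prints UTC (2026-01-31 04:17:40), a five-hour skew consistent with a UTC-5 host zone. The offset is inferred from the skew, not stated in the log; the conversion checks out:

```python
from datetime import datetime, timedelta, timezone

host_tz = timezone(timedelta(hours=-5))  # assumed from the 5h skew
journald_stamp = datetime(2026, 1, 30, 23, 17, 40, tzinfo=host_tz)
print(journald_stamp.astimezone(timezone.utc).isoformat())
# matches podman's 2026-01-31 04:17:40 UTC
```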
Jan 30 23:17:40 np0005603435 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 30 23:17:40 np0005603435 podman[74426]: 2026-01-31 04:17:40.910712861 +0000 UTC m=+0.071554303 container create e32c0e11631df53a195c9cf6a8495c9d2a830819a765a50946df0fc5d15e06e8 (image=quay.io/ceph/ceph:v20, name=frosty_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:17:40 np0005603435 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 30 23:17:40 np0005603435 systemd[1]: Started libpod-conmon-e32c0e11631df53a195c9cf6a8495c9d2a830819a765a50946df0fc5d15e06e8.scope.
Jan 30 23:17:40 np0005603435 podman[74426]: 2026-01-31 04:17:40.876483093 +0000 UTC m=+0.037324545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:40 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:41 np0005603435 podman[74426]: 2026-01-31 04:17:41.057021253 +0000 UTC m=+0.217862655 container init e32c0e11631df53a195c9cf6a8495c9d2a830819a765a50946df0fc5d15e06e8 (image=quay.io/ceph/ceph:v20, name=frosty_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:17:41 np0005603435 podman[74426]: 2026-01-31 04:17:41.066672489 +0000 UTC m=+0.227513931 container start e32c0e11631df53a195c9cf6a8495c9d2a830819a765a50946df0fc5d15e06e8 (image=quay.io/ceph/ceph:v20, name=frosty_gagarin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:17:41 np0005603435 podman[74426]: 2026-01-31 04:17:41.074398318 +0000 UTC m=+0.235239710 container attach e32c0e11631df53a195c9cf6a8495c9d2a830819a765a50946df0fc5d15e06e8 (image=quay.io/ceph/ceph:v20, name=frosty_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 30 23:17:41 np0005603435 frosty_gagarin[74442]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 30 23:17:41 np0005603435 systemd[1]: libpod-e32c0e11631df53a195c9cf6a8495c9d2a830819a765a50946df0fc5d15e06e8.scope: Deactivated successfully.
Jan 30 23:17:41 np0005603435 podman[74426]: 2026-01-31 04:17:41.195032821 +0000 UTC m=+0.355874263 container died e32c0e11631df53a195c9cf6a8495c9d2a830819a765a50946df0fc5d15e06e8 (image=quay.io/ceph/ceph:v20, name=frosty_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:17:41 np0005603435 systemd[1]: var-lib-containers-storage-overlay-764d7c127a1bbca256b68c9f19bc20b8a627db85c3488286b418d0c6da92c61e-merged.mount: Deactivated successfully.
Jan 30 23:17:41 np0005603435 podman[74426]: 2026-01-31 04:17:41.284783258 +0000 UTC m=+0.445624700 container remove e32c0e11631df53a195c9cf6a8495c9d2a830819a765a50946df0fc5d15e06e8 (image=quay.io/ceph/ceph:v20, name=frosty_gagarin, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 30 23:17:41 np0005603435 systemd[1]: libpod-conmon-e32c0e11631df53a195c9cf6a8495c9d2a830819a765a50946df0fc5d15e06e8.scope: Deactivated successfully.
Jan 30 23:17:41 np0005603435 podman[74460]: 2026-01-31 04:17:41.368713333 +0000 UTC m=+0.056156826 container create 95974820a544ad47a2afbb03280b4c6bd2737f87682e2747273e8c0d6454a9e5 (image=quay.io/ceph/ceph:v20, name=nervous_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:17:41 np0005603435 systemd[1]: Started libpod-conmon-95974820a544ad47a2afbb03280b4c6bd2737f87682e2747273e8c0d6454a9e5.scope.
Jan 30 23:17:41 np0005603435 podman[74460]: 2026-01-31 04:17:41.336032033 +0000 UTC m=+0.023475526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:41 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:41 np0005603435 podman[74460]: 2026-01-31 04:17:41.484333043 +0000 UTC m=+0.171776586 container init 95974820a544ad47a2afbb03280b4c6bd2737f87682e2747273e8c0d6454a9e5 (image=quay.io/ceph/ceph:v20, name=nervous_sinoussi, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:17:41 np0005603435 podman[74460]: 2026-01-31 04:17:41.491607941 +0000 UTC m=+0.179051434 container start 95974820a544ad47a2afbb03280b4c6bd2737f87682e2747273e8c0d6454a9e5 (image=quay.io/ceph/ceph:v20, name=nervous_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 30 23:17:41 np0005603435 nervous_sinoussi[74476]: 167 167
Jan 30 23:17:41 np0005603435 systemd[1]: libpod-95974820a544ad47a2afbb03280b4c6bd2737f87682e2747273e8c0d6454a9e5.scope: Deactivated successfully.
Jan 30 23:17:41 np0005603435 podman[74460]: 2026-01-31 04:17:41.49767819 +0000 UTC m=+0.185121743 container attach 95974820a544ad47a2afbb03280b4c6bd2737f87682e2747273e8c0d6454a9e5 (image=quay.io/ceph/ceph:v20, name=nervous_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 30 23:17:41 np0005603435 podman[74460]: 2026-01-31 04:17:41.49808401 +0000 UTC m=+0.185527553 container died 95974820a544ad47a2afbb03280b4c6bd2737f87682e2747273e8c0d6454a9e5 (image=quay.io/ceph/ceph:v20, name=nervous_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 30 23:17:41 np0005603435 podman[74460]: 2026-01-31 04:17:41.554510301 +0000 UTC m=+0.241953754 container remove 95974820a544ad47a2afbb03280b4c6bd2737f87682e2747273e8c0d6454a9e5 (image=quay.io/ceph/ceph:v20, name=nervous_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 30 23:17:41 np0005603435 systemd[1]: libpod-conmon-95974820a544ad47a2afbb03280b4c6bd2737f87682e2747273e8c0d6454a9e5.scope: Deactivated successfully.
Jan 30 23:17:41 np0005603435 podman[74495]: 2026-01-31 04:17:41.621361397 +0000 UTC m=+0.049996615 container create 7645f7b9ae93cf97da68d253c9acdd641e37582baf9486c846df473142ec983e (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:17:41 np0005603435 systemd[1]: Started libpod-conmon-7645f7b9ae93cf97da68d253c9acdd641e37582baf9486c846df473142ec983e.scope.
Jan 30 23:17:41 np0005603435 podman[74495]: 2026-01-31 04:17:41.591956648 +0000 UTC m=+0.020591916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:41 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:41 np0005603435 podman[74495]: 2026-01-31 04:17:41.735302527 +0000 UTC m=+0.163937755 container init 7645f7b9ae93cf97da68d253c9acdd641e37582baf9486c846df473142ec983e (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:17:41 np0005603435 podman[74495]: 2026-01-31 04:17:41.74319792 +0000 UTC m=+0.171833128 container start 7645f7b9ae93cf97da68d253c9acdd641e37582baf9486c846df473142ec983e (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:17:41 np0005603435 condescending_lederberg[74511]: AQBlgn1pTC0NLhAA0quAReHK1RfVzm7jQDy2DA==
Jan 30 23:17:41 np0005603435 systemd[1]: libpod-7645f7b9ae93cf97da68d253c9acdd641e37582baf9486c846df473142ec983e.scope: Deactivated successfully.
Jan 30 23:17:41 np0005603435 podman[74495]: 2026-01-31 04:17:41.779206881 +0000 UTC m=+0.207842159 container attach 7645f7b9ae93cf97da68d253c9acdd641e37582baf9486c846df473142ec983e (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:17:41 np0005603435 podman[74495]: 2026-01-31 04:17:41.779719604 +0000 UTC m=+0.208354822 container died 7645f7b9ae93cf97da68d253c9acdd641e37582baf9486c846df473142ec983e (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 30 23:17:41 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0a6c48bc5c35ce75bb3c8ad6a263c5d0882192dce138964a1729702153314ac7-merged.mount: Deactivated successfully.
Jan 30 23:17:41 np0005603435 podman[74495]: 2026-01-31 04:17:41.835657993 +0000 UTC m=+0.264293211 container remove 7645f7b9ae93cf97da68d253c9acdd641e37582baf9486c846df473142ec983e (image=quay.io/ceph/ceph:v20, name=condescending_lederberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 30 23:17:41 np0005603435 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 30 23:17:41 np0005603435 systemd[1]: libpod-conmon-7645f7b9ae93cf97da68d253c9acdd641e37582baf9486c846df473142ec983e.scope: Deactivated successfully.
Jan 30 23:17:41 np0005603435 podman[74530]: 2026-01-31 04:17:41.923404611 +0000 UTC m=+0.063969407 container create 2dfff487b13568cefbbb452df832ae94344add19b6485f7b459761f835bb9629 (image=quay.io/ceph/ceph:v20, name=silly_mestorf, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:17:41 np0005603435 systemd[1]: Started libpod-conmon-2dfff487b13568cefbbb452df832ae94344add19b6485f7b459761f835bb9629.scope.
Jan 30 23:17:41 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:41 np0005603435 podman[74530]: 2026-01-31 04:17:41.985105982 +0000 UTC m=+0.125670798 container init 2dfff487b13568cefbbb452df832ae94344add19b6485f7b459761f835bb9629 (image=quay.io/ceph/ceph:v20, name=silly_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 30 23:17:41 np0005603435 podman[74530]: 2026-01-31 04:17:41.896751059 +0000 UTC m=+0.037315945 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:41 np0005603435 podman[74530]: 2026-01-31 04:17:41.990197286 +0000 UTC m=+0.130762082 container start 2dfff487b13568cefbbb452df832ae94344add19b6485f7b459761f835bb9629 (image=quay.io/ceph/ceph:v20, name=silly_mestorf, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 30 23:17:42 np0005603435 podman[74530]: 2026-01-31 04:17:42.01158868 +0000 UTC m=+0.152153486 container attach 2dfff487b13568cefbbb452df832ae94344add19b6485f7b459761f835bb9629 (image=quay.io/ceph/ceph:v20, name=silly_mestorf, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:17:42 np0005603435 silly_mestorf[74547]: AQBmgn1pPtR2ARAAozYGQ/+X/1rySoaTElDDOA==
Jan 30 23:17:42 np0005603435 systemd[1]: libpod-2dfff487b13568cefbbb452df832ae94344add19b6485f7b459761f835bb9629.scope: Deactivated successfully.
Jan 30 23:17:42 np0005603435 podman[74530]: 2026-01-31 04:17:42.032147233 +0000 UTC m=+0.172712069 container died 2dfff487b13568cefbbb452df832ae94344add19b6485f7b459761f835bb9629 (image=quay.io/ceph/ceph:v20, name=silly_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:17:42 np0005603435 podman[74530]: 2026-01-31 04:17:42.09777171 +0000 UTC m=+0.238336516 container remove 2dfff487b13568cefbbb452df832ae94344add19b6485f7b459761f835bb9629 (image=quay.io/ceph/ceph:v20, name=silly_mestorf, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 30 23:17:42 np0005603435 systemd[1]: libpod-conmon-2dfff487b13568cefbbb452df832ae94344add19b6485f7b459761f835bb9629.scope: Deactivated successfully.
Jan 30 23:17:42 np0005603435 podman[74567]: 2026-01-31 04:17:42.152886059 +0000 UTC m=+0.035517901 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:42 np0005603435 podman[74567]: 2026-01-31 04:17:42.397013265 +0000 UTC m=+0.279645057 container create a8741576467ca102ad7d25c368d1a5a63c574add6492f4ade24dbf96d3af8a8a (image=quay.io/ceph/ceph:v20, name=busy_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:17:43 np0005603435 systemd[1]: Started libpod-conmon-a8741576467ca102ad7d25c368d1a5a63c574add6492f4ade24dbf96d3af8a8a.scope.
Jan 30 23:17:43 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:43 np0005603435 podman[74567]: 2026-01-31 04:17:43.130972653 +0000 UTC m=+1.013604485 container init a8741576467ca102ad7d25c368d1a5a63c574add6492f4ade24dbf96d3af8a8a (image=quay.io/ceph/ceph:v20, name=busy_easley, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:17:43 np0005603435 podman[74567]: 2026-01-31 04:17:43.137951713 +0000 UTC m=+1.020583505 container start a8741576467ca102ad7d25c368d1a5a63c574add6492f4ade24dbf96d3af8a8a (image=quay.io/ceph/ceph:v20, name=busy_easley, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:17:43 np0005603435 podman[74567]: 2026-01-31 04:17:43.155529264 +0000 UTC m=+1.038161056 container attach a8741576467ca102ad7d25c368d1a5a63c574add6492f4ade24dbf96d3af8a8a (image=quay.io/ceph/ceph:v20, name=busy_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:17:43 np0005603435 busy_easley[74584]: AQBngn1pi95kCRAA63FLz7BZGSCPOX2HpOIaGw==
Jan 30 23:17:43 np0005603435 systemd[1]: libpod-a8741576467ca102ad7d25c368d1a5a63c574add6492f4ade24dbf96d3af8a8a.scope: Deactivated successfully.
Jan 30 23:17:43 np0005603435 podman[74567]: 2026-01-31 04:17:43.160481625 +0000 UTC m=+1.043113417 container died a8741576467ca102ad7d25c368d1a5a63c574add6492f4ade24dbf96d3af8a8a (image=quay.io/ceph/ceph:v20, name=busy_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 30 23:17:43 np0005603435 systemd[1]: var-lib-containers-storage-overlay-636b63860a4808b0554795c8fdad47202ac5a4d9008f47b19f475b8a2b605a5a-merged.mount: Deactivated successfully.
Jan 30 23:17:43 np0005603435 podman[74567]: 2026-01-31 04:17:43.253204075 +0000 UTC m=+1.135835867 container remove a8741576467ca102ad7d25c368d1a5a63c574add6492f4ade24dbf96d3af8a8a (image=quay.io/ceph/ceph:v20, name=busy_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 30 23:17:43 np0005603435 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 30 23:17:43 np0005603435 systemd[1]: libpod-conmon-a8741576467ca102ad7d25c368d1a5a63c574add6492f4ade24dbf96d3af8a8a.scope: Deactivated successfully.
Jan 30 23:17:43 np0005603435 podman[74605]: 2026-01-31 04:17:43.336364071 +0000 UTC m=+0.059256822 container create 0b96b29c9b0f250e7d3caaffcbd79ec7eb9999742feb336c0f06fea182b602ce (image=quay.io/ceph/ceph:v20, name=zealous_dubinsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:17:43 np0005603435 systemd[1]: Started libpod-conmon-0b96b29c9b0f250e7d3caaffcbd79ec7eb9999742feb336c0f06fea182b602ce.scope.
Jan 30 23:17:43 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31de6781020d8cb300627e953f2866042529a72bf32c4bb5136aac150a4e96a7/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:43 np0005603435 podman[74605]: 2026-01-31 04:17:43.313532482 +0000 UTC m=+0.036425293 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:43 np0005603435 podman[74605]: 2026-01-31 04:17:43.409776908 +0000 UTC m=+0.132669679 container init 0b96b29c9b0f250e7d3caaffcbd79ec7eb9999742feb336c0f06fea182b602ce (image=quay.io/ceph/ceph:v20, name=zealous_dubinsky, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:17:43 np0005603435 podman[74605]: 2026-01-31 04:17:43.416334068 +0000 UTC m=+0.139226819 container start 0b96b29c9b0f250e7d3caaffcbd79ec7eb9999742feb336c0f06fea182b602ce (image=quay.io/ceph/ceph:v20, name=zealous_dubinsky, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 30 23:17:43 np0005603435 podman[74605]: 2026-01-31 04:17:43.422971261 +0000 UTC m=+0.145864062 container attach 0b96b29c9b0f250e7d3caaffcbd79ec7eb9999742feb336c0f06fea182b602ce (image=quay.io/ceph/ceph:v20, name=zealous_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:17:43 np0005603435 zealous_dubinsky[74622]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 30 23:17:43 np0005603435 zealous_dubinsky[74622]: setting min_mon_release = tentacle
Jan 30 23:17:43 np0005603435 zealous_dubinsky[74622]: /usr/bin/monmaptool: set fsid to 95d2f419-0dd0-56f2-a094-353f8c7597ed
Jan 30 23:17:43 np0005603435 zealous_dubinsky[74622]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 30 23:17:43 np0005603435 systemd[1]: libpod-0b96b29c9b0f250e7d3caaffcbd79ec7eb9999742feb336c0f06fea182b602ce.scope: Deactivated successfully.
Jan 30 23:17:43 np0005603435 podman[74605]: 2026-01-31 04:17:43.463501243 +0000 UTC m=+0.186394004 container died 0b96b29c9b0f250e7d3caaffcbd79ec7eb9999742feb336c0f06fea182b602ce (image=quay.io/ceph/ceph:v20, name=zealous_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 30 23:17:43 np0005603435 podman[74605]: 2026-01-31 04:17:43.569319323 +0000 UTC m=+0.292212074 container remove 0b96b29c9b0f250e7d3caaffcbd79ec7eb9999742feb336c0f06fea182b602ce (image=quay.io/ceph/ceph:v20, name=zealous_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:17:43 np0005603435 systemd[1]: libpod-conmon-0b96b29c9b0f250e7d3caaffcbd79ec7eb9999742feb336c0f06fea182b602ce.scope: Deactivated successfully.
Jan 30 23:17:43 np0005603435 podman[74641]: 2026-01-31 04:17:43.671713469 +0000 UTC m=+0.085211656 container create ec5a5cd4320094d3e37c9d928b5245584119fc5efa5aff85f4a00cff1e57735a (image=quay.io/ceph/ceph:v20, name=charming_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 30 23:17:43 np0005603435 systemd[1]: Started libpod-conmon-ec5a5cd4320094d3e37c9d928b5245584119fc5efa5aff85f4a00cff1e57735a.scope.
Jan 30 23:17:43 np0005603435 podman[74641]: 2026-01-31 04:17:43.607556989 +0000 UTC m=+0.021055176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:43 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/999df76ef38fe7fd879c726c98181bfc277ecf3209c8f66ebc247f6d6694e045/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/999df76ef38fe7fd879c726c98181bfc277ecf3209c8f66ebc247f6d6694e045/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/999df76ef38fe7fd879c726c98181bfc277ecf3209c8f66ebc247f6d6694e045/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/999df76ef38fe7fd879c726c98181bfc277ecf3209c8f66ebc247f6d6694e045/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:43 np0005603435 podman[74641]: 2026-01-31 04:17:43.739407797 +0000 UTC m=+0.152906014 container init ec5a5cd4320094d3e37c9d928b5245584119fc5efa5aff85f4a00cff1e57735a (image=quay.io/ceph/ceph:v20, name=charming_meitner, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 30 23:17:43 np0005603435 podman[74641]: 2026-01-31 04:17:43.743699962 +0000 UTC m=+0.157198169 container start ec5a5cd4320094d3e37c9d928b5245584119fc5efa5aff85f4a00cff1e57735a (image=quay.io/ceph/ceph:v20, name=charming_meitner, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:17:43 np0005603435 podman[74641]: 2026-01-31 04:17:43.747935986 +0000 UTC m=+0.161434163 container attach ec5a5cd4320094d3e37c9d928b5245584119fc5efa5aff85f4a00cff1e57735a (image=quay.io/ceph/ceph:v20, name=charming_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:17:43 np0005603435 systemd[1]: libpod-ec5a5cd4320094d3e37c9d928b5245584119fc5efa5aff85f4a00cff1e57735a.scope: Deactivated successfully.
Jan 30 23:17:43 np0005603435 podman[74641]: 2026-01-31 04:17:43.898544783 +0000 UTC m=+0.312043000 container died ec5a5cd4320094d3e37c9d928b5245584119fc5efa5aff85f4a00cff1e57735a (image=quay.io/ceph/ceph:v20, name=charming_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:17:43 np0005603435 systemd[1]: var-lib-containers-storage-overlay-999df76ef38fe7fd879c726c98181bfc277ecf3209c8f66ebc247f6d6694e045-merged.mount: Deactivated successfully.
Jan 30 23:17:43 np0005603435 podman[74641]: 2026-01-31 04:17:43.957211269 +0000 UTC m=+0.370709476 container remove ec5a5cd4320094d3e37c9d928b5245584119fc5efa5aff85f4a00cff1e57735a (image=quay.io/ceph/ceph:v20, name=charming_meitner, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:17:43 np0005603435 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 30 23:17:43 np0005603435 systemd[1]: libpod-conmon-ec5a5cd4320094d3e37c9d928b5245584119fc5efa5aff85f4a00cff1e57735a.scope: Deactivated successfully.
Jan 30 23:17:44 np0005603435 systemd[1]: Reloading.
Jan 30 23:17:44 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:17:44 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:17:44 np0005603435 systemd[1]: Reloading.
Jan 30 23:17:44 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:17:44 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:17:44 np0005603435 systemd[1]: Reached target All Ceph clusters and services.
Jan 30 23:17:44 np0005603435 systemd[1]: Reloading.
Jan 30 23:17:44 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:17:44 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:17:44 np0005603435 systemd[1]: Reached target Ceph cluster 95d2f419-0dd0-56f2-a094-353f8c7597ed.
Jan 30 23:17:44 np0005603435 systemd[1]: Reloading.
Jan 30 23:17:44 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:17:44 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:17:44 np0005603435 systemd[1]: Reloading.
Jan 30 23:17:44 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:17:44 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:17:45 np0005603435 systemd[1]: Created slice Slice /system/ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed.
Jan 30 23:17:45 np0005603435 systemd[1]: Reached target System Time Set.
Jan 30 23:17:45 np0005603435 systemd[1]: Reached target System Time Synchronized.
Jan 30 23:17:45 np0005603435 systemd[1]: Starting Ceph mon.compute-0 for 95d2f419-0dd0-56f2-a094-353f8c7597ed...
Jan 30 23:17:45 np0005603435 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 30 23:17:45 np0005603435 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 30 23:17:45 np0005603435 podman[74936]: 2026-01-31 04:17:45.436578143 +0000 UTC m=+0.113236823 container create a4a3c4beeae43446de3d7130f1b6c09ee148691cad67978a87b1f33d128816f8 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 30 23:17:45 np0005603435 podman[74936]: 2026-01-31 04:17:45.346139789 +0000 UTC m=+0.022798479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:45 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5540b1605ee7a74f7047186903d9d4c357138ce6b7b9fe48bece4936a2dccb1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:45 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5540b1605ee7a74f7047186903d9d4c357138ce6b7b9fe48bece4936a2dccb1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:45 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5540b1605ee7a74f7047186903d9d4c357138ce6b7b9fe48bece4936a2dccb1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:45 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5540b1605ee7a74f7047186903d9d4c357138ce6b7b9fe48bece4936a2dccb1c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:45 np0005603435 podman[74936]: 2026-01-31 04:17:45.499659717 +0000 UTC m=+0.176318487 container init a4a3c4beeae43446de3d7130f1b6c09ee148691cad67978a87b1f33d128816f8 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:17:45 np0005603435 podman[74936]: 2026-01-31 04:17:45.503211384 +0000 UTC m=+0.179870094 container start a4a3c4beeae43446de3d7130f1b6c09ee148691cad67978a87b1f33d128816f8 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:17:45 np0005603435 bash[74936]: a4a3c4beeae43446de3d7130f1b6c09ee148691cad67978a87b1f33d128816f8
Jan 30 23:17:45 np0005603435 systemd[1]: Started Ceph mon.compute-0 for 95d2f419-0dd0-56f2-a094-353f8c7597ed.
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: set uid:gid to 167:167 (ceph:ceph)
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: pidfile_write: ignore empty --pid-file
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: load: jerasure load: lrc 
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: RocksDB version: 7.9.2
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Git sha 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: DB SUMMARY
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: DB Session ID:  5AGG3I0WAN7FFZAOTSIN
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: CURRENT file:  CURRENT
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: IDENTITY file:  IDENTITY
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                         Options.error_if_exists: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                       Options.create_if_missing: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                         Options.paranoid_checks: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                                     Options.env: 0x561f10284440
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                                Options.info_log: 0x561f122353e0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                Options.max_file_opening_threads: 16
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                              Options.statistics: (nil)
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                               Options.use_fsync: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                       Options.max_log_file_size: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                         Options.allow_fallocate: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                        Options.use_direct_reads: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:          Options.create_missing_column_families: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                              Options.db_log_dir: 
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                                 Options.wal_dir: 
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                   Options.advise_random_on_open: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                    Options.write_buffer_manager: 0x561f121b4140
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                            Options.rate_limiter: (nil)
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                  Options.unordered_write: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                               Options.row_cache: None
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                              Options.wal_filter: None
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.allow_ingest_behind: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.two_write_queues: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.manual_wal_flush: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.wal_compression: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.atomic_flush: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                 Options.log_readahead_size: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.allow_data_in_errors: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.db_host_id: __hostname__
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.max_background_jobs: 2
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.max_background_compactions: -1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.max_subcompactions: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.max_total_wal_size: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                          Options.max_open_files: -1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                          Options.bytes_per_sync: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:       Options.compaction_readahead_size: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                  Options.max_background_flushes: -1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Compression algorithms supported:
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: 	kZSTD supported: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: 	kXpressCompression supported: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: 	kBZip2Compression supported: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: 	kLZ4Compression supported: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: 	kZlibCompression supported: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: 	kSnappyCompression supported: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:           Options.merge_operator: 
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:        Options.compaction_filter: None
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f121c0600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561f121a58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:        Options.write_buffer_size: 33554432
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:  Options.max_write_buffer_number: 2
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:          Options.compression: NoCompression
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.num_levels: 7
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833065558417, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833065572733, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "5AGG3I0WAN7FFZAOTSIN", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833065572880, "job": 1, "event": "recovery_finished"}
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 30 23:17:45 np0005603435 podman[74959]: 2026-01-31 04:17:45.618856645 +0000 UTC m=+0.062403079 container create 02754a99d2233e67794b416c4f320e56dab343c4e64d3fc533e64cb711a51168 (image=quay.io/ceph/ceph:v20, name=boring_snyder, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561f121d2e00
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: DB pointer 0x561f1231e000
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561f121a58d0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@-1(???) e0 preinit fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: log_channel(cluster) log [DBG] : fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: log_channel(cluster) log [DBG] : last_changed 2026-01-31T04:17:43.460314+0000
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: log_channel(cluster) log [DBG] : created 2026-01-31T04:17:43.460314+0000
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-01-31T04:17:43.806207Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864292,os=Linux}
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 30 23:17:45 np0005603435 systemd[1]: Started libpod-conmon-02754a99d2233e67794b416c4f320e56dab343c4e64d3fc533e64cb711a51168.scope.
Jan 30 23:17:45 np0005603435 podman[74959]: 2026-01-31 04:17:45.58187443 +0000 UTC m=+0.025420944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).mds e1 new map
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2026-01-31T04:17:45:671431+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: log_channel(cluster) log [DBG] : fsmap 
Jan 30 23:17:45 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:45 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a566fffd5a0ca684573b09c884a877a3dda685b8c65a8b2a441fd1b4e6aebc9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:45 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a566fffd5a0ca684573b09c884a877a3dda685b8c65a8b2a441fd1b4e6aebc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:45 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a566fffd5a0ca684573b09c884a877a3dda685b8c65a8b2a441fd1b4e6aebc9/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mkfs 95d2f419-0dd0-56f2-a094-353f8c7597ed
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 30 23:17:45 np0005603435 podman[74959]: 2026-01-31 04:17:45.718259368 +0000 UTC m=+0.161805832 container init 02754a99d2233e67794b416c4f320e56dab343c4e64d3fc533e64cb711a51168 (image=quay.io/ceph/ceph:v20, name=boring_snyder, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 30 23:17:45 np0005603435 podman[74959]: 2026-01-31 04:17:45.72772894 +0000 UTC m=+0.171275374 container start 02754a99d2233e67794b416c4f320e56dab343c4e64d3fc533e64cb711a51168 (image=quay.io/ceph/ceph:v20, name=boring_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 30 23:17:45 np0005603435 podman[74959]: 2026-01-31 04:17:45.7391636 +0000 UTC m=+0.182710034 container attach 02754a99d2233e67794b416c4f320e56dab343c4e64d3fc533e64cb711a51168 (image=quay.io/ceph/ceph:v20, name=boring_snyder, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 30 23:17:45 np0005603435 ceph-mon[74955]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2197422947' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 30 23:17:45 np0005603435 boring_snyder[75010]:  cluster:
Jan 30 23:17:45 np0005603435 boring_snyder[75010]:    id:     95d2f419-0dd0-56f2-a094-353f8c7597ed
Jan 30 23:17:45 np0005603435 boring_snyder[75010]:    health: HEALTH_OK
Jan 30 23:17:45 np0005603435 boring_snyder[75010]: 
Jan 30 23:17:45 np0005603435 boring_snyder[75010]:  services:
Jan 30 23:17:45 np0005603435 boring_snyder[75010]:    mon: 1 daemons, quorum compute-0 (age 0.289897s) [leader: compute-0]
Jan 30 23:17:45 np0005603435 boring_snyder[75010]:    mgr: no daemons active
Jan 30 23:17:45 np0005603435 boring_snyder[75010]:    osd: 0 osds: 0 up, 0 in
Jan 30 23:17:45 np0005603435 boring_snyder[75010]: 
Jan 30 23:17:45 np0005603435 boring_snyder[75010]:  data:
Jan 30 23:17:45 np0005603435 boring_snyder[75010]:    pools:   0 pools, 0 pgs
Jan 30 23:17:45 np0005603435 boring_snyder[75010]:    objects: 0 objects, 0 B
Jan 30 23:17:45 np0005603435 boring_snyder[75010]:    usage:   0 B used, 0 B / 0 B avail
Jan 30 23:17:45 np0005603435 boring_snyder[75010]:    pgs:     
Jan 30 23:17:45 np0005603435 boring_snyder[75010]: 
Jan 30 23:17:45 np0005603435 systemd[1]: libpod-02754a99d2233e67794b416c4f320e56dab343c4e64d3fc533e64cb711a51168.scope: Deactivated successfully.
Jan 30 23:17:45 np0005603435 podman[74959]: 2026-01-31 04:17:45.969713474 +0000 UTC m=+0.413259918 container died 02754a99d2233e67794b416c4f320e56dab343c4e64d3fc533e64cb711a51168 (image=quay.io/ceph/ceph:v20, name=boring_snyder, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 30 23:17:46 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0a566fffd5a0ca684573b09c884a877a3dda685b8c65a8b2a441fd1b4e6aebc9-merged.mount: Deactivated successfully.
Jan 30 23:17:46 np0005603435 podman[74959]: 2026-01-31 04:17:46.230643152 +0000 UTC m=+0.674189596 container remove 02754a99d2233e67794b416c4f320e56dab343c4e64d3fc533e64cb711a51168 (image=quay.io/ceph/ceph:v20, name=boring_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 30 23:17:46 np0005603435 podman[75048]: 2026-01-31 04:17:46.287659377 +0000 UTC m=+0.042308656 container create c0e6e1be06aac64abac68eef485e45defcb4ee28ae1ac488e75927ec3af32f57 (image=quay.io/ceph/ceph:v20, name=relaxed_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 30 23:17:46 np0005603435 systemd[1]: Started libpod-conmon-c0e6e1be06aac64abac68eef485e45defcb4ee28ae1ac488e75927ec3af32f57.scope.
Jan 30 23:17:46 np0005603435 systemd[1]: libpod-conmon-02754a99d2233e67794b416c4f320e56dab343c4e64d3fc533e64cb711a51168.scope: Deactivated successfully.
Jan 30 23:17:46 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:46 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c1795ff4e054d86f7b3dfacb351d896d557f9a7aad8779dfa978984ac3fafe2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:46 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c1795ff4e054d86f7b3dfacb351d896d557f9a7aad8779dfa978984ac3fafe2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:46 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c1795ff4e054d86f7b3dfacb351d896d557f9a7aad8779dfa978984ac3fafe2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:46 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c1795ff4e054d86f7b3dfacb351d896d557f9a7aad8779dfa978984ac3fafe2/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:46 np0005603435 podman[75048]: 2026-01-31 04:17:46.26815797 +0000 UTC m=+0.022807229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:46 np0005603435 podman[75048]: 2026-01-31 04:17:46.3674352 +0000 UTC m=+0.122084469 container init c0e6e1be06aac64abac68eef485e45defcb4ee28ae1ac488e75927ec3af32f57 (image=quay.io/ceph/ceph:v20, name=relaxed_goldstine, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:17:46 np0005603435 podman[75048]: 2026-01-31 04:17:46.373448367 +0000 UTC m=+0.128097616 container start c0e6e1be06aac64abac68eef485e45defcb4ee28ae1ac488e75927ec3af32f57 (image=quay.io/ceph/ceph:v20, name=relaxed_goldstine, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:17:46 np0005603435 podman[75048]: 2026-01-31 04:17:46.376962414 +0000 UTC m=+0.131611683 container attach c0e6e1be06aac64abac68eef485e45defcb4ee28ae1ac488e75927ec3af32f57 (image=quay.io/ceph/ceph:v20, name=relaxed_goldstine, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:17:46 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 30 23:17:46 np0005603435 ceph-mon[74955]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2910770425' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 30 23:17:46 np0005603435 ceph-mon[74955]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2910770425' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 30 23:17:46 np0005603435 relaxed_goldstine[75064]: 
Jan 30 23:17:46 np0005603435 relaxed_goldstine[75064]: [global]
Jan 30 23:17:46 np0005603435 relaxed_goldstine[75064]: #011fsid = 95d2f419-0dd0-56f2-a094-353f8c7597ed
Jan 30 23:17:46 np0005603435 relaxed_goldstine[75064]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 30 23:17:46 np0005603435 relaxed_goldstine[75064]: #011osd_crush_chooseleaf_type = 0
Jan 30 23:17:46 np0005603435 systemd[1]: libpod-c0e6e1be06aac64abac68eef485e45defcb4ee28ae1ac488e75927ec3af32f57.scope: Deactivated successfully.
Jan 30 23:17:46 np0005603435 podman[75048]: 2026-01-31 04:17:46.630120621 +0000 UTC m=+0.384769900 container died c0e6e1be06aac64abac68eef485e45defcb4ee28ae1ac488e75927ec3af32f57 (image=quay.io/ceph/ceph:v20, name=relaxed_goldstine, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 30 23:17:46 np0005603435 systemd[1]: var-lib-containers-storage-overlay-3c1795ff4e054d86f7b3dfacb351d896d557f9a7aad8779dfa978984ac3fafe2-merged.mount: Deactivated successfully.
Jan 30 23:17:46 np0005603435 ceph-mon[74955]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 30 23:17:46 np0005603435 ceph-mon[74955]: from='client.? 192.168.122.100:0/2910770425' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 30 23:17:46 np0005603435 ceph-mon[74955]: from='client.? 192.168.122.100:0/2910770425' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 30 23:17:46 np0005603435 podman[75048]: 2026-01-31 04:17:46.864754885 +0000 UTC m=+0.619404164 container remove c0e6e1be06aac64abac68eef485e45defcb4ee28ae1ac488e75927ec3af32f57 (image=quay.io/ceph/ceph:v20, name=relaxed_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:17:46 np0005603435 systemd[1]: libpod-conmon-c0e6e1be06aac64abac68eef485e45defcb4ee28ae1ac488e75927ec3af32f57.scope: Deactivated successfully.
Jan 30 23:17:46 np0005603435 podman[75103]: 2026-01-31 04:17:46.968045273 +0000 UTC m=+0.085593866 container create b455dd890fd36330d8bd56f8beefd2f89ee95e288f3af80f2bb422bbf05a752a (image=quay.io/ceph/ceph:v20, name=nifty_wiles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:17:47 np0005603435 podman[75103]: 2026-01-31 04:17:46.907526542 +0000 UTC m=+0.025075095 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:47 np0005603435 systemd[1]: Started libpod-conmon-b455dd890fd36330d8bd56f8beefd2f89ee95e288f3af80f2bb422bbf05a752a.scope.
Jan 30 23:17:47 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0450e178c0da02c2246311a24b7bb5116c05d3d6ba09d4a742ab5e0f9de2e58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0450e178c0da02c2246311a24b7bb5116c05d3d6ba09d4a742ab5e0f9de2e58/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0450e178c0da02c2246311a24b7bb5116c05d3d6ba09d4a742ab5e0f9de2e58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0450e178c0da02c2246311a24b7bb5116c05d3d6ba09d4a742ab5e0f9de2e58/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:47 np0005603435 podman[75103]: 2026-01-31 04:17:47.07536135 +0000 UTC m=+0.192909963 container init b455dd890fd36330d8bd56f8beefd2f89ee95e288f3af80f2bb422bbf05a752a (image=quay.io/ceph/ceph:v20, name=nifty_wiles, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 30 23:17:47 np0005603435 podman[75103]: 2026-01-31 04:17:47.081929661 +0000 UTC m=+0.199478244 container start b455dd890fd36330d8bd56f8beefd2f89ee95e288f3af80f2bb422bbf05a752a (image=quay.io/ceph/ceph:v20, name=nifty_wiles, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:17:47 np0005603435 podman[75103]: 2026-01-31 04:17:47.105937089 +0000 UTC m=+0.223485712 container attach b455dd890fd36330d8bd56f8beefd2f89ee95e288f3af80f2bb422bbf05a752a (image=quay.io/ceph/ceph:v20, name=nifty_wiles, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:17:47 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:17:47 np0005603435 ceph-mon[74955]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605491969' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:17:47 np0005603435 systemd[1]: libpod-b455dd890fd36330d8bd56f8beefd2f89ee95e288f3af80f2bb422bbf05a752a.scope: Deactivated successfully.
Jan 30 23:17:47 np0005603435 podman[75103]: 2026-01-31 04:17:47.332506595 +0000 UTC m=+0.450055178 container died b455dd890fd36330d8bd56f8beefd2f89ee95e288f3af80f2bb422bbf05a752a (image=quay.io/ceph/ceph:v20, name=nifty_wiles, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:17:47 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b0450e178c0da02c2246311a24b7bb5116c05d3d6ba09d4a742ab5e0f9de2e58-merged.mount: Deactivated successfully.
Jan 30 23:17:47 np0005603435 podman[75103]: 2026-01-31 04:17:47.466716041 +0000 UTC m=+0.584264624 container remove b455dd890fd36330d8bd56f8beefd2f89ee95e288f3af80f2bb422bbf05a752a (image=quay.io/ceph/ceph:v20, name=nifty_wiles, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 30 23:17:47 np0005603435 systemd[1]: libpod-conmon-b455dd890fd36330d8bd56f8beefd2f89ee95e288f3af80f2bb422bbf05a752a.scope: Deactivated successfully.
Jan 30 23:17:47 np0005603435 systemd[1]: Stopping Ceph mon.compute-0 for 95d2f419-0dd0-56f2-a094-353f8c7597ed...
Jan 30 23:17:47 np0005603435 ceph-mon[74955]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 30 23:17:47 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 30 23:17:47 np0005603435 ceph-mon[74955]: mon.compute-0@0(leader) e1 shutdown
Jan 30 23:17:47 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0[74951]: 2026-01-31T04:17:47.663+0000 7efe135b0640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 30 23:17:47 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0[74951]: 2026-01-31T04:17:47.663+0000 7efe135b0640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 30 23:17:47 np0005603435 ceph-mon[74955]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 30 23:17:47 np0005603435 ceph-mon[74955]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 30 23:17:47 np0005603435 podman[75184]: 2026-01-31 04:17:47.694515066 +0000 UTC m=+0.089502421 container died a4a3c4beeae43446de3d7130f1b6c09ee148691cad67978a87b1f33d128816f8 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 30 23:17:47 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5540b1605ee7a74f7047186903d9d4c357138ce6b7b9fe48bece4936a2dccb1c-merged.mount: Deactivated successfully.
Jan 30 23:17:47 np0005603435 podman[75184]: 2026-01-31 04:17:47.901566495 +0000 UTC m=+0.296553850 container remove a4a3c4beeae43446de3d7130f1b6c09ee148691cad67978a87b1f33d128816f8 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 30 23:17:47 np0005603435 bash[75184]: ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0
Jan 30 23:17:47 np0005603435 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 30 23:17:48 np0005603435 systemd[1]: ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed@mon.compute-0.service: Deactivated successfully.
Jan 30 23:17:48 np0005603435 systemd[1]: Stopped Ceph mon.compute-0 for 95d2f419-0dd0-56f2-a094-353f8c7597ed.
Jan 30 23:17:48 np0005603435 systemd[1]: Starting Ceph mon.compute-0 for 95d2f419-0dd0-56f2-a094-353f8c7597ed...
Jan 30 23:17:48 np0005603435 podman[75287]: 2026-01-31 04:17:48.263949096 +0000 UTC m=+0.045143776 container create 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 30 23:17:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1617355a5eea17179aee0e34f6a9ac79fc353c7190f396a32c02dd0e0bee2ec8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1617355a5eea17179aee0e34f6a9ac79fc353c7190f396a32c02dd0e0bee2ec8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1617355a5eea17179aee0e34f6a9ac79fc353c7190f396a32c02dd0e0bee2ec8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1617355a5eea17179aee0e34f6a9ac79fc353c7190f396a32c02dd0e0bee2ec8/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:48 np0005603435 podman[75287]: 2026-01-31 04:17:48.246804286 +0000 UTC m=+0.027999006 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:48 np0005603435 podman[75287]: 2026-01-31 04:17:48.346372923 +0000 UTC m=+0.127567693 container init 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 30 23:17:48 np0005603435 podman[75287]: 2026-01-31 04:17:48.355987799 +0000 UTC m=+0.137182529 container start 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:17:48 np0005603435 bash[75287]: 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79
Jan 30 23:17:48 np0005603435 systemd[1]: Started Ceph mon.compute-0 for 95d2f419-0dd0-56f2-a094-353f8c7597ed.
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: set uid:gid to 167:167 (ceph:ceph)
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: pidfile_write: ignore empty --pid-file
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: load: jerasure load: lrc 
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: RocksDB version: 7.9.2
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Git sha 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: DB SUMMARY
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: DB Session ID:  NJWQW6YWV3BHT45TVIYK
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: CURRENT file:  CURRENT
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: IDENTITY file:  IDENTITY
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                         Options.error_if_exists: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                       Options.create_if_missing: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                         Options.paranoid_checks: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                                     Options.env: 0x557356b79440
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                                Options.info_log: 0x5573584d5e80
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                Options.max_file_opening_threads: 16
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                              Options.statistics: (nil)
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                               Options.use_fsync: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                       Options.max_log_file_size: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                         Options.allow_fallocate: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                        Options.use_direct_reads: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:          Options.create_missing_column_families: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                              Options.db_log_dir: 
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                                 Options.wal_dir: 
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                   Options.advise_random_on_open: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                    Options.write_buffer_manager: 0x557358520140
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                            Options.rate_limiter: (nil)
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                  Options.unordered_write: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                               Options.row_cache: None
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                              Options.wal_filter: None
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.allow_ingest_behind: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.two_write_queues: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.manual_wal_flush: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.wal_compression: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.atomic_flush: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                 Options.log_readahead_size: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.allow_data_in_errors: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.db_host_id: __hostname__
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.max_background_jobs: 2
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.max_background_compactions: -1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.max_subcompactions: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.max_total_wal_size: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                          Options.max_open_files: -1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                          Options.bytes_per_sync: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:       Options.compaction_readahead_size: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                  Options.max_background_flushes: -1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Compression algorithms supported:
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: 	kZSTD supported: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: 	kXpressCompression supported: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: 	kBZip2Compression supported: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: 	kLZ4Compression supported: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: 	kZlibCompression supported: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: 	kSnappyCompression supported: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:           Options.merge_operator: 
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:        Options.compaction_filter: None
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55735852ca00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5573585118d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:        Options.write_buffer_size: 33554432
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:  Options.max_write_buffer_number: 2
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:          Options.compression: NoCompression
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.num_levels: 7
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833068419094, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833068446453, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833068, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833068446577, "job": 1, "event": "recovery_finished"}
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 30 23:17:48 np0005603435 podman[75308]: 2026-01-31 04:17:48.451887096 +0000 UTC m=+0.074057984 container create 5a615acce9808a218217315648d457cdc93df21ce805ef612a77e29899fd4019 (image=quay.io/ceph/ceph:v20, name=inspiring_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55735853ee00
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: DB pointer 0x557358688000
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.1      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.1      0.03              0.00         1    0.027       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.1      0.03              0.00         1    0.027       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.1      0.03              0.00         1    0.027       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.77 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.77 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5573585118d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed
Jan 30 23:17:48 np0005603435 podman[75308]: 2026-01-31 04:17:48.400268363 +0000 UTC m=+0.022439301 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: mon.compute-0@-1(???) e1 preinit fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: mon.compute-0@-1(???).mds e1 new map
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2026-01-31T04:17:45:671431+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : last_changed 2026-01-31T04:17:43.460314+0000
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : created 2026-01-31T04:17:43.460314+0000
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : fsmap 
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 30 23:17:48 np0005603435 systemd[1]: Started libpod-conmon-5a615acce9808a218217315648d457cdc93df21ce805ef612a77e29899fd4019.scope.
Jan 30 23:17:48 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3e99367fcd9d3bd7b71cf2571bfd09df07cd1ac56963e35ad72aa857d42aee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3e99367fcd9d3bd7b71cf2571bfd09df07cd1ac56963e35ad72aa857d42aee/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3e99367fcd9d3bd7b71cf2571bfd09df07cd1ac56963e35ad72aa857d42aee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 30 23:17:48 np0005603435 podman[75308]: 2026-01-31 04:17:48.589561086 +0000 UTC m=+0.211732024 container init 5a615acce9808a218217315648d457cdc93df21ce805ef612a77e29899fd4019 (image=quay.io/ceph/ceph:v20, name=inspiring_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:17:48 np0005603435 podman[75308]: 2026-01-31 04:17:48.597954182 +0000 UTC m=+0.220125080 container start 5a615acce9808a218217315648d457cdc93df21ce805ef612a77e29899fd4019 (image=quay.io/ceph/ceph:v20, name=inspiring_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 30 23:17:48 np0005603435 podman[75308]: 2026-01-31 04:17:48.608561482 +0000 UTC m=+0.230732370 container attach 5a615acce9808a218217315648d457cdc93df21ce805ef612a77e29899fd4019 (image=quay.io/ceph/ceph:v20, name=inspiring_carson, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:17:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 30 23:17:48 np0005603435 systemd[1]: libpod-5a615acce9808a218217315648d457cdc93df21ce805ef612a77e29899fd4019.scope: Deactivated successfully.
Jan 30 23:17:48 np0005603435 podman[75308]: 2026-01-31 04:17:48.803396351 +0000 UTC m=+0.425567239 container died 5a615acce9808a218217315648d457cdc93df21ce805ef612a77e29899fd4019 (image=quay.io/ceph/ceph:v20, name=inspiring_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:17:48 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5a3e99367fcd9d3bd7b71cf2571bfd09df07cd1ac56963e35ad72aa857d42aee-merged.mount: Deactivated successfully.
Jan 30 23:17:48 np0005603435 podman[75308]: 2026-01-31 04:17:48.851002876 +0000 UTC m=+0.473173774 container remove 5a615acce9808a218217315648d457cdc93df21ce805ef612a77e29899fd4019 (image=quay.io/ceph/ceph:v20, name=inspiring_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 30 23:17:48 np0005603435 systemd[1]: libpod-conmon-5a615acce9808a218217315648d457cdc93df21ce805ef612a77e29899fd4019.scope: Deactivated successfully.
Jan 30 23:17:48 np0005603435 podman[75400]: 2026-01-31 04:17:48.913521857 +0000 UTC m=+0.048649442 container create 8f49bdd2a625121d78557500507af8e465202687ff3c5d316af44ebf9c85ea15 (image=quay.io/ceph/ceph:v20, name=awesome_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 30 23:17:48 np0005603435 systemd[1]: Started libpod-conmon-8f49bdd2a625121d78557500507af8e465202687ff3c5d316af44ebf9c85ea15.scope.
Jan 30 23:17:48 np0005603435 podman[75400]: 2026-01-31 04:17:48.887171722 +0000 UTC m=+0.022299367 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:48 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5cefab308793554722d67b9997fd1bf4c3c98193d491d73bbd91d2ce23871a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5cefab308793554722d67b9997fd1bf4c3c98193d491d73bbd91d2ce23871a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5cefab308793554722d67b9997fd1bf4c3c98193d491d73bbd91d2ce23871a1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:49 np0005603435 podman[75400]: 2026-01-31 04:17:49.003997492 +0000 UTC m=+0.139125157 container init 8f49bdd2a625121d78557500507af8e465202687ff3c5d316af44ebf9c85ea15 (image=quay.io/ceph/ceph:v20, name=awesome_curran, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:17:49 np0005603435 podman[75400]: 2026-01-31 04:17:49.010612994 +0000 UTC m=+0.145740579 container start 8f49bdd2a625121d78557500507af8e465202687ff3c5d316af44ebf9c85ea15 (image=quay.io/ceph/ceph:v20, name=awesome_curran, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:17:49 np0005603435 podman[75400]: 2026-01-31 04:17:49.014280143 +0000 UTC m=+0.149407778 container attach 8f49bdd2a625121d78557500507af8e465202687ff3c5d316af44ebf9c85ea15 (image=quay.io/ceph/ceph:v20, name=awesome_curran, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 30 23:17:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 30 23:17:49 np0005603435 systemd[1]: libpod-8f49bdd2a625121d78557500507af8e465202687ff3c5d316af44ebf9c85ea15.scope: Deactivated successfully.
Jan 30 23:17:49 np0005603435 podman[75400]: 2026-01-31 04:17:49.270960067 +0000 UTC m=+0.406087632 container died 8f49bdd2a625121d78557500507af8e465202687ff3c5d316af44ebf9c85ea15 (image=quay.io/ceph/ceph:v20, name=awesome_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:17:49 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b5cefab308793554722d67b9997fd1bf4c3c98193d491d73bbd91d2ce23871a1-merged.mount: Deactivated successfully.
Jan 30 23:17:49 np0005603435 podman[75400]: 2026-01-31 04:17:49.307585793 +0000 UTC m=+0.442713348 container remove 8f49bdd2a625121d78557500507af8e465202687ff3c5d316af44ebf9c85ea15 (image=quay.io/ceph/ceph:v20, name=awesome_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 30 23:17:49 np0005603435 systemd[1]: libpod-conmon-8f49bdd2a625121d78557500507af8e465202687ff3c5d316af44ebf9c85ea15.scope: Deactivated successfully.
Jan 30 23:17:49 np0005603435 systemd[1]: Reloading.
Jan 30 23:17:49 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:17:49 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:17:49 np0005603435 systemd[1]: Reloading.
Jan 30 23:17:49 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:17:49 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:17:49 np0005603435 systemd[1]: Starting Ceph mgr.compute-0.wyngmr for 95d2f419-0dd0-56f2-a094-353f8c7597ed...
Jan 30 23:17:50 np0005603435 podman[75579]: 2026-01-31 04:17:50.081169461 +0000 UTC m=+0.052349953 container create 2145ef748f54120a2c9a9597a8b05b8ee5aa896db00b94464c22a2b80b8b048f (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 30 23:17:50 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b81ffcba5f0c94d8cd14e5a5695b631649349cbce72470070ee312495945e816/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:50 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b81ffcba5f0c94d8cd14e5a5695b631649349cbce72470070ee312495945e816/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:50 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b81ffcba5f0c94d8cd14e5a5695b631649349cbce72470070ee312495945e816/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:50 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b81ffcba5f0c94d8cd14e5a5695b631649349cbce72470070ee312495945e816/merged/var/lib/ceph/mgr/ceph-compute-0.wyngmr supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:50 np0005603435 podman[75579]: 2026-01-31 04:17:50.049479545 +0000 UTC m=+0.020660097 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:50 np0005603435 podman[75579]: 2026-01-31 04:17:50.146831158 +0000 UTC m=+0.118011730 container init 2145ef748f54120a2c9a9597a8b05b8ee5aa896db00b94464c22a2b80b8b048f (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:17:50 np0005603435 podman[75579]: 2026-01-31 04:17:50.151708957 +0000 UTC m=+0.122889469 container start 2145ef748f54120a2c9a9597a8b05b8ee5aa896db00b94464c22a2b80b8b048f (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 30 23:17:50 np0005603435 bash[75579]: 2145ef748f54120a2c9a9597a8b05b8ee5aa896db00b94464c22a2b80b8b048f
Jan 30 23:17:50 np0005603435 systemd[1]: Started Ceph mgr.compute-0.wyngmr for 95d2f419-0dd0-56f2-a094-353f8c7597ed.
Jan 30 23:17:50 np0005603435 ceph-mgr[75599]: set uid:gid to 167:167 (ceph:ceph)
Jan 30 23:17:50 np0005603435 ceph-mgr[75599]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 30 23:17:50 np0005603435 ceph-mgr[75599]: pidfile_write: ignore empty --pid-file
Jan 30 23:17:50 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'alerts'
Jan 30 23:17:50 np0005603435 podman[75600]: 2026-01-31 04:17:50.276982944 +0000 UTC m=+0.075853058 container create bd4a07c9584fdd73f7da0c82b22c336f14ad715c3919c0ecad0f7c16c7fd9845 (image=quay.io/ceph/ceph:v20, name=clever_black, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:17:50 np0005603435 systemd[1]: Started libpod-conmon-bd4a07c9584fdd73f7da0c82b22c336f14ad715c3919c0ecad0f7c16c7fd9845.scope.
Jan 30 23:17:50 np0005603435 podman[75600]: 2026-01-31 04:17:50.24088882 +0000 UTC m=+0.039758974 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:50 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:50 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/751ecd38d625f10da5007c92b38ebda578a75b68633b8800ab7d4ce9c5cc4e14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:50 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/751ecd38d625f10da5007c92b38ebda578a75b68633b8800ab7d4ce9c5cc4e14/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:50 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/751ecd38d625f10da5007c92b38ebda578a75b68633b8800ab7d4ce9c5cc4e14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:50 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'balancer'
Jan 30 23:17:50 np0005603435 podman[75600]: 2026-01-31 04:17:50.372346568 +0000 UTC m=+0.171216722 container init bd4a07c9584fdd73f7da0c82b22c336f14ad715c3919c0ecad0f7c16c7fd9845 (image=quay.io/ceph/ceph:v20, name=clever_black, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:17:50 np0005603435 podman[75600]: 2026-01-31 04:17:50.381751389 +0000 UTC m=+0.180621513 container start bd4a07c9584fdd73f7da0c82b22c336f14ad715c3919c0ecad0f7c16c7fd9845 (image=quay.io/ceph/ceph:v20, name=clever_black, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 30 23:17:50 np0005603435 podman[75600]: 2026-01-31 04:17:50.386052154 +0000 UTC m=+0.184922278 container attach bd4a07c9584fdd73f7da0c82b22c336f14ad715c3919c0ecad0f7c16c7fd9845 (image=quay.io/ceph/ceph:v20, name=clever_black, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:17:50 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'cephadm'
Jan 30 23:17:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 30 23:17:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351390065' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 30 23:17:50 np0005603435 clever_black[75637]: 
Jan 30 23:17:50 np0005603435 clever_black[75637]: {
Jan 30 23:17:50 np0005603435 clever_black[75637]:    "fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:17:50 np0005603435 clever_black[75637]:    "health": {
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "status": "HEALTH_OK",
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "checks": {},
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "mutes": []
Jan 30 23:17:50 np0005603435 clever_black[75637]:    },
Jan 30 23:17:50 np0005603435 clever_black[75637]:    "election_epoch": 5,
Jan 30 23:17:50 np0005603435 clever_black[75637]:    "quorum": [
Jan 30 23:17:50 np0005603435 clever_black[75637]:        0
Jan 30 23:17:50 np0005603435 clever_black[75637]:    ],
Jan 30 23:17:50 np0005603435 clever_black[75637]:    "quorum_names": [
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "compute-0"
Jan 30 23:17:50 np0005603435 clever_black[75637]:    ],
Jan 30 23:17:50 np0005603435 clever_black[75637]:    "quorum_age": 2,
Jan 30 23:17:50 np0005603435 clever_black[75637]:    "monmap": {
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "epoch": 1,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "min_mon_release_name": "tentacle",
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "num_mons": 1
Jan 30 23:17:50 np0005603435 clever_black[75637]:    },
Jan 30 23:17:50 np0005603435 clever_black[75637]:    "osdmap": {
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "epoch": 1,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "num_osds": 0,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "num_up_osds": 0,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "osd_up_since": 0,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "num_in_osds": 0,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "osd_in_since": 0,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "num_remapped_pgs": 0
Jan 30 23:17:50 np0005603435 clever_black[75637]:    },
Jan 30 23:17:50 np0005603435 clever_black[75637]:    "pgmap": {
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "pgs_by_state": [],
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "num_pgs": 0,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "num_pools": 0,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "num_objects": 0,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "data_bytes": 0,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "bytes_used": 0,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "bytes_avail": 0,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "bytes_total": 0
Jan 30 23:17:50 np0005603435 clever_black[75637]:    },
Jan 30 23:17:50 np0005603435 clever_black[75637]:    "fsmap": {
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "epoch": 1,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "btime": "2026-01-31T04:17:45:671431+0000",
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "by_rank": [],
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "up:standby": 0
Jan 30 23:17:50 np0005603435 clever_black[75637]:    },
Jan 30 23:17:50 np0005603435 clever_black[75637]:    "mgrmap": {
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "available": false,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "num_standbys": 0,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "modules": [
Jan 30 23:17:50 np0005603435 clever_black[75637]:            "iostat",
Jan 30 23:17:50 np0005603435 clever_black[75637]:            "nfs"
Jan 30 23:17:50 np0005603435 clever_black[75637]:        ],
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "services": {}
Jan 30 23:17:50 np0005603435 clever_black[75637]:    },
Jan 30 23:17:50 np0005603435 clever_black[75637]:    "servicemap": {
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "epoch": 1,
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "modified": "2026-01-31T04:17:45.674142+0000",
Jan 30 23:17:50 np0005603435 clever_black[75637]:        "services": {}
Jan 30 23:17:50 np0005603435 clever_black[75637]:    },
Jan 30 23:17:50 np0005603435 clever_black[75637]:    "progress_events": {}
Jan 30 23:17:50 np0005603435 clever_black[75637]: }
Jan 30 23:17:50 np0005603435 systemd[1]: libpod-bd4a07c9584fdd73f7da0c82b22c336f14ad715c3919c0ecad0f7c16c7fd9845.scope: Deactivated successfully.
Jan 30 23:17:50 np0005603435 podman[75600]: 2026-01-31 04:17:50.619402326 +0000 UTC m=+0.418272480 container died bd4a07c9584fdd73f7da0c82b22c336f14ad715c3919c0ecad0f7c16c7fd9845 (image=quay.io/ceph/ceph:v20, name=clever_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:17:50 np0005603435 systemd[1]: var-lib-containers-storage-overlay-751ecd38d625f10da5007c92b38ebda578a75b68633b8800ab7d4ce9c5cc4e14-merged.mount: Deactivated successfully.
Jan 30 23:17:50 np0005603435 podman[75600]: 2026-01-31 04:17:50.673125231 +0000 UTC m=+0.471995385 container remove bd4a07c9584fdd73f7da0c82b22c336f14ad715c3919c0ecad0f7c16c7fd9845 (image=quay.io/ceph/ceph:v20, name=clever_black, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:17:50 np0005603435 systemd[1]: libpod-conmon-bd4a07c9584fdd73f7da0c82b22c336f14ad715c3919c0ecad0f7c16c7fd9845.scope: Deactivated successfully.
Jan 30 23:17:51 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'crash'
Jan 30 23:17:51 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'dashboard'
Jan 30 23:17:51 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'devicehealth'
Jan 30 23:17:51 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'diskprediction_local'
Jan 30 23:17:52 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr[75595]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 30 23:17:52 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr[75595]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 30 23:17:52 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr[75595]:  from numpy import show_config as show_numpy_config
Jan 30 23:17:52 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'influx'
Jan 30 23:17:52 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'insights'
Jan 30 23:17:52 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'iostat'
Jan 30 23:17:52 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'k8sevents'
Jan 30 23:17:52 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'localpool'
Jan 30 23:17:52 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'mds_autoscaler'
Jan 30 23:17:52 np0005603435 podman[75687]: 2026-01-31 04:17:52.747655304 +0000 UTC m=+0.053869269 container create f85200228b87ccebdb2534412984b4ac767117529c03d417ecb2b340bace3064 (image=quay.io/ceph/ceph:v20, name=ecstatic_johnson, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 30 23:17:52 np0005603435 systemd[1]: Started libpod-conmon-f85200228b87ccebdb2534412984b4ac767117529c03d417ecb2b340bace3064.scope.
Jan 30 23:17:52 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e57f1c6ce8d95970504d9dd95134b58137f009464dc9a2a5abf14db86092ea/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e57f1c6ce8d95970504d9dd95134b58137f009464dc9a2a5abf14db86092ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e57f1c6ce8d95970504d9dd95134b58137f009464dc9a2a5abf14db86092ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:52 np0005603435 podman[75687]: 2026-01-31 04:17:52.803920602 +0000 UTC m=+0.110134587 container init f85200228b87ccebdb2534412984b4ac767117529c03d417ecb2b340bace3064 (image=quay.io/ceph/ceph:v20, name=ecstatic_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 30 23:17:52 np0005603435 podman[75687]: 2026-01-31 04:17:52.807061419 +0000 UTC m=+0.113275394 container start f85200228b87ccebdb2534412984b4ac767117529c03d417ecb2b340bace3064 (image=quay.io/ceph/ceph:v20, name=ecstatic_johnson, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:17:52 np0005603435 podman[75687]: 2026-01-31 04:17:52.712891513 +0000 UTC m=+0.019105568 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:52 np0005603435 podman[75687]: 2026-01-31 04:17:52.810190115 +0000 UTC m=+0.116404080 container attach f85200228b87ccebdb2534412984b4ac767117529c03d417ecb2b340bace3064 (image=quay.io/ceph/ceph:v20, name=ecstatic_johnson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:17:52 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'mirroring'
Jan 30 23:17:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 30 23:17:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554011954' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]: 
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]: {
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    "fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    "health": {
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "status": "HEALTH_OK",
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "checks": {},
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "mutes": []
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    },
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    "election_epoch": 5,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    "quorum": [
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        0
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    ],
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    "quorum_names": [
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "compute-0"
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    ],
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    "quorum_age": 4,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    "monmap": {
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "epoch": 1,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "min_mon_release_name": "tentacle",
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "num_mons": 1
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    },
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    "osdmap": {
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "epoch": 1,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "num_osds": 0,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "num_up_osds": 0,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "osd_up_since": 0,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "num_in_osds": 0,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "osd_in_since": 0,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "num_remapped_pgs": 0
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    },
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    "pgmap": {
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "pgs_by_state": [],
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "num_pgs": 0,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "num_pools": 0,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "num_objects": 0,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "data_bytes": 0,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "bytes_used": 0,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "bytes_avail": 0,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "bytes_total": 0
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    },
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    "fsmap": {
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "epoch": 1,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "btime": "2026-01-31T04:17:45:671431+0000",
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "by_rank": [],
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "up:standby": 0
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    },
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    "mgrmap": {
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "available": false,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "num_standbys": 0,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "modules": [
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:            "iostat",
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:            "nfs"
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        ],
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "services": {}
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    },
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    "servicemap": {
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "epoch": 1,
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "modified": "2026-01-31T04:17:45.674142+0000",
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:        "services": {}
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    },
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]:    "progress_events": {}
Jan 30 23:17:52 np0005603435 ecstatic_johnson[75703]: }
Jan 30 23:17:52 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'nfs'
Jan 30 23:17:52 np0005603435 systemd[1]: libpod-f85200228b87ccebdb2534412984b4ac767117529c03d417ecb2b340bace3064.scope: Deactivated successfully.
Jan 30 23:17:52 np0005603435 podman[75687]: 2026-01-31 04:17:52.97950366 +0000 UTC m=+0.285717655 container died f85200228b87ccebdb2534412984b4ac767117529c03d417ecb2b340bace3064 (image=quay.io/ceph/ceph:v20, name=ecstatic_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:17:53 np0005603435 systemd[1]: var-lib-containers-storage-overlay-31e57f1c6ce8d95970504d9dd95134b58137f009464dc9a2a5abf14db86092ea-merged.mount: Deactivated successfully.
Jan 30 23:17:53 np0005603435 podman[75687]: 2026-01-31 04:17:53.024206314 +0000 UTC m=+0.330420279 container remove f85200228b87ccebdb2534412984b4ac767117529c03d417ecb2b340bace3064 (image=quay.io/ceph/ceph:v20, name=ecstatic_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 30 23:17:53 np0005603435 systemd[1]: libpod-conmon-f85200228b87ccebdb2534412984b4ac767117529c03d417ecb2b340bace3064.scope: Deactivated successfully.
Jan 30 23:17:53 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'orchestrator'
Jan 30 23:17:53 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'osd_perf_query'
Jan 30 23:17:53 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'osd_support'
Jan 30 23:17:53 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'pg_autoscaler'
Jan 30 23:17:53 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'progress'
Jan 30 23:17:53 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'prometheus'
Jan 30 23:17:54 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'rbd_support'
Jan 30 23:17:54 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'rgw'
Jan 30 23:17:54 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'rook'
Jan 30 23:17:54 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'selftest'
Jan 30 23:17:54 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'smb'
Jan 30 23:17:55 np0005603435 podman[75745]: 2026-01-31 04:17:55.097166259 +0000 UTC m=+0.050286962 container create 7ec4a67bea22e78a4c430ebd38632f08b21c66aaa5518a7848b5abc73b448fa0 (image=quay.io/ceph/ceph:v20, name=jovial_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 30 23:17:55 np0005603435 systemd[1]: Started libpod-conmon-7ec4a67bea22e78a4c430ebd38632f08b21c66aaa5518a7848b5abc73b448fa0.scope.
Jan 30 23:17:55 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:55 np0005603435 podman[75745]: 2026-01-31 04:17:55.072010833 +0000 UTC m=+0.025131586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f530b22759fc25d0521f180ebe61cf7521cc7540781a494552f3f9fbde7fec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f530b22759fc25d0521f180ebe61cf7521cc7540781a494552f3f9fbde7fec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f530b22759fc25d0521f180ebe61cf7521cc7540781a494552f3f9fbde7fec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:55 np0005603435 podman[75745]: 2026-01-31 04:17:55.187371337 +0000 UTC m=+0.140492060 container init 7ec4a67bea22e78a4c430ebd38632f08b21c66aaa5518a7848b5abc73b448fa0 (image=quay.io/ceph/ceph:v20, name=jovial_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:17:55 np0005603435 podman[75745]: 2026-01-31 04:17:55.192567354 +0000 UTC m=+0.145688057 container start 7ec4a67bea22e78a4c430ebd38632f08b21c66aaa5518a7848b5abc73b448fa0 (image=quay.io/ceph/ceph:v20, name=jovial_shannon, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 30 23:17:55 np0005603435 podman[75745]: 2026-01-31 04:17:55.196121841 +0000 UTC m=+0.149242544 container attach 7ec4a67bea22e78a4c430ebd38632f08b21c66aaa5518a7848b5abc73b448fa0 (image=quay.io/ceph/ceph:v20, name=jovial_shannon, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 30 23:17:55 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'snap_schedule'
Jan 30 23:17:55 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'stats'
Jan 30 23:17:55 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'status'
Jan 30 23:17:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 30 23:17:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2699830661' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]: 
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]: {
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    "fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    "health": {
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "status": "HEALTH_OK",
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "checks": {},
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "mutes": []
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    },
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    "election_epoch": 5,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    "quorum": [
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        0
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    ],
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    "quorum_names": [
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "compute-0"
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    ],
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    "quorum_age": 6,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    "monmap": {
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "epoch": 1,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "min_mon_release_name": "tentacle",
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "num_mons": 1
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    },
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    "osdmap": {
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "epoch": 1,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "num_osds": 0,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "num_up_osds": 0,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "osd_up_since": 0,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "num_in_osds": 0,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "osd_in_since": 0,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "num_remapped_pgs": 0
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    },
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    "pgmap": {
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "pgs_by_state": [],
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "num_pgs": 0,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "num_pools": 0,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "num_objects": 0,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "data_bytes": 0,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "bytes_used": 0,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "bytes_avail": 0,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "bytes_total": 0
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    },
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    "fsmap": {
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "epoch": 1,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "btime": "2026-01-31T04:17:45:671431+0000",
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "by_rank": [],
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "up:standby": 0
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    },
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    "mgrmap": {
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "available": false,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "num_standbys": 0,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "modules": [
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:            "iostat",
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:            "nfs"
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        ],
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "services": {}
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    },
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    "servicemap": {
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "epoch": 1,
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "modified": "2026-01-31T04:17:45.674142+0000",
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:        "services": {}
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    },
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]:    "progress_events": {}
Jan 30 23:17:55 np0005603435 jovial_shannon[75762]: }
Jan 30 23:17:55 np0005603435 systemd[1]: libpod-7ec4a67bea22e78a4c430ebd38632f08b21c66aaa5518a7848b5abc73b448fa0.scope: Deactivated successfully.
Jan 30 23:17:55 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'telegraf'
Jan 30 23:17:55 np0005603435 podman[75788]: 2026-01-31 04:17:55.438330221 +0000 UTC m=+0.031125423 container died 7ec4a67bea22e78a4c430ebd38632f08b21c66aaa5518a7848b5abc73b448fa0 (image=quay.io/ceph/ceph:v20, name=jovial_shannon, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:17:55 np0005603435 systemd[1]: var-lib-containers-storage-overlay-59f530b22759fc25d0521f180ebe61cf7521cc7540781a494552f3f9fbde7fec-merged.mount: Deactivated successfully.
Jan 30 23:17:55 np0005603435 podman[75788]: 2026-01-31 04:17:55.479465008 +0000 UTC m=+0.072260200 container remove 7ec4a67bea22e78a4c430ebd38632f08b21c66aaa5518a7848b5abc73b448fa0 (image=quay.io/ceph/ceph:v20, name=jovial_shannon, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 30 23:17:55 np0005603435 systemd[1]: libpod-conmon-7ec4a67bea22e78a4c430ebd38632f08b21c66aaa5518a7848b5abc73b448fa0.scope: Deactivated successfully.
Jan 30 23:17:55 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'telemetry'
Jan 30 23:17:55 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'test_orchestrator'
Jan 30 23:17:55 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'volumes'
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: ms_deliver_dispatch: unhandled message 0x5563143f1860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wyngmr
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: mgr handle_mgr_map Activating!
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: mgr handle_mgr_map I am now activating
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.wyngmr(active, starting, since 0.0147059s)
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2176280721' entity='mgr.compute-0.wyngmr' cmd={"prefix": "mds metadata"} : dispatch
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).mds e1 all = 1
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2176280721' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata"} : dispatch
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2176280721' entity='mgr.compute-0.wyngmr' cmd={"prefix": "mon metadata"} : dispatch
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2176280721' entity='mgr.compute-0.wyngmr' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.wyngmr", "id": "compute-0.wyngmr"} v 0)
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2176280721' entity='mgr.compute-0.wyngmr' cmd={"prefix": "mgr metadata", "who": "compute-0.wyngmr", "id": "compute-0.wyngmr"} : dispatch
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: balancer
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: crash
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [balancer INFO root] Starting
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: devicehealth
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:17:56
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [balancer INFO root] No pools available
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: log_channel(cluster) log [INF] : Manager daemon compute-0.wyngmr is now available
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: iostat
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [devicehealth INFO root] Starting
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: nfs
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: orchestrator
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: pg_autoscaler
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: progress
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [progress INFO root] Loading...
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [progress INFO root] No stored events to load
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [progress INFO root] Loaded [] historic events
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [progress INFO root] Loaded OSDMap, ready.
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] recovery thread starting
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] starting setup
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: rbd_support
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: status
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: telemetry
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wyngmr/mirror_snapshot_schedule"} v 0)
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2176280721' entity='mgr.compute-0.wyngmr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wyngmr/mirror_snapshot_schedule"} : dispatch
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] PerfHandler: starting
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TaskHandler: starting
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wyngmr/trash_purge_schedule"} v 0)
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2176280721' entity='mgr.compute-0.wyngmr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wyngmr/trash_purge_schedule"} : dispatch
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2176280721' entity='mgr.compute-0.wyngmr' 
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] setup complete
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2176280721' entity='mgr.compute-0.wyngmr' 
Jan 30 23:17:56 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: volumes
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2176280721' entity='mgr.compute-0.wyngmr' 
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: Activating manager daemon compute-0.wyngmr
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: Manager daemon compute-0.wyngmr is now available
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: from='mgr.14102 192.168.122.100:0/2176280721' entity='mgr.compute-0.wyngmr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wyngmr/mirror_snapshot_schedule"} : dispatch
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: from='mgr.14102 192.168.122.100:0/2176280721' entity='mgr.compute-0.wyngmr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wyngmr/trash_purge_schedule"} : dispatch
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: from='mgr.14102 192.168.122.100:0/2176280721' entity='mgr.compute-0.wyngmr' 
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: from='mgr.14102 192.168.122.100:0/2176280721' entity='mgr.compute-0.wyngmr' 
Jan 30 23:17:56 np0005603435 ceph-mon[75307]: from='mgr.14102 192.168.122.100:0/2176280721' entity='mgr.compute-0.wyngmr' 
Jan 30 23:17:57 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.wyngmr(active, since 1.0219s)
Jan 30 23:17:57 np0005603435 podman[75881]: 2026-01-31 04:17:57.544292144 +0000 UTC m=+0.038942804 container create 5b55ee857e31ace51d94588c3e36c3a9c9958aed2f103c34792cbbd764fde3b6 (image=quay.io/ceph/ceph:v20, name=nostalgic_davinci, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:17:57 np0005603435 systemd[1]: Started libpod-conmon-5b55ee857e31ace51d94588c3e36c3a9c9958aed2f103c34792cbbd764fde3b6.scope.
Jan 30 23:17:57 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84b94552ff21b50afd437caa4ec7376ddce4a9bfe9a343e871bdb518a41e7240/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84b94552ff21b50afd437caa4ec7376ddce4a9bfe9a343e871bdb518a41e7240/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84b94552ff21b50afd437caa4ec7376ddce4a9bfe9a343e871bdb518a41e7240/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:57 np0005603435 podman[75881]: 2026-01-31 04:17:57.600778307 +0000 UTC m=+0.095428957 container init 5b55ee857e31ace51d94588c3e36c3a9c9958aed2f103c34792cbbd764fde3b6 (image=quay.io/ceph/ceph:v20, name=nostalgic_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:17:57 np0005603435 podman[75881]: 2026-01-31 04:17:57.604886807 +0000 UTC m=+0.099537457 container start 5b55ee857e31ace51d94588c3e36c3a9c9958aed2f103c34792cbbd764fde3b6 (image=quay.io/ceph/ceph:v20, name=nostalgic_davinci, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:17:57 np0005603435 podman[75881]: 2026-01-31 04:17:57.613798336 +0000 UTC m=+0.108449006 container attach 5b55ee857e31ace51d94588c3e36c3a9c9958aed2f103c34792cbbd764fde3b6 (image=quay.io/ceph/ceph:v20, name=nostalgic_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 30 23:17:57 np0005603435 podman[75881]: 2026-01-31 04:17:57.528918508 +0000 UTC m=+0.023569188 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 30 23:17:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3857671786' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]: 
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]: {
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    "fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    "health": {
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "status": "HEALTH_OK",
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "checks": {},
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "mutes": []
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    },
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    "election_epoch": 5,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    "quorum": [
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        0
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    ],
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    "quorum_names": [
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "compute-0"
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    ],
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    "quorum_age": 9,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    "monmap": {
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "epoch": 1,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "min_mon_release_name": "tentacle",
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "num_mons": 1
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    },
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    "osdmap": {
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "epoch": 1,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "num_osds": 0,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "num_up_osds": 0,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "osd_up_since": 0,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "num_in_osds": 0,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "osd_in_since": 0,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "num_remapped_pgs": 0
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    },
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    "pgmap": {
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "pgs_by_state": [],
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "num_pgs": 0,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "num_pools": 0,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "num_objects": 0,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "data_bytes": 0,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "bytes_used": 0,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "bytes_avail": 0,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "bytes_total": 0
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    },
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    "fsmap": {
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "epoch": 1,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "btime": "2026-01-31T04:17:45:671431+0000",
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "by_rank": [],
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "up:standby": 0
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    },
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    "mgrmap": {
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "available": true,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "num_standbys": 0,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "modules": [
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:            "iostat",
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:            "nfs"
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        ],
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "services": {}
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    },
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    "servicemap": {
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "epoch": 1,
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "modified": "2026-01-31T04:17:45.674142+0000",
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:        "services": {}
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    },
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]:    "progress_events": {}
Jan 30 23:17:58 np0005603435 nostalgic_davinci[75897]: }
Jan 30 23:17:58 np0005603435 systemd[1]: libpod-5b55ee857e31ace51d94588c3e36c3a9c9958aed2f103c34792cbbd764fde3b6.scope: Deactivated successfully.
Jan 30 23:17:58 np0005603435 podman[75881]: 2026-01-31 04:17:58.069940652 +0000 UTC m=+0.564591302 container died 5b55ee857e31ace51d94588c3e36c3a9c9958aed2f103c34792cbbd764fde3b6 (image=quay.io/ceph/ceph:v20, name=nostalgic_davinci, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:17:58 np0005603435 systemd[1]: var-lib-containers-storage-overlay-84b94552ff21b50afd437caa4ec7376ddce4a9bfe9a343e871bdb518a41e7240-merged.mount: Deactivated successfully.
Jan 30 23:17:58 np0005603435 podman[75881]: 2026-01-31 04:17:58.105942613 +0000 UTC m=+0.600593263 container remove 5b55ee857e31ace51d94588c3e36c3a9c9958aed2f103c34792cbbd764fde3b6 (image=quay.io/ceph/ceph:v20, name=nostalgic_davinci, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:17:58 np0005603435 systemd[1]: libpod-conmon-5b55ee857e31ace51d94588c3e36c3a9c9958aed2f103c34792cbbd764fde3b6.scope: Deactivated successfully.
Jan 30 23:17:58 np0005603435 ceph-mgr[75599]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 30 23:17:58 np0005603435 podman[75935]: 2026-01-31 04:17:58.158810947 +0000 UTC m=+0.037387956 container create f807a16587599eb5c1c81552e0fad8000f13bb840f130b45370e0f48c105d0d5 (image=quay.io/ceph/ceph:v20, name=focused_roentgen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:17:58 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:17:58 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.wyngmr(active, since 2s)
Jan 30 23:17:58 np0005603435 systemd[1]: Started libpod-conmon-f807a16587599eb5c1c81552e0fad8000f13bb840f130b45370e0f48c105d0d5.scope.
Jan 30 23:17:58 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:58 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0d1bddacf62b0108961d6096143fabe414ca1847df93b30db5a3c860d613a4b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:58 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0d1bddacf62b0108961d6096143fabe414ca1847df93b30db5a3c860d613a4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:58 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0d1bddacf62b0108961d6096143fabe414ca1847df93b30db5a3c860d613a4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:58 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0d1bddacf62b0108961d6096143fabe414ca1847df93b30db5a3c860d613a4b/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:58 np0005603435 podman[75935]: 2026-01-31 04:17:58.241056461 +0000 UTC m=+0.119633490 container init f807a16587599eb5c1c81552e0fad8000f13bb840f130b45370e0f48c105d0d5 (image=quay.io/ceph/ceph:v20, name=focused_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:17:58 np0005603435 podman[75935]: 2026-01-31 04:17:58.143537183 +0000 UTC m=+0.022114172 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:58 np0005603435 podman[75935]: 2026-01-31 04:17:58.246486614 +0000 UTC m=+0.125063623 container start f807a16587599eb5c1c81552e0fad8000f13bb840f130b45370e0f48c105d0d5 (image=quay.io/ceph/ceph:v20, name=focused_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 30 23:17:58 np0005603435 podman[75935]: 2026-01-31 04:17:58.249587039 +0000 UTC m=+0.128164048 container attach f807a16587599eb5c1c81552e0fad8000f13bb840f130b45370e0f48c105d0d5 (image=quay.io/ceph/ceph:v20, name=focused_roentgen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:17:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 30 23:17:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3793960140' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 30 23:17:58 np0005603435 focused_roentgen[75951]: 
Jan 30 23:17:58 np0005603435 focused_roentgen[75951]: [global]
Jan 30 23:17:58 np0005603435 focused_roentgen[75951]: 	fsid = 95d2f419-0dd0-56f2-a094-353f8c7597ed
Jan 30 23:17:58 np0005603435 focused_roentgen[75951]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 30 23:17:58 np0005603435 focused_roentgen[75951]: 	osd_crush_chooseleaf_type = 0
Jan 30 23:17:58 np0005603435 systemd[1]: libpod-f807a16587599eb5c1c81552e0fad8000f13bb840f130b45370e0f48c105d0d5.scope: Deactivated successfully.
Jan 30 23:17:58 np0005603435 podman[75935]: 2026-01-31 04:17:58.677454253 +0000 UTC m=+0.556031252 container died f807a16587599eb5c1c81552e0fad8000f13bb840f130b45370e0f48c105d0d5 (image=quay.io/ceph/ceph:v20, name=focused_roentgen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 30 23:17:58 np0005603435 systemd[1]: var-lib-containers-storage-overlay-c0d1bddacf62b0108961d6096143fabe414ca1847df93b30db5a3c860d613a4b-merged.mount: Deactivated successfully.
Jan 30 23:17:58 np0005603435 podman[75935]: 2026-01-31 04:17:58.712460269 +0000 UTC m=+0.591037299 container remove f807a16587599eb5c1c81552e0fad8000f13bb840f130b45370e0f48c105d0d5 (image=quay.io/ceph/ceph:v20, name=focused_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 30 23:17:58 np0005603435 systemd[1]: libpod-conmon-f807a16587599eb5c1c81552e0fad8000f13bb840f130b45370e0f48c105d0d5.scope: Deactivated successfully.
Jan 30 23:17:58 np0005603435 podman[75989]: 2026-01-31 04:17:58.79785589 +0000 UTC m=+0.048282033 container create 31e9a4a2599737346a5210041f702b0010a9d6679fb55b035d6fa0cb3d79d191 (image=quay.io/ceph/ceph:v20, name=gallant_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 30 23:17:58 np0005603435 systemd[1]: Started libpod-conmon-31e9a4a2599737346a5210041f702b0010a9d6679fb55b035d6fa0cb3d79d191.scope.
Jan 30 23:17:58 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:17:58 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5246b843cf2cc7d8b09f862b206034c41b2de4b08d157099fb720ebefff7e7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:58 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5246b843cf2cc7d8b09f862b206034c41b2de4b08d157099fb720ebefff7e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:58 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5246b843cf2cc7d8b09f862b206034c41b2de4b08d157099fb720ebefff7e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:17:58 np0005603435 podman[75989]: 2026-01-31 04:17:58.775686517 +0000 UTC m=+0.026112680 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:17:58 np0005603435 podman[75989]: 2026-01-31 04:17:58.883505727 +0000 UTC m=+0.133931880 container init 31e9a4a2599737346a5210041f702b0010a9d6679fb55b035d6fa0cb3d79d191 (image=quay.io/ceph/ceph:v20, name=gallant_jones, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:17:58 np0005603435 podman[75989]: 2026-01-31 04:17:58.890419726 +0000 UTC m=+0.140845839 container start 31e9a4a2599737346a5210041f702b0010a9d6679fb55b035d6fa0cb3d79d191 (image=quay.io/ceph/ceph:v20, name=gallant_jones, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 30 23:17:58 np0005603435 podman[75989]: 2026-01-31 04:17:58.893257355 +0000 UTC m=+0.143683518 container attach 31e9a4a2599737346a5210041f702b0010a9d6679fb55b035d6fa0cb3d79d191 (image=quay.io/ceph/ceph:v20, name=gallant_jones, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:17:59 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/3793960140' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 30 23:17:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 30 23:17:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/187200260' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:00 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/187200260' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 30 23:18:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/187200260' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr respawn  1: '-n'
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr respawn  2: 'mgr.compute-0.wyngmr'
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr respawn  3: '-f'
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr respawn  4: '--setuser'
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr respawn  5: 'ceph'
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr respawn  6: '--setgroup'
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr respawn  7: 'ceph'
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr respawn  8: '--default-log-to-file=false'
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr respawn  9: '--default-log-to-journald=true'
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr respawn  exe_path /proc/self/exe
Jan 30 23:18:00 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.wyngmr(active, since 4s)
Jan 30 23:18:00 np0005603435 systemd[1]: libpod-31e9a4a2599737346a5210041f702b0010a9d6679fb55b035d6fa0cb3d79d191.scope: Deactivated successfully.
Jan 30 23:18:00 np0005603435 podman[75989]: 2026-01-31 04:18:00.333466451 +0000 UTC m=+1.583892584 container died 31e9a4a2599737346a5210041f702b0010a9d6679fb55b035d6fa0cb3d79d191 (image=quay.io/ceph/ceph:v20, name=gallant_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 30 23:18:00 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr[75595]: ignoring --setuser ceph since I am not root
Jan 30 23:18:00 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr[75595]: ignoring --setgroup ceph since I am not root
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: pidfile_write: ignore empty --pid-file
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'alerts'
Jan 30 23:18:00 np0005603435 systemd[1]: var-lib-containers-storage-overlay-ba5246b843cf2cc7d8b09f862b206034c41b2de4b08d157099fb720ebefff7e7-merged.mount: Deactivated successfully.
Jan 30 23:18:00 np0005603435 podman[75989]: 2026-01-31 04:18:00.474807751 +0000 UTC m=+1.725233904 container remove 31e9a4a2599737346a5210041f702b0010a9d6679fb55b035d6fa0cb3d79d191 (image=quay.io/ceph/ceph:v20, name=gallant_jones, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:00 np0005603435 systemd[1]: libpod-conmon-31e9a4a2599737346a5210041f702b0010a9d6679fb55b035d6fa0cb3d79d191.scope: Deactivated successfully.
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'balancer'
Jan 30 23:18:00 np0005603435 podman[76065]: 2026-01-31 04:18:00.545775039 +0000 UTC m=+0.053730957 container create 339be83f7ef5340665adacdadb1dd1a5514d685e5f3bf925a138304b4f043104 (image=quay.io/ceph/ceph:v20, name=awesome_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 30 23:18:00 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'cephadm'
Jan 30 23:18:00 np0005603435 systemd[1]: Started libpod-conmon-339be83f7ef5340665adacdadb1dd1a5514d685e5f3bf925a138304b4f043104.scope.
Jan 30 23:18:00 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3b61ed70458a209fc9ffb9abe1e86c8b29b2570f2dd52d6563288685845d3d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3b61ed70458a209fc9ffb9abe1e86c8b29b2570f2dd52d6563288685845d3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3b61ed70458a209fc9ffb9abe1e86c8b29b2570f2dd52d6563288685845d3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:00 np0005603435 podman[76065]: 2026-01-31 04:18:00.521911215 +0000 UTC m=+0.029867193 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:00 np0005603435 podman[76065]: 2026-01-31 04:18:00.620556879 +0000 UTC m=+0.128512857 container init 339be83f7ef5340665adacdadb1dd1a5514d685e5f3bf925a138304b4f043104 (image=quay.io/ceph/ceph:v20, name=awesome_bartik, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 30 23:18:00 np0005603435 podman[76065]: 2026-01-31 04:18:00.626894144 +0000 UTC m=+0.134850062 container start 339be83f7ef5340665adacdadb1dd1a5514d685e5f3bf925a138304b4f043104 (image=quay.io/ceph/ceph:v20, name=awesome_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 30 23:18:00 np0005603435 podman[76065]: 2026-01-31 04:18:00.630963014 +0000 UTC m=+0.138918962 container attach 339be83f7ef5340665adacdadb1dd1a5514d685e5f3bf925a138304b4f043104 (image=quay.io/ceph/ceph:v20, name=awesome_bartik, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 30 23:18:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 30 23:18:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/744444102' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 30 23:18:01 np0005603435 awesome_bartik[76081]: {
Jan 30 23:18:01 np0005603435 awesome_bartik[76081]:    "epoch": 5,
Jan 30 23:18:01 np0005603435 awesome_bartik[76081]:    "available": true,
Jan 30 23:18:01 np0005603435 awesome_bartik[76081]:    "active_name": "compute-0.wyngmr",
Jan 30 23:18:01 np0005603435 awesome_bartik[76081]:    "num_standby": 0
Jan 30 23:18:01 np0005603435 awesome_bartik[76081]: }
Jan 30 23:18:01 np0005603435 systemd[1]: libpod-339be83f7ef5340665adacdadb1dd1a5514d685e5f3bf925a138304b4f043104.scope: Deactivated successfully.
Jan 30 23:18:01 np0005603435 podman[76065]: 2026-01-31 04:18:01.164665649 +0000 UTC m=+0.672621537 container died 339be83f7ef5340665adacdadb1dd1a5514d685e5f3bf925a138304b4f043104 (image=quay.io/ceph/ceph:v20, name=awesome_bartik, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 30 23:18:01 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5a3b61ed70458a209fc9ffb9abe1e86c8b29b2570f2dd52d6563288685845d3d-merged.mount: Deactivated successfully.
Jan 30 23:18:01 np0005603435 podman[76065]: 2026-01-31 04:18:01.195412132 +0000 UTC m=+0.703368020 container remove 339be83f7ef5340665adacdadb1dd1a5514d685e5f3bf925a138304b4f043104 (image=quay.io/ceph/ceph:v20, name=awesome_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:01 np0005603435 systemd[1]: libpod-conmon-339be83f7ef5340665adacdadb1dd1a5514d685e5f3bf925a138304b4f043104.scope: Deactivated successfully.
Jan 30 23:18:01 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'crash'
Jan 30 23:18:01 np0005603435 podman[76130]: 2026-01-31 04:18:01.250206113 +0000 UTC m=+0.039510208 container create aca7448457d768f9a538ae0426d1db8b140c850ac9ec36042cb3558fd67cca9e (image=quay.io/ceph/ceph:v20, name=optimistic_keller, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:18:01 np0005603435 systemd[1]: Started libpod-conmon-aca7448457d768f9a538ae0426d1db8b140c850ac9ec36042cb3558fd67cca9e.scope.
Jan 30 23:18:01 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'dashboard'
Jan 30 23:18:01 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c94e14b70b921f171662988d239f4f32c414f3a59b8e0077b759d58b0a44c27/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c94e14b70b921f171662988d239f4f32c414f3a59b8e0077b759d58b0a44c27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c94e14b70b921f171662988d239f4f32c414f3a59b8e0077b759d58b0a44c27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:01 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/187200260' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 30 23:18:01 np0005603435 podman[76130]: 2026-01-31 04:18:01.229342562 +0000 UTC m=+0.018646727 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:01 np0005603435 podman[76130]: 2026-01-31 04:18:01.336905285 +0000 UTC m=+0.126209440 container init aca7448457d768f9a538ae0426d1db8b140c850ac9ec36042cb3558fd67cca9e (image=quay.io/ceph/ceph:v20, name=optimistic_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 30 23:18:01 np0005603435 podman[76130]: 2026-01-31 04:18:01.341323944 +0000 UTC m=+0.130628029 container start aca7448457d768f9a538ae0426d1db8b140c850ac9ec36042cb3558fd67cca9e (image=quay.io/ceph/ceph:v20, name=optimistic_keller, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 30 23:18:01 np0005603435 podman[76130]: 2026-01-31 04:18:01.343759123 +0000 UTC m=+0.133063298 container attach aca7448457d768f9a538ae0426d1db8b140c850ac9ec36042cb3558fd67cca9e (image=quay.io/ceph/ceph:v20, name=optimistic_keller, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 30 23:18:01 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'devicehealth'
Jan 30 23:18:02 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'diskprediction_local'
Jan 30 23:18:02 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr[75595]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 30 23:18:02 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr[75595]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 30 23:18:02 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr[75595]:  from numpy import show_config as show_numpy_config
Jan 30 23:18:02 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'influx'
Jan 30 23:18:02 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'insights'
Jan 30 23:18:02 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'iostat'
Jan 30 23:18:02 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'k8sevents'
Jan 30 23:18:02 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'localpool'
Jan 30 23:18:02 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'mds_autoscaler'
Jan 30 23:18:02 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'mirroring'
Jan 30 23:18:03 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'nfs'
Jan 30 23:18:03 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'orchestrator'
Jan 30 23:18:03 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'osd_perf_query'
Jan 30 23:18:03 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'osd_support'
Jan 30 23:18:03 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'pg_autoscaler'
Jan 30 23:18:03 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'progress'
Jan 30 23:18:03 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'prometheus'
Jan 30 23:18:04 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'rbd_support'
Jan 30 23:18:04 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'rgw'
Jan 30 23:18:04 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'rook'
Jan 30 23:18:04 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'selftest'
Jan 30 23:18:05 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'smb'
Jan 30 23:18:05 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'snap_schedule'
Jan 30 23:18:05 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'stats'
Jan 30 23:18:05 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'status'
Jan 30 23:18:05 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'telegraf'
Jan 30 23:18:05 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'telemetry'
Jan 30 23:18:05 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'test_orchestrator'
Jan 30 23:18:05 np0005603435 ceph-mgr[75599]: mgr[py] Loading python module 'volumes'
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [INF] : Active manager daemon compute-0.wyngmr restarted
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: ms_deliver_dispatch: unhandled message 0x55c2d9176000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wyngmr
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: mgr handle_mgr_map Activating!
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: mgr handle_mgr_map I am now activating
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.wyngmr(active, starting, since 0.0139853s)
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.wyngmr", "id": "compute-0.wyngmr"} v 0)
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "mgr metadata", "who": "compute-0.wyngmr", "id": "compute-0.wyngmr"} : dispatch
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "mds metadata"} : dispatch
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).mds e1 all = 1
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata"} : dispatch
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "mon metadata"} : dispatch
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: balancer
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Starting
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [INF] : Manager daemon compute-0.wyngmr is now available
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:18:06
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] No pools available
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: Active manager daemon compute-0.wyngmr restarted
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: Activating manager daemon compute-0.wyngmr
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: Manager daemon compute-0.wyngmr is now available
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: cephadm
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: crash
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: devicehealth
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: iostat
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [devicehealth INFO root] Starting
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: nfs
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: orchestrator
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: pg_autoscaler
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: progress
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [progress INFO root] Loading...
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [progress INFO root] No stored events to load
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [progress INFO root] Loaded [] historic events
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [progress INFO root] Loaded OSDMap, ready.
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] recovery thread starting
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] starting setup
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: rbd_support
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: status
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: telemetry
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wyngmr/mirror_snapshot_schedule"} v 0)
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wyngmr/mirror_snapshot_schedule"} : dispatch
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] PerfHandler: starting
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TaskHandler: starting
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wyngmr/trash_purge_schedule"} v 0)
Jan 30 23:18:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wyngmr/trash_purge_schedule"} : dispatch
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] setup complete
Jan 30 23:18:06 np0005603435 ceph-mgr[75599]: mgr load Constructed class from module: volumes
Jan 30 23:18:07 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.wyngmr(active, since 1.02471s)
Jan 30 23:18:07 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 30 23:18:07 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 30 23:18:07 np0005603435 optimistic_keller[76147]: {
Jan 30 23:18:07 np0005603435 optimistic_keller[76147]:    "mgrmap_epoch": 7,
Jan 30 23:18:07 np0005603435 optimistic_keller[76147]:    "initialized": true
Jan 30 23:18:07 np0005603435 optimistic_keller[76147]: }
Jan 30 23:18:07 np0005603435 systemd[1]: libpod-aca7448457d768f9a538ae0426d1db8b140c850ac9ec36042cb3558fd67cca9e.scope: Deactivated successfully.
Jan 30 23:18:07 np0005603435 podman[76130]: 2026-01-31 04:18:07.304582271 +0000 UTC m=+6.093886376 container died aca7448457d768f9a538ae0426d1db8b140c850ac9ec36042cb3558fd67cca9e (image=quay.io/ceph/ceph:v20, name=optimistic_keller, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 30 23:18:07 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5c94e14b70b921f171662988d239f4f32c414f3a59b8e0077b759d58b0a44c27-merged.mount: Deactivated successfully.
Jan 30 23:18:07 np0005603435 podman[76130]: 2026-01-31 04:18:07.339873164 +0000 UTC m=+6.129177289 container remove aca7448457d768f9a538ae0426d1db8b140c850ac9ec36042cb3558fd67cca9e (image=quay.io/ceph/ceph:v20, name=optimistic_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:07 np0005603435 systemd[1]: libpod-conmon-aca7448457d768f9a538ae0426d1db8b140c850ac9ec36042cb3558fd67cca9e.scope: Deactivated successfully.
Jan 30 23:18:07 np0005603435 podman[76293]: 2026-01-31 04:18:07.417163136 +0000 UTC m=+0.052327182 container create ccc043f3f79070c98c72ce541f65df77819fdbc77993dfe592f9b2d5b5a2e775 (image=quay.io/ceph/ceph:v20, name=focused_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:07 np0005603435 systemd[1]: Started libpod-conmon-ccc043f3f79070c98c72ce541f65df77819fdbc77993dfe592f9b2d5b5a2e775.scope.
Jan 30 23:18:07 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:07 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5099645ea1ca6a23dbc7bf35cae2ab0c6ae16e6a566b5635bc29f2a7ac139164/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:07 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5099645ea1ca6a23dbc7bf35cae2ab0c6ae16e6a566b5635bc29f2a7ac139164/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:07 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5099645ea1ca6a23dbc7bf35cae2ab0c6ae16e6a566b5635bc29f2a7ac139164/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:07 np0005603435 podman[76293]: 2026-01-31 04:18:07.402546039 +0000 UTC m=+0.037710095 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:07 np0005603435 podman[76293]: 2026-01-31 04:18:07.510777728 +0000 UTC m=+0.145941814 container init ccc043f3f79070c98c72ce541f65df77819fdbc77993dfe592f9b2d5b5a2e775 (image=quay.io/ceph/ceph:v20, name=focused_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:07 np0005603435 podman[76293]: 2026-01-31 04:18:07.515595186 +0000 UTC m=+0.150759252 container start ccc043f3f79070c98c72ce541f65df77819fdbc77993dfe592f9b2d5b5a2e775 (image=quay.io/ceph/ceph:v20, name=focused_greider, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:18:07 np0005603435 podman[76293]: 2026-01-31 04:18:07.522415623 +0000 UTC m=+0.157579689 container attach ccc043f3f79070c98c72ce541f65df77819fdbc77993dfe592f9b2d5b5a2e775 (image=quay.io/ceph/ceph:v20, name=focused_greider, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 30 23:18:07 np0005603435 ceph-mgr[75599]: [cephadm INFO cherrypy.error] [31/Jan/2026:04:18:07] ENGINE Bus STARTING
Jan 30 23:18:07 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : [31/Jan/2026:04:18:07] ENGINE Bus STARTING
Jan 30 23:18:07 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:07 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:07 np0005603435 ceph-mon[75307]: Found migration_current of "None". Setting to last migration.
Jan 30 23:18:07 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:07 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:07 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wyngmr/mirror_snapshot_schedule"} : dispatch
Jan 30 23:18:07 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wyngmr/trash_purge_schedule"} : dispatch
Jan 30 23:18:07 np0005603435 ceph-mgr[75599]: [cephadm INFO cherrypy.error] [31/Jan/2026:04:18:07] ENGINE Serving on https://192.168.122.100:7150
Jan 30 23:18:07 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : [31/Jan/2026:04:18:07] ENGINE Serving on https://192.168.122.100:7150
Jan 30 23:18:07 np0005603435 ceph-mgr[75599]: [cephadm INFO cherrypy.error] [31/Jan/2026:04:18:07] ENGINE Client ('192.168.122.100', 43704) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 30 23:18:07 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : [31/Jan/2026:04:18:07] ENGINE Client ('192.168.122.100', 43704) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 30 23:18:08 np0005603435 ceph-mgr[75599]: [cephadm INFO cherrypy.error] [31/Jan/2026:04:18:08] ENGINE Serving on http://192.168.122.100:8765
Jan 30 23:18:08 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : [31/Jan/2026:04:18:08] ENGINE Serving on http://192.168.122.100:8765
Jan 30 23:18:08 np0005603435 ceph-mgr[75599]: [cephadm INFO cherrypy.error] [31/Jan/2026:04:18:08] ENGINE Bus STARTED
Jan 30 23:18:08 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : [31/Jan/2026:04:18:08] ENGINE Bus STARTED
Jan 30 23:18:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 30 23:18:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 30 23:18:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Jan 30 23:18:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4044673370' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 30 23:18:08 np0005603435 ceph-mgr[75599]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 30 23:18:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019900092 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:18:08 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4044673370' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 30 23:18:09 np0005603435 focused_greider[76309]: module 'orchestrator' is already enabled (always-on)
Jan 30 23:18:09 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.wyngmr(active, since 2s)
Jan 30 23:18:09 np0005603435 systemd[1]: libpod-ccc043f3f79070c98c72ce541f65df77819fdbc77993dfe592f9b2d5b5a2e775.scope: Deactivated successfully.
Jan 30 23:18:09 np0005603435 podman[76293]: 2026-01-31 04:18:09.091668147 +0000 UTC m=+1.726832223 container died ccc043f3f79070c98c72ce541f65df77819fdbc77993dfe592f9b2d5b5a2e775 (image=quay.io/ceph/ceph:v20, name=focused_greider, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 30 23:18:09 np0005603435 ceph-mon[75307]: [31/Jan/2026:04:18:07] ENGINE Bus STARTING
Jan 30 23:18:09 np0005603435 ceph-mon[75307]: [31/Jan/2026:04:18:07] ENGINE Serving on https://192.168.122.100:7150
Jan 30 23:18:09 np0005603435 ceph-mon[75307]: [31/Jan/2026:04:18:07] ENGINE Client ('192.168.122.100', 43704) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 30 23:18:09 np0005603435 ceph-mon[75307]: [31/Jan/2026:04:18:08] ENGINE Serving on http://192.168.122.100:8765
Jan 30 23:18:09 np0005603435 ceph-mon[75307]: [31/Jan/2026:04:18:08] ENGINE Bus STARTED
Jan 30 23:18:09 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/4044673370' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 30 23:18:09 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5099645ea1ca6a23dbc7bf35cae2ab0c6ae16e6a566b5635bc29f2a7ac139164-merged.mount: Deactivated successfully.
Jan 30 23:18:09 np0005603435 podman[76293]: 2026-01-31 04:18:09.724003226 +0000 UTC m=+2.359167262 container remove ccc043f3f79070c98c72ce541f65df77819fdbc77993dfe592f9b2d5b5a2e775 (image=quay.io/ceph/ceph:v20, name=focused_greider, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:18:09 np0005603435 systemd[1]: libpod-conmon-ccc043f3f79070c98c72ce541f65df77819fdbc77993dfe592f9b2d5b5a2e775.scope: Deactivated successfully.
Jan 30 23:18:09 np0005603435 podman[76372]: 2026-01-31 04:18:09.811038767 +0000 UTC m=+0.067488823 container create 8d0d258145b1bbaedaa87fcc67d45a450454f4822712a9396936ffbfd8f566bb (image=quay.io/ceph/ceph:v20, name=adoring_faraday, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 30 23:18:09 np0005603435 systemd[1]: Started libpod-conmon-8d0d258145b1bbaedaa87fcc67d45a450454f4822712a9396936ffbfd8f566bb.scope.
Jan 30 23:18:09 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:09 np0005603435 podman[76372]: 2026-01-31 04:18:09.781458773 +0000 UTC m=+0.037908929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c67edcc4f81b1c625fa853a6d7ea1b01d20b129c6369c6419996f6cb2889fcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c67edcc4f81b1c625fa853a6d7ea1b01d20b129c6369c6419996f6cb2889fcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c67edcc4f81b1c625fa853a6d7ea1b01d20b129c6369c6419996f6cb2889fcf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:09 np0005603435 podman[76372]: 2026-01-31 04:18:09.923790467 +0000 UTC m=+0.180240543 container init 8d0d258145b1bbaedaa87fcc67d45a450454f4822712a9396936ffbfd8f566bb (image=quay.io/ceph/ceph:v20, name=adoring_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:18:09 np0005603435 podman[76372]: 2026-01-31 04:18:09.928869911 +0000 UTC m=+0.185319967 container start 8d0d258145b1bbaedaa87fcc67d45a450454f4822712a9396936ffbfd8f566bb (image=quay.io/ceph/ceph:v20, name=adoring_faraday, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:18:09 np0005603435 podman[76372]: 2026-01-31 04:18:09.939344518 +0000 UTC m=+0.195794594 container attach 8d0d258145b1bbaedaa87fcc67d45a450454f4822712a9396936ffbfd8f566bb (image=quay.io/ceph/ceph:v20, name=adoring_faraday, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:10 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/4044673370' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 30 23:18:10 np0005603435 ceph-mgr[75599]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 30 23:18:10 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 30 23:18:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 30 23:18:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 30 23:18:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 30 23:18:10 np0005603435 systemd[1]: libpod-8d0d258145b1bbaedaa87fcc67d45a450454f4822712a9396936ffbfd8f566bb.scope: Deactivated successfully.
Jan 30 23:18:10 np0005603435 podman[76372]: 2026-01-31 04:18:10.398082337 +0000 UTC m=+0.654532393 container died 8d0d258145b1bbaedaa87fcc67d45a450454f4822712a9396936ffbfd8f566bb (image=quay.io/ceph/ceph:v20, name=adoring_faraday, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:10 np0005603435 systemd[1]: var-lib-containers-storage-overlay-1c67edcc4f81b1c625fa853a6d7ea1b01d20b129c6369c6419996f6cb2889fcf-merged.mount: Deactivated successfully.
Jan 30 23:18:10 np0005603435 podman[76372]: 2026-01-31 04:18:10.451661759 +0000 UTC m=+0.708111815 container remove 8d0d258145b1bbaedaa87fcc67d45a450454f4822712a9396936ffbfd8f566bb (image=quay.io/ceph/ceph:v20, name=adoring_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 30 23:18:10 np0005603435 systemd[1]: libpod-conmon-8d0d258145b1bbaedaa87fcc67d45a450454f4822712a9396936ffbfd8f566bb.scope: Deactivated successfully.
Jan 30 23:18:10 np0005603435 podman[76427]: 2026-01-31 04:18:10.51991225 +0000 UTC m=+0.052028465 container create 30c8cfd372edbb609aff812ad2a188d8ffefeda276c83a19c1ad368eb497198b (image=quay.io/ceph/ceph:v20, name=elastic_mahavira, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:10 np0005603435 systemd[1]: Started libpod-conmon-30c8cfd372edbb609aff812ad2a188d8ffefeda276c83a19c1ad368eb497198b.scope.
Jan 30 23:18:10 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:10 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7af3f28d34c5b69f12de11d013a0a3c2a5eb8a63cf294a9fb47ba1b2189b17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:10 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7af3f28d34c5b69f12de11d013a0a3c2a5eb8a63cf294a9fb47ba1b2189b17/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:10 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7af3f28d34c5b69f12de11d013a0a3c2a5eb8a63cf294a9fb47ba1b2189b17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:10 np0005603435 podman[76427]: 2026-01-31 04:18:10.577955391 +0000 UTC m=+0.110071626 container init 30c8cfd372edbb609aff812ad2a188d8ffefeda276c83a19c1ad368eb497198b (image=quay.io/ceph/ceph:v20, name=elastic_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 30 23:18:10 np0005603435 podman[76427]: 2026-01-31 04:18:10.582679676 +0000 UTC m=+0.114795891 container start 30c8cfd372edbb609aff812ad2a188d8ffefeda276c83a19c1ad368eb497198b (image=quay.io/ceph/ceph:v20, name=elastic_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 30 23:18:10 np0005603435 podman[76427]: 2026-01-31 04:18:10.586281994 +0000 UTC m=+0.118398209 container attach 30c8cfd372edbb609aff812ad2a188d8ffefeda276c83a19c1ad368eb497198b (image=quay.io/ceph/ceph:v20, name=elastic_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 30 23:18:10 np0005603435 podman[76427]: 2026-01-31 04:18:10.490631253 +0000 UTC m=+0.022747488 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:10 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:10 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 30 23:18:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 30 23:18:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:10 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Set ssh ssh_user
Jan 30 23:18:10 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 30 23:18:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 30 23:18:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:11 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Set ssh ssh_config
Jan 30 23:18:11 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 30 23:18:11 np0005603435 ceph-mgr[75599]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 30 23:18:11 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 30 23:18:11 np0005603435 elastic_mahavira[76443]: ssh user set to ceph-admin. sudo will be used
Jan 30 23:18:11 np0005603435 systemd[1]: libpod-30c8cfd372edbb609aff812ad2a188d8ffefeda276c83a19c1ad368eb497198b.scope: Deactivated successfully.
Jan 30 23:18:11 np0005603435 podman[76427]: 2026-01-31 04:18:11.023338813 +0000 UTC m=+0.555455058 container died 30c8cfd372edbb609aff812ad2a188d8ffefeda276c83a19c1ad368eb497198b (image=quay.io/ceph/ceph:v20, name=elastic_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 30 23:18:11 np0005603435 systemd[1]: var-lib-containers-storage-overlay-fe7af3f28d34c5b69f12de11d013a0a3c2a5eb8a63cf294a9fb47ba1b2189b17-merged.mount: Deactivated successfully.
Jan 30 23:18:11 np0005603435 podman[76427]: 2026-01-31 04:18:11.068819647 +0000 UTC m=+0.600935872 container remove 30c8cfd372edbb609aff812ad2a188d8ffefeda276c83a19c1ad368eb497198b (image=quay.io/ceph/ceph:v20, name=elastic_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 30 23:18:11 np0005603435 systemd[1]: libpod-conmon-30c8cfd372edbb609aff812ad2a188d8ffefeda276c83a19c1ad368eb497198b.scope: Deactivated successfully.
Jan 30 23:18:11 np0005603435 podman[76482]: 2026-01-31 04:18:11.135577621 +0000 UTC m=+0.049329988 container create d64279e530c17c1f9ec0972f60a2a820f8f59879a48e220726f972e80af2b14e (image=quay.io/ceph/ceph:v20, name=friendly_lichterman, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:11 np0005603435 systemd[1]: Started libpod-conmon-d64279e530c17c1f9ec0972f60a2a820f8f59879a48e220726f972e80af2b14e.scope.
Jan 30 23:18:11 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267f5847ce5c50838c6c5c7c929e57ab87756b9328ad453f28fa3f6ee18650e7/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267f5847ce5c50838c6c5c7c929e57ab87756b9328ad453f28fa3f6ee18650e7/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267f5847ce5c50838c6c5c7c929e57ab87756b9328ad453f28fa3f6ee18650e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267f5847ce5c50838c6c5c7c929e57ab87756b9328ad453f28fa3f6ee18650e7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267f5847ce5c50838c6c5c7c929e57ab87756b9328ad453f28fa3f6ee18650e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:11 np0005603435 podman[76482]: 2026-01-31 04:18:11.108603891 +0000 UTC m=+0.022356318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:11 np0005603435 podman[76482]: 2026-01-31 04:18:11.234258307 +0000 UTC m=+0.148010694 container init d64279e530c17c1f9ec0972f60a2a820f8f59879a48e220726f972e80af2b14e (image=quay.io/ceph/ceph:v20, name=friendly_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 30 23:18:11 np0005603435 podman[76482]: 2026-01-31 04:18:11.238744617 +0000 UTC m=+0.152496954 container start d64279e530c17c1f9ec0972f60a2a820f8f59879a48e220726f972e80af2b14e (image=quay.io/ceph/ceph:v20, name=friendly_lichterman, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:18:11 np0005603435 podman[76482]: 2026-01-31 04:18:11.242697213 +0000 UTC m=+0.156449550 container attach d64279e530c17c1f9ec0972f60a2a820f8f59879a48e220726f972e80af2b14e (image=quay.io/ceph/ceph:v20, name=friendly_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:18:11 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:11 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:11 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:11 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 30 23:18:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 30 23:18:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:11 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 30 23:18:11 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 30 23:18:11 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Set ssh private key
Jan 30 23:18:11 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 30 23:18:11 np0005603435 systemd[1]: libpod-d64279e530c17c1f9ec0972f60a2a820f8f59879a48e220726f972e80af2b14e.scope: Deactivated successfully.
Jan 30 23:18:11 np0005603435 podman[76525]: 2026-01-31 04:18:11.72992021 +0000 UTC m=+0.034834633 container died d64279e530c17c1f9ec0972f60a2a820f8f59879a48e220726f972e80af2b14e (image=quay.io/ceph/ceph:v20, name=friendly_lichterman, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 30 23:18:11 np0005603435 systemd[1]: var-lib-containers-storage-overlay-267f5847ce5c50838c6c5c7c929e57ab87756b9328ad453f28fa3f6ee18650e7-merged.mount: Deactivated successfully.
Jan 30 23:18:11 np0005603435 podman[76525]: 2026-01-31 04:18:11.762750054 +0000 UTC m=+0.067664397 container remove d64279e530c17c1f9ec0972f60a2a820f8f59879a48e220726f972e80af2b14e (image=quay.io/ceph/ceph:v20, name=friendly_lichterman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 30 23:18:11 np0005603435 systemd[1]: libpod-conmon-d64279e530c17c1f9ec0972f60a2a820f8f59879a48e220726f972e80af2b14e.scope: Deactivated successfully.
Jan 30 23:18:11 np0005603435 podman[76540]: 2026-01-31 04:18:11.838591611 +0000 UTC m=+0.053648745 container create 823f32da1c8d5021d048b0372a49a2b1bc8c7e94892f702226e3e387962747e0 (image=quay.io/ceph/ceph:v20, name=cool_jennings, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:18:11 np0005603435 systemd[1]: Started libpod-conmon-823f32da1c8d5021d048b0372a49a2b1bc8c7e94892f702226e3e387962747e0.scope.
Jan 30 23:18:11 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045e02fc4f8e0b83ec2ee175c2d2ac43c338ac1e4f51a439449fe0926ee91454/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045e02fc4f8e0b83ec2ee175c2d2ac43c338ac1e4f51a439449fe0926ee91454/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045e02fc4f8e0b83ec2ee175c2d2ac43c338ac1e4f51a439449fe0926ee91454/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045e02fc4f8e0b83ec2ee175c2d2ac43c338ac1e4f51a439449fe0926ee91454/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045e02fc4f8e0b83ec2ee175c2d2ac43c338ac1e4f51a439449fe0926ee91454/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:11 np0005603435 podman[76540]: 2026-01-31 04:18:11.815942906 +0000 UTC m=+0.031000070 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:11 np0005603435 podman[76540]: 2026-01-31 04:18:11.912801547 +0000 UTC m=+0.127858681 container init 823f32da1c8d5021d048b0372a49a2b1bc8c7e94892f702226e3e387962747e0 (image=quay.io/ceph/ceph:v20, name=cool_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 30 23:18:11 np0005603435 podman[76540]: 2026-01-31 04:18:11.918328873 +0000 UTC m=+0.133386007 container start 823f32da1c8d5021d048b0372a49a2b1bc8c7e94892f702226e3e387962747e0 (image=quay.io/ceph/ceph:v20, name=cool_jennings, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:11 np0005603435 podman[76540]: 2026-01-31 04:18:11.920872435 +0000 UTC m=+0.135929569 container attach 823f32da1c8d5021d048b0372a49a2b1bc8c7e94892f702226e3e387962747e0 (image=quay.io/ceph/ceph:v20, name=cool_jennings, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:12 np0005603435 ceph-mgr[75599]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 30 23:18:12 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 30 23:18:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 30 23:18:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:12 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 30 23:18:12 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 30 23:18:12 np0005603435 systemd[1]: libpod-823f32da1c8d5021d048b0372a49a2b1bc8c7e94892f702226e3e387962747e0.scope: Deactivated successfully.
Jan 30 23:18:12 np0005603435 podman[76540]: 2026-01-31 04:18:12.304896206 +0000 UTC m=+0.519953370 container died 823f32da1c8d5021d048b0372a49a2b1bc8c7e94892f702226e3e387962747e0 (image=quay.io/ceph/ceph:v20, name=cool_jennings, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:12 np0005603435 systemd[1]: var-lib-containers-storage-overlay-045e02fc4f8e0b83ec2ee175c2d2ac43c338ac1e4f51a439449fe0926ee91454-merged.mount: Deactivated successfully.
Jan 30 23:18:12 np0005603435 podman[76540]: 2026-01-31 04:18:12.342343782 +0000 UTC m=+0.557400906 container remove 823f32da1c8d5021d048b0372a49a2b1bc8c7e94892f702226e3e387962747e0 (image=quay.io/ceph/ceph:v20, name=cool_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 30 23:18:12 np0005603435 systemd[1]: libpod-conmon-823f32da1c8d5021d048b0372a49a2b1bc8c7e94892f702226e3e387962747e0.scope: Deactivated successfully.
Jan 30 23:18:12 np0005603435 ceph-mon[75307]: Set ssh ssh_user
Jan 30 23:18:12 np0005603435 ceph-mon[75307]: Set ssh ssh_config
Jan 30 23:18:12 np0005603435 ceph-mon[75307]: ssh user set to ceph-admin. sudo will be used
Jan 30 23:18:12 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:12 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:12 np0005603435 podman[76594]: 2026-01-31 04:18:12.412717915 +0000 UTC m=+0.053850609 container create 395abfbce81f17505958788da3769cd035a629fb7289049ee367a252dae66d9d (image=quay.io/ceph/ceph:v20, name=beautiful_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Jan 30 23:18:12 np0005603435 systemd[1]: Started libpod-conmon-395abfbce81f17505958788da3769cd035a629fb7289049ee367a252dae66d9d.scope.
Jan 30 23:18:12 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:12 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7462a25834cd1d1c21c2897f9d250410e8801f0b581c5d7fb28f7ea71ce13da/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:12 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7462a25834cd1d1c21c2897f9d250410e8801f0b581c5d7fb28f7ea71ce13da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:12 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7462a25834cd1d1c21c2897f9d250410e8801f0b581c5d7fb28f7ea71ce13da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:12 np0005603435 podman[76594]: 2026-01-31 04:18:12.477759617 +0000 UTC m=+0.118892281 container init 395abfbce81f17505958788da3769cd035a629fb7289049ee367a252dae66d9d (image=quay.io/ceph/ceph:v20, name=beautiful_johnson, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:18:12 np0005603435 podman[76594]: 2026-01-31 04:18:12.386726709 +0000 UTC m=+0.027859453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:12 np0005603435 podman[76594]: 2026-01-31 04:18:12.48441019 +0000 UTC m=+0.125542894 container start 395abfbce81f17505958788da3769cd035a629fb7289049ee367a252dae66d9d (image=quay.io/ceph/ceph:v20, name=beautiful_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:12 np0005603435 podman[76594]: 2026-01-31 04:18:12.487313781 +0000 UTC m=+0.128446445 container attach 395abfbce81f17505958788da3769cd035a629fb7289049ee367a252dae66d9d (image=quay.io/ceph/ceph:v20, name=beautiful_johnson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:12 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 30 23:18:12 np0005603435 beautiful_johnson[76611]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCy9XJK2Q6N9c7ELUdoHKq+ADRPIiAt0cr4KmwMLmwfGHw6MFSa3q+uMwq+invzUt5cUaBwd12ZrcwzvQOsjdt7LBHO7kM5qjpGfDUZ6IQfGH9sDxWP9OiD+nujEpO2EH2kb7VS2T40zsrnCUSi6fwHxU9Ip1UMrSxACe/Cu3Ol7Ul1wy/DbKgQk+6m4cgJ1VGNYxNmrnajG8fH1M2Dw9pf+Jdd6oeve0JbNQEirfkPnSFTFdeyhUPataJDa9iI3AK07HS+LMllZkqWR6zZKpPRyr9ybW/BPwVQP8GQbEbAyv+hbmr1iZMhUrdnshPYyiEo+/JnzEetKqwY+1eHz6g/zOJJPc1cfrN0cfKc2f7xzWHq+Waidlbg7MJQDLmhfbkhJt9t6lfEUNinLWWUxnSu3ZPmQ+zQZt/uuZKpktwoFRMIvQUCP7fg5gsGg4EDZz45EWw7tD48YCQMW6rkaXlSajaeCE5jKCtArrwOjWX3JeuZUSwPZ+6aVItHISyFvIs= zuul@controller
Jan 30 23:18:12 np0005603435 systemd[1]: libpod-395abfbce81f17505958788da3769cd035a629fb7289049ee367a252dae66d9d.scope: Deactivated successfully.
Jan 30 23:18:12 np0005603435 podman[76594]: 2026-01-31 04:18:12.845643722 +0000 UTC m=+0.486776426 container died 395abfbce81f17505958788da3769cd035a629fb7289049ee367a252dae66d9d (image=quay.io/ceph/ceph:v20, name=beautiful_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 30 23:18:12 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b7462a25834cd1d1c21c2897f9d250410e8801f0b581c5d7fb28f7ea71ce13da-merged.mount: Deactivated successfully.
Jan 30 23:18:12 np0005603435 podman[76594]: 2026-01-31 04:18:12.887909377 +0000 UTC m=+0.529042041 container remove 395abfbce81f17505958788da3769cd035a629fb7289049ee367a252dae66d9d (image=quay.io/ceph/ceph:v20, name=beautiful_johnson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:12 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:12 np0005603435 systemd[1]: libpod-conmon-395abfbce81f17505958788da3769cd035a629fb7289049ee367a252dae66d9d.scope: Deactivated successfully.
Jan 30 23:18:12 np0005603435 podman[76649]: 2026-01-31 04:18:12.955199204 +0000 UTC m=+0.049089423 container create 96cc8a4a6be6c1760dde155f514a2223a79985bf912ac2c83f1d6103f9362e4a (image=quay.io/ceph/ceph:v20, name=elated_keldysh, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 30 23:18:12 np0005603435 systemd[1]: Started libpod-conmon-96cc8a4a6be6c1760dde155f514a2223a79985bf912ac2c83f1d6103f9362e4a.scope.
Jan 30 23:18:13 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:13 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a78982ae002697d39e8d628347ded0cd533411296d3d2825a0b2c3ea32a88072/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:13 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a78982ae002697d39e8d628347ded0cd533411296d3d2825a0b2c3ea32a88072/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:13 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a78982ae002697d39e8d628347ded0cd533411296d3d2825a0b2c3ea32a88072/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:13 np0005603435 podman[76649]: 2026-01-31 04:18:12.927106916 +0000 UTC m=+0.020997185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:13 np0005603435 podman[76649]: 2026-01-31 04:18:13.038935904 +0000 UTC m=+0.132826123 container init 96cc8a4a6be6c1760dde155f514a2223a79985bf912ac2c83f1d6103f9362e4a (image=quay.io/ceph/ceph:v20, name=elated_keldysh, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:18:13 np0005603435 podman[76649]: 2026-01-31 04:18:13.046263643 +0000 UTC m=+0.140153832 container start 96cc8a4a6be6c1760dde155f514a2223a79985bf912ac2c83f1d6103f9362e4a (image=quay.io/ceph/ceph:v20, name=elated_keldysh, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 30 23:18:13 np0005603435 podman[76649]: 2026-01-31 04:18:13.049740188 +0000 UTC m=+0.143630417 container attach 96cc8a4a6be6c1760dde155f514a2223a79985bf912ac2c83f1d6103f9362e4a (image=quay.io/ceph/ceph:v20, name=elated_keldysh, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:18:13 np0005603435 ceph-mon[75307]: Set ssh ssh_identity_key
Jan 30 23:18:13 np0005603435 ceph-mon[75307]: Set ssh private key
Jan 30 23:18:13 np0005603435 ceph-mon[75307]: Set ssh ssh_identity_pub
Jan 30 23:18:13 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 30 23:18:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052542 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:18:13 np0005603435 systemd[1]: Created slice User Slice of UID 42477.
Jan 30 23:18:13 np0005603435 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 30 23:18:13 np0005603435 systemd-logind[816]: New session 21 of user ceph-admin.
Jan 30 23:18:13 np0005603435 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 30 23:18:13 np0005603435 systemd[1]: Starting User Manager for UID 42477...
Jan 30 23:18:13 np0005603435 systemd-logind[816]: New session 23 of user ceph-admin.
Jan 30 23:18:13 np0005603435 systemd[76696]: Queued start job for default target Main User Target.
Jan 30 23:18:13 np0005603435 systemd[76696]: Created slice User Application Slice.
Jan 30 23:18:13 np0005603435 systemd[76696]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 30 23:18:13 np0005603435 systemd[76696]: Started Daily Cleanup of User's Temporary Directories.
Jan 30 23:18:13 np0005603435 systemd[76696]: Reached target Paths.
Jan 30 23:18:13 np0005603435 systemd[76696]: Reached target Timers.
Jan 30 23:18:13 np0005603435 systemd[76696]: Starting D-Bus User Message Bus Socket...
Jan 30 23:18:13 np0005603435 systemd[76696]: Starting Create User's Volatile Files and Directories...
Jan 30 23:18:13 np0005603435 systemd[76696]: Listening on D-Bus User Message Bus Socket.
Jan 30 23:18:13 np0005603435 systemd[76696]: Reached target Sockets.
Jan 30 23:18:14 np0005603435 systemd[76696]: Finished Create User's Volatile Files and Directories.
Jan 30 23:18:14 np0005603435 systemd[76696]: Reached target Basic System.
Jan 30 23:18:14 np0005603435 systemd[76696]: Reached target Main User Target.
Jan 30 23:18:14 np0005603435 systemd[76696]: Startup finished in 180ms.
Jan 30 23:18:14 np0005603435 systemd[1]: Started User Manager for UID 42477.
Jan 30 23:18:14 np0005603435 systemd[1]: Started Session 21 of User ceph-admin.
Jan 30 23:18:14 np0005603435 systemd[1]: Started Session 23 of User ceph-admin.
Jan 30 23:18:14 np0005603435 ceph-mgr[75599]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 30 23:18:14 np0005603435 systemd-logind[816]: New session 24 of user ceph-admin.
Jan 30 23:18:14 np0005603435 systemd[1]: Started Session 24 of User ceph-admin.
Jan 30 23:18:14 np0005603435 systemd-logind[816]: New session 25 of user ceph-admin.
Jan 30 23:18:14 np0005603435 systemd[1]: Started Session 25 of User ceph-admin.
Jan 30 23:18:14 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 30 23:18:14 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 30 23:18:14 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:15 np0005603435 systemd-logind[816]: New session 26 of user ceph-admin.
Jan 30 23:18:15 np0005603435 systemd[1]: Started Session 26 of User ceph-admin.
Jan 30 23:18:15 np0005603435 ceph-mon[75307]: Deploying cephadm binary to compute-0
Jan 30 23:18:15 np0005603435 systemd-logind[816]: New session 27 of user ceph-admin.
Jan 30 23:18:15 np0005603435 systemd[1]: Started Session 27 of User ceph-admin.
Jan 30 23:18:15 np0005603435 systemd-logind[816]: New session 28 of user ceph-admin.
Jan 30 23:18:15 np0005603435 systemd[1]: Started Session 28 of User ceph-admin.
Jan 30 23:18:16 np0005603435 systemd-logind[816]: New session 29 of user ceph-admin.
Jan 30 23:18:16 np0005603435 systemd[1]: Started Session 29 of User ceph-admin.
Jan 30 23:18:16 np0005603435 ceph-mgr[75599]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 30 23:18:16 np0005603435 systemd-logind[816]: New session 30 of user ceph-admin.
Jan 30 23:18:16 np0005603435 systemd[1]: Started Session 30 of User ceph-admin.
Jan 30 23:18:16 np0005603435 systemd-logind[816]: New session 31 of user ceph-admin.
Jan 30 23:18:16 np0005603435 systemd[1]: Started Session 31 of User ceph-admin.
Jan 30 23:18:16 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:18 np0005603435 systemd-logind[816]: New session 32 of user ceph-admin.
Jan 30 23:18:18 np0005603435 ceph-mgr[75599]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 30 23:18:18 np0005603435 systemd[1]: Started Session 32 of User ceph-admin.
Jan 30 23:18:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054700 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:18:18 np0005603435 systemd-logind[816]: New session 33 of user ceph-admin.
Jan 30 23:18:18 np0005603435 systemd[1]: Started Session 33 of User ceph-admin.
Jan 30 23:18:18 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 30 23:18:19 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:19 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Added host compute-0
Jan 30 23:18:19 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 30 23:18:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 30 23:18:19 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 30 23:18:19 np0005603435 elated_keldysh[76664]: Added host 'compute-0' with addr '192.168.122.100'
Jan 30 23:18:19 np0005603435 systemd[1]: libpod-96cc8a4a6be6c1760dde155f514a2223a79985bf912ac2c83f1d6103f9362e4a.scope: Deactivated successfully.
Jan 30 23:18:19 np0005603435 podman[76649]: 2026-01-31 04:18:19.080715604 +0000 UTC m=+6.174605823 container died 96cc8a4a6be6c1760dde155f514a2223a79985bf912ac2c83f1d6103f9362e4a (image=quay.io/ceph/ceph:v20, name=elated_keldysh, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:18:19 np0005603435 systemd[1]: var-lib-containers-storage-overlay-a78982ae002697d39e8d628347ded0cd533411296d3d2825a0b2c3ea32a88072-merged.mount: Deactivated successfully.
Jan 30 23:18:19 np0005603435 podman[76649]: 2026-01-31 04:18:19.129810347 +0000 UTC m=+6.223700546 container remove 96cc8a4a6be6c1760dde155f514a2223a79985bf912ac2c83f1d6103f9362e4a (image=quay.io/ceph/ceph:v20, name=elated_keldysh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:19 np0005603435 systemd[1]: libpod-conmon-96cc8a4a6be6c1760dde155f514a2223a79985bf912ac2c83f1d6103f9362e4a.scope: Deactivated successfully.
Jan 30 23:18:19 np0005603435 podman[77087]: 2026-01-31 04:18:19.221883709 +0000 UTC m=+0.064741891 container create 46653ec93befe39688b5547c6b0fb2e1413a2a1b392fe364a1e88eab72ad3525 (image=quay.io/ceph/ceph:v20, name=sleepy_blackwell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:18:19 np0005603435 systemd[1]: Started libpod-conmon-46653ec93befe39688b5547c6b0fb2e1413a2a1b392fe364a1e88eab72ad3525.scope.
Jan 30 23:18:19 np0005603435 podman[77087]: 2026-01-31 04:18:19.195329076 +0000 UTC m=+0.038187338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:19 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:19 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc60e5004b8b2d643cc648d1ecfb71990ff431aab9a15ee0333c94b4552e10b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:19 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc60e5004b8b2d643cc648d1ecfb71990ff431aab9a15ee0333c94b4552e10b0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:19 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc60e5004b8b2d643cc648d1ecfb71990ff431aab9a15ee0333c94b4552e10b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:19 np0005603435 podman[77087]: 2026-01-31 04:18:19.329846515 +0000 UTC m=+0.172704727 container init 46653ec93befe39688b5547c6b0fb2e1413a2a1b392fe364a1e88eab72ad3525 (image=quay.io/ceph/ceph:v20, name=sleepy_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 30 23:18:19 np0005603435 podman[77087]: 2026-01-31 04:18:19.336616344 +0000 UTC m=+0.179474526 container start 46653ec93befe39688b5547c6b0fb2e1413a2a1b392fe364a1e88eab72ad3525 (image=quay.io/ceph/ceph:v20, name=sleepy_blackwell, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:18:19 np0005603435 podman[77087]: 2026-01-31 04:18:19.340506716 +0000 UTC m=+0.183364918 container attach 46653ec93befe39688b5547c6b0fb2e1413a2a1b392fe364a1e88eab72ad3525 (image=quay.io/ceph/ceph:v20, name=sleepy_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 30 23:18:19 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 30 23:18:19 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 30 23:18:19 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 30 23:18:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 30 23:18:19 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:19 np0005603435 sleepy_blackwell[77127]: Scheduled mon update...
Jan 30 23:18:19 np0005603435 podman[77087]: 2026-01-31 04:18:19.7694282 +0000 UTC m=+0.612286372 container died 46653ec93befe39688b5547c6b0fb2e1413a2a1b392fe364a1e88eab72ad3525 (image=quay.io/ceph/ceph:v20, name=sleepy_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:18:19 np0005603435 systemd[1]: libpod-46653ec93befe39688b5547c6b0fb2e1413a2a1b392fe364a1e88eab72ad3525.scope: Deactivated successfully.
Jan 30 23:18:19 np0005603435 systemd[1]: var-lib-containers-storage-overlay-bc60e5004b8b2d643cc648d1ecfb71990ff431aab9a15ee0333c94b4552e10b0-merged.mount: Deactivated successfully.
Jan 30 23:18:19 np0005603435 podman[77087]: 2026-01-31 04:18:19.952601872 +0000 UTC m=+0.795460084 container remove 46653ec93befe39688b5547c6b0fb2e1413a2a1b392fe364a1e88eab72ad3525 (image=quay.io/ceph/ceph:v20, name=sleepy_blackwell, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:19 np0005603435 systemd[1]: libpod-conmon-46653ec93befe39688b5547c6b0fb2e1413a2a1b392fe364a1e88eab72ad3525.scope: Deactivated successfully.
Jan 30 23:18:20 np0005603435 podman[77193]: 2026-01-31 04:18:20.039029822 +0000 UTC m=+0.064612969 container create 88e5e0a64f0fc8bc80c30c8e2f83ce5ed06d197168cba2a8104a513c7d64eeeb (image=quay.io/ceph/ceph:v20, name=elastic_newton, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 30 23:18:20 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:20 np0005603435 ceph-mon[75307]: Added host compute-0
Jan 30 23:18:20 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:20 np0005603435 systemd[1]: Started libpod-conmon-88e5e0a64f0fc8bc80c30c8e2f83ce5ed06d197168cba2a8104a513c7d64eeeb.scope.
Jan 30 23:18:20 np0005603435 podman[77193]: 2026-01-31 04:18:20.005492394 +0000 UTC m=+0.031075561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:20 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32da2db96e932ce1e9199314d661038152aebdb1fe63a753a8085d0980cc9ac7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32da2db96e932ce1e9199314d661038152aebdb1fe63a753a8085d0980cc9ac7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32da2db96e932ce1e9199314d661038152aebdb1fe63a753a8085d0980cc9ac7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:20 np0005603435 podman[77193]: 2026-01-31 04:18:20.150020989 +0000 UTC m=+0.175604126 container init 88e5e0a64f0fc8bc80c30c8e2f83ce5ed06d197168cba2a8104a513c7d64eeeb (image=quay.io/ceph/ceph:v20, name=elastic_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:20 np0005603435 podman[77193]: 2026-01-31 04:18:20.1577328 +0000 UTC m=+0.183315937 container start 88e5e0a64f0fc8bc80c30c8e2f83ce5ed06d197168cba2a8104a513c7d64eeeb (image=quay.io/ceph/ceph:v20, name=elastic_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:20 np0005603435 podman[77193]: 2026-01-31 04:18:20.160780671 +0000 UTC m=+0.186363808 container attach 88e5e0a64f0fc8bc80c30c8e2f83ce5ed06d197168cba2a8104a513c7d64eeeb (image=quay.io/ceph/ceph:v20, name=elastic_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 30 23:18:20 np0005603435 ceph-mgr[75599]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 30 23:18:20 np0005603435 podman[77146]: 2026-01-31 04:18:20.449936693 +0000 UTC m=+1.022391084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:20 np0005603435 podman[77248]: 2026-01-31 04:18:20.600477509 +0000 UTC m=+0.060695747 container create be27e186685992227e6b1f98ab4a1fdb88a24daff5dc90c756cb62212c57e8e5 (image=quay.io/ceph/ceph:v20, name=competent_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 30 23:18:20 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 30 23:18:20 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 30 23:18:20 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 30 23:18:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 30 23:18:20 np0005603435 systemd[1]: Started libpod-conmon-be27e186685992227e6b1f98ab4a1fdb88a24daff5dc90c756cb62212c57e8e5.scope.
Jan 30 23:18:20 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:20 np0005603435 elastic_newton[77209]: Scheduled mgr update...
Jan 30 23:18:20 np0005603435 podman[77193]: 2026-01-31 04:18:20.659343522 +0000 UTC m=+0.684926679 container died 88e5e0a64f0fc8bc80c30c8e2f83ce5ed06d197168cba2a8104a513c7d64eeeb (image=quay.io/ceph/ceph:v20, name=elastic_newton, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:18:20 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:20 np0005603435 podman[77248]: 2026-01-31 04:18:20.573341812 +0000 UTC m=+0.033560060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:20 np0005603435 systemd[1]: libpod-88e5e0a64f0fc8bc80c30c8e2f83ce5ed06d197168cba2a8104a513c7d64eeeb.scope: Deactivated successfully.
Jan 30 23:18:20 np0005603435 podman[77248]: 2026-01-31 04:18:20.687135874 +0000 UTC m=+0.147354092 container init be27e186685992227e6b1f98ab4a1fdb88a24daff5dc90c756cb62212c57e8e5 (image=quay.io/ceph/ceph:v20, name=competent_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:20 np0005603435 systemd[1]: var-lib-containers-storage-overlay-32da2db96e932ce1e9199314d661038152aebdb1fe63a753a8085d0980cc9ac7-merged.mount: Deactivated successfully.
Jan 30 23:18:20 np0005603435 podman[77248]: 2026-01-31 04:18:20.695967932 +0000 UTC m=+0.156186130 container start be27e186685992227e6b1f98ab4a1fdb88a24daff5dc90c756cb62212c57e8e5 (image=quay.io/ceph/ceph:v20, name=competent_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 30 23:18:20 np0005603435 podman[77248]: 2026-01-31 04:18:20.702997497 +0000 UTC m=+0.163215725 container attach be27e186685992227e6b1f98ab4a1fdb88a24daff5dc90c756cb62212c57e8e5 (image=quay.io/ceph/ceph:v20, name=competent_lederberg, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 30 23:18:20 np0005603435 podman[77193]: 2026-01-31 04:18:20.712573262 +0000 UTC m=+0.738156429 container remove 88e5e0a64f0fc8bc80c30c8e2f83ce5ed06d197168cba2a8104a513c7d64eeeb (image=quay.io/ceph/ceph:v20, name=elastic_newton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:20 np0005603435 systemd[1]: libpod-conmon-88e5e0a64f0fc8bc80c30c8e2f83ce5ed06d197168cba2a8104a513c7d64eeeb.scope: Deactivated successfully.
Jan 30 23:18:20 np0005603435 competent_lederberg[77265]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 30 23:18:20 np0005603435 systemd[1]: libpod-be27e186685992227e6b1f98ab4a1fdb88a24daff5dc90c756cb62212c57e8e5.scope: Deactivated successfully.
Jan 30 23:18:20 np0005603435 podman[77248]: 2026-01-31 04:18:20.779147936 +0000 UTC m=+0.239366204 container died be27e186685992227e6b1f98ab4a1fdb88a24daff5dc90c756cb62212c57e8e5 (image=quay.io/ceph/ceph:v20, name=competent_lederberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:20 np0005603435 systemd[1]: var-lib-containers-storage-overlay-c98c00bd93ffb81145cb31a8d0148399b9cd905cdd9ba66e3d76bab152c1629c-merged.mount: Deactivated successfully.
Jan 30 23:18:20 np0005603435 podman[77282]: 2026-01-31 04:18:20.811500305 +0000 UTC m=+0.075724259 container create c10035482f0fe0c4d8a9511d6b5a3ef98609e8f4fc35fd71a7bd6506f68848a0 (image=quay.io/ceph/ceph:v20, name=determined_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 30 23:18:20 np0005603435 systemd[1]: Started libpod-conmon-c10035482f0fe0c4d8a9511d6b5a3ef98609e8f4fc35fd71a7bd6506f68848a0.scope.
Jan 30 23:18:20 np0005603435 podman[77248]: 2026-01-31 04:18:20.845993026 +0000 UTC m=+0.306211254 container remove be27e186685992227e6b1f98ab4a1fdb88a24daff5dc90c756cb62212c57e8e5 (image=quay.io/ceph/ceph:v20, name=competent_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:20 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:20 np0005603435 systemd[1]: libpod-conmon-be27e186685992227e6b1f98ab4a1fdb88a24daff5dc90c756cb62212c57e8e5.scope: Deactivated successfully.
Jan 30 23:18:20 np0005603435 podman[77282]: 2026-01-31 04:18:20.775715255 +0000 UTC m=+0.039939289 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ca8993e4b09e9d14d6365a42f24077b60bfe663079e87c29a5d499b552829a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ca8993e4b09e9d14d6365a42f24077b60bfe663079e87c29a5d499b552829a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ca8993e4b09e9d14d6365a42f24077b60bfe663079e87c29a5d499b552829a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:20 np0005603435 podman[77282]: 2026-01-31 04:18:20.882216056 +0000 UTC m=+0.146440030 container init c10035482f0fe0c4d8a9511d6b5a3ef98609e8f4fc35fd71a7bd6506f68848a0 (image=quay.io/ceph/ceph:v20, name=determined_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 30 23:18:20 np0005603435 podman[77282]: 2026-01-31 04:18:20.886917447 +0000 UTC m=+0.151141431 container start c10035482f0fe0c4d8a9511d6b5a3ef98609e8f4fc35fd71a7bd6506f68848a0 (image=quay.io/ceph/ceph:v20, name=determined_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 30 23:18:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 30 23:18:20 np0005603435 podman[77282]: 2026-01-31 04:18:20.891765451 +0000 UTC m=+0.155989435 container attach c10035482f0fe0c4d8a9511d6b5a3ef98609e8f4fc35fd71a7bd6506f68848a0 (image=quay.io/ceph/ceph:v20, name=determined_wu, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:20 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:20 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:21 np0005603435 ceph-mon[75307]: Saving service mon spec with placement count:5
Jan 30 23:18:21 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:21 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:21 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 30 23:18:21 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Saving service crash spec with placement *
Jan 30 23:18:21 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 30 23:18:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 30 23:18:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:18:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:21 np0005603435 determined_wu[77309]: Scheduled crash update...
Jan 30 23:18:21 np0005603435 systemd[1]: libpod-c10035482f0fe0c4d8a9511d6b5a3ef98609e8f4fc35fd71a7bd6506f68848a0.scope: Deactivated successfully.
Jan 30 23:18:21 np0005603435 podman[77282]: 2026-01-31 04:18:21.363855629 +0000 UTC m=+0.628079573 container died c10035482f0fe0c4d8a9511d6b5a3ef98609e8f4fc35fd71a7bd6506f68848a0 (image=quay.io/ceph/ceph:v20, name=determined_wu, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 30 23:18:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:21 np0005603435 systemd[1]: var-lib-containers-storage-overlay-00ca8993e4b09e9d14d6365a42f24077b60bfe663079e87c29a5d499b552829a-merged.mount: Deactivated successfully.
Jan 30 23:18:21 np0005603435 podman[77282]: 2026-01-31 04:18:21.399316312 +0000 UTC m=+0.663540276 container remove c10035482f0fe0c4d8a9511d6b5a3ef98609e8f4fc35fd71a7bd6506f68848a0 (image=quay.io/ceph/ceph:v20, name=determined_wu, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 30 23:18:21 np0005603435 systemd[1]: libpod-conmon-c10035482f0fe0c4d8a9511d6b5a3ef98609e8f4fc35fd71a7bd6506f68848a0.scope: Deactivated successfully.
Jan 30 23:18:21 np0005603435 podman[77442]: 2026-01-31 04:18:21.453235558 +0000 UTC m=+0.035403212 container create 277ae56e9a04d1691aa5323280620845463772290c08cfa7b264c28d07d4ac48 (image=quay.io/ceph/ceph:v20, name=nice_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 30 23:18:21 np0005603435 systemd[1]: Started libpod-conmon-277ae56e9a04d1691aa5323280620845463772290c08cfa7b264c28d07d4ac48.scope.
Jan 30 23:18:21 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:21 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ab89c0cad9837233f3595a42f6918c31e16607094c16ddd0bc57f27c8f43c8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:21 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ab89c0cad9837233f3595a42f6918c31e16607094c16ddd0bc57f27c8f43c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:21 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ab89c0cad9837233f3595a42f6918c31e16607094c16ddd0bc57f27c8f43c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:21 np0005603435 podman[77442]: 2026-01-31 04:18:21.521570544 +0000 UTC m=+0.103738218 container init 277ae56e9a04d1691aa5323280620845463772290c08cfa7b264c28d07d4ac48 (image=quay.io/ceph/ceph:v20, name=nice_lumiere, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 30 23:18:21 np0005603435 podman[77442]: 2026-01-31 04:18:21.526807917 +0000 UTC m=+0.108975571 container start 277ae56e9a04d1691aa5323280620845463772290c08cfa7b264c28d07d4ac48 (image=quay.io/ceph/ceph:v20, name=nice_lumiere, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:18:21 np0005603435 podman[77442]: 2026-01-31 04:18:21.530315109 +0000 UTC m=+0.112482783 container attach 277ae56e9a04d1691aa5323280620845463772290c08cfa7b264c28d07d4ac48 (image=quay.io/ceph/ceph:v20, name=nice_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:21 np0005603435 podman[77442]: 2026-01-31 04:18:21.439294831 +0000 UTC m=+0.021462515 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:21 np0005603435 podman[77554]: 2026-01-31 04:18:21.832605609 +0000 UTC m=+0.062763545 container exec 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 30 23:18:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 30 23:18:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4112900365' entity='client.admin' 
Jan 30 23:18:21 np0005603435 systemd[1]: libpod-277ae56e9a04d1691aa5323280620845463772290c08cfa7b264c28d07d4ac48.scope: Deactivated successfully.
Jan 30 23:18:21 np0005603435 podman[77442]: 2026-01-31 04:18:21.919278605 +0000 UTC m=+0.501446269 container died 277ae56e9a04d1691aa5323280620845463772290c08cfa7b264c28d07d4ac48 (image=quay.io/ceph/ceph:v20, name=nice_lumiere, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:21 np0005603435 podman[77554]: 2026-01-31 04:18:21.941043866 +0000 UTC m=+0.171201742 container exec_died 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:18:21 np0005603435 systemd[1]: var-lib-containers-storage-overlay-02ab89c0cad9837233f3595a42f6918c31e16607094c16ddd0bc57f27c8f43c8-merged.mount: Deactivated successfully.
Jan 30 23:18:21 np0005603435 podman[77442]: 2026-01-31 04:18:21.970728404 +0000 UTC m=+0.552896058 container remove 277ae56e9a04d1691aa5323280620845463772290c08cfa7b264c28d07d4ac48 (image=quay.io/ceph/ceph:v20, name=nice_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:21 np0005603435 systemd[1]: libpod-conmon-277ae56e9a04d1691aa5323280620845463772290c08cfa7b264c28d07d4ac48.scope: Deactivated successfully.
Jan 30 23:18:22 np0005603435 podman[77607]: 2026-01-31 04:18:22.027711282 +0000 UTC m=+0.039506639 container create 52a7fa8742a49c10165a6e39b168267c3748361775e48a1dc700e032802c1b2d (image=quay.io/ceph/ceph:v20, name=festive_cerf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 30 23:18:22 np0005603435 systemd[1]: Started libpod-conmon-52a7fa8742a49c10165a6e39b168267c3748361775e48a1dc700e032802c1b2d.scope.
Jan 30 23:18:22 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1809dfd451fb336156cc7c89488b38df8080aa2ad055b4144f3d281adc5aa391/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1809dfd451fb336156cc7c89488b38df8080aa2ad055b4144f3d281adc5aa391/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1809dfd451fb336156cc7c89488b38df8080aa2ad055b4144f3d281adc5aa391/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:22 np0005603435 podman[77607]: 2026-01-31 04:18:22.009577916 +0000 UTC m=+0.021373283 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:22 np0005603435 podman[77607]: 2026-01-31 04:18:22.170999878 +0000 UTC m=+0.182795345 container init 52a7fa8742a49c10165a6e39b168267c3748361775e48a1dc700e032802c1b2d (image=quay.io/ceph/ceph:v20, name=festive_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:22 np0005603435 podman[77607]: 2026-01-31 04:18:22.17621393 +0000 UTC m=+0.188009317 container start 52a7fa8742a49c10165a6e39b168267c3748361775e48a1dc700e032802c1b2d (image=quay.io/ceph/ceph:v20, name=festive_cerf, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 30 23:18:22 np0005603435 podman[77607]: 2026-01-31 04:18:22.186821049 +0000 UTC m=+0.198616626 container attach 52a7fa8742a49c10165a6e39b168267c3748361775e48a1dc700e032802c1b2d (image=quay.io/ceph/ceph:v20, name=festive_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:18:22 np0005603435 ceph-mgr[75599]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 30 23:18:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:18:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:22 np0005603435 ceph-mon[75307]: Saving service mgr spec with placement count:2
Jan 30 23:18:22 np0005603435 ceph-mon[75307]: Saving service crash spec with placement *
Jan 30 23:18:22 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:22 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:22 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/4112900365' entity='client.admin' 
Jan 30 23:18:22 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 30 23:18:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 30 23:18:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:22 np0005603435 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77747 (sysctl)
Jan 30 23:18:22 np0005603435 podman[77607]: 2026-01-31 04:18:22.65262674 +0000 UTC m=+0.664422107 container died 52a7fa8742a49c10165a6e39b168267c3748361775e48a1dc700e032802c1b2d (image=quay.io/ceph/ceph:v20, name=festive_cerf, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:18:22 np0005603435 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 30 23:18:22 np0005603435 systemd[1]: libpod-52a7fa8742a49c10165a6e39b168267c3748361775e48a1dc700e032802c1b2d.scope: Deactivated successfully.
Jan 30 23:18:22 np0005603435 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 30 23:18:22 np0005603435 systemd[1]: var-lib-containers-storage-overlay-1809dfd451fb336156cc7c89488b38df8080aa2ad055b4144f3d281adc5aa391-merged.mount: Deactivated successfully.
Jan 30 23:18:22 np0005603435 podman[77607]: 2026-01-31 04:18:22.696212514 +0000 UTC m=+0.708007871 container remove 52a7fa8742a49c10165a6e39b168267c3748361775e48a1dc700e032802c1b2d (image=quay.io/ceph/ceph:v20, name=festive_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 30 23:18:22 np0005603435 systemd[1]: libpod-conmon-52a7fa8742a49c10165a6e39b168267c3748361775e48a1dc700e032802c1b2d.scope: Deactivated successfully.
Jan 30 23:18:22 np0005603435 podman[77764]: 2026-01-31 04:18:22.749482125 +0000 UTC m=+0.037136893 container create d4baded263213354adbb015e19419cb0bc46c226f23a21ff4fd965dd39fee9fe (image=quay.io/ceph/ceph:v20, name=cranky_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:18:22 np0005603435 systemd[1]: Started libpod-conmon-d4baded263213354adbb015e19419cb0bc46c226f23a21ff4fd965dd39fee9fe.scope.
Jan 30 23:18:22 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a491a5996d2e58868b2bf54ea1276b8f97b06a2015d0d726956db0d51450d46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a491a5996d2e58868b2bf54ea1276b8f97b06a2015d0d726956db0d51450d46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a491a5996d2e58868b2bf54ea1276b8f97b06a2015d0d726956db0d51450d46/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:22 np0005603435 podman[77764]: 2026-01-31 04:18:22.823132475 +0000 UTC m=+0.110787263 container init d4baded263213354adbb015e19419cb0bc46c226f23a21ff4fd965dd39fee9fe (image=quay.io/ceph/ceph:v20, name=cranky_panini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 30 23:18:22 np0005603435 podman[77764]: 2026-01-31 04:18:22.731702597 +0000 UTC m=+0.019357415 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:22 np0005603435 podman[77764]: 2026-01-31 04:18:22.831835239 +0000 UTC m=+0.119490007 container start d4baded263213354adbb015e19419cb0bc46c226f23a21ff4fd965dd39fee9fe (image=quay.io/ceph/ceph:v20, name=cranky_panini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:22 np0005603435 podman[77764]: 2026-01-31 04:18:22.841845674 +0000 UTC m=+0.129500442 container attach d4baded263213354adbb015e19419cb0bc46c226f23a21ff4fd965dd39fee9fe (image=quay.io/ceph/ceph:v20, name=cranky_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 30 23:18:22 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:23 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 30 23:18:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 30 23:18:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:23 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Added label _admin to host compute-0
Jan 30 23:18:23 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 30 23:18:23 np0005603435 cranky_panini[77785]: Added label _admin to host compute-0
Jan 30 23:18:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:18:23 np0005603435 systemd[1]: libpod-d4baded263213354adbb015e19419cb0bc46c226f23a21ff4fd965dd39fee9fe.scope: Deactivated successfully.
Jan 30 23:18:23 np0005603435 podman[77764]: 2026-01-31 04:18:23.233380901 +0000 UTC m=+0.521035669 container died d4baded263213354adbb015e19419cb0bc46c226f23a21ff4fd965dd39fee9fe (image=quay.io/ceph/ceph:v20, name=cranky_panini, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:18:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:23 np0005603435 systemd[1]: var-lib-containers-storage-overlay-9a491a5996d2e58868b2bf54ea1276b8f97b06a2015d0d726956db0d51450d46-merged.mount: Deactivated successfully.
Jan 30 23:18:23 np0005603435 podman[77764]: 2026-01-31 04:18:23.272476319 +0000 UTC m=+0.560131117 container remove d4baded263213354adbb015e19419cb0bc46c226f23a21ff4fd965dd39fee9fe (image=quay.io/ceph/ceph:v20, name=cranky_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 30 23:18:23 np0005603435 systemd[1]: libpod-conmon-d4baded263213354adbb015e19419cb0bc46c226f23a21ff4fd965dd39fee9fe.scope: Deactivated successfully.
Jan 30 23:18:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:23 np0005603435 podman[77927]: 2026-01-31 04:18:23.418480357 +0000 UTC m=+0.122637230 container create 8c4b05c9cf1408da3450c2afc6bee5dcb5f68f688c84181cad95aa02518be56a (image=quay.io/ceph/ceph:v20, name=cranky_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 30 23:18:23 np0005603435 podman[77927]: 2026-01-31 04:18:23.331509086 +0000 UTC m=+0.035665979 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:23 np0005603435 systemd[1]: Started libpod-conmon-8c4b05c9cf1408da3450c2afc6bee5dcb5f68f688c84181cad95aa02518be56a.scope.
Jan 30 23:18:23 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134175ae929fe2b258e549ed36d5260e19b106c41e8464f624f99fb86b17e458/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134175ae929fe2b258e549ed36d5260e19b106c41e8464f624f99fb86b17e458/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134175ae929fe2b258e549ed36d5260e19b106c41e8464f624f99fb86b17e458/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:18:23 np0005603435 podman[77927]: 2026-01-31 04:18:23.517701128 +0000 UTC m=+0.221858031 container init 8c4b05c9cf1408da3450c2afc6bee5dcb5f68f688c84181cad95aa02518be56a (image=quay.io/ceph/ceph:v20, name=cranky_shannon, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:23 np0005603435 podman[77927]: 2026-01-31 04:18:23.52588501 +0000 UTC m=+0.230041923 container start 8c4b05c9cf1408da3450c2afc6bee5dcb5f68f688c84181cad95aa02518be56a (image=quay.io/ceph/ceph:v20, name=cranky_shannon, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:23 np0005603435 podman[77927]: 2026-01-31 04:18:23.529473375 +0000 UTC m=+0.233630268 container attach 8c4b05c9cf1408da3450c2afc6bee5dcb5f68f688c84181cad95aa02518be56a (image=quay.io/ceph/ceph:v20, name=cranky_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 30 23:18:23 np0005603435 podman[77988]: 2026-01-31 04:18:23.608982612 +0000 UTC m=+0.054291766 container create dbdf5468c730f4a38f4901513b11d2a9c497a0399a17b895528ff1c4352a9e8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:23 np0005603435 systemd[1]: Started libpod-conmon-dbdf5468c730f4a38f4901513b11d2a9c497a0399a17b895528ff1c4352a9e8b.scope.
Jan 30 23:18:23 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:23 np0005603435 podman[77988]: 2026-01-31 04:18:23.586422242 +0000 UTC m=+0.031731436 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:23 np0005603435 podman[77988]: 2026-01-31 04:18:23.703580674 +0000 UTC m=+0.148889908 container init dbdf5468c730f4a38f4901513b11d2a9c497a0399a17b895528ff1c4352a9e8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:23 np0005603435 podman[77988]: 2026-01-31 04:18:23.709410241 +0000 UTC m=+0.154719405 container start dbdf5468c730f4a38f4901513b11d2a9c497a0399a17b895528ff1c4352a9e8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:18:23 np0005603435 peaceful_shtern[78023]: 167 167
Jan 30 23:18:23 np0005603435 systemd[1]: libpod-dbdf5468c730f4a38f4901513b11d2a9c497a0399a17b895528ff1c4352a9e8b.scope: Deactivated successfully.
Jan 30 23:18:23 np0005603435 conmon[78023]: conmon dbdf5468c730f4a38f49 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dbdf5468c730f4a38f4901513b11d2a9c497a0399a17b895528ff1c4352a9e8b.scope/container/memory.events
Jan 30 23:18:23 np0005603435 podman[77988]: 2026-01-31 04:18:23.716985439 +0000 UTC m=+0.162294693 container attach dbdf5468c730f4a38f4901513b11d2a9c497a0399a17b895528ff1c4352a9e8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 30 23:18:23 np0005603435 podman[77988]: 2026-01-31 04:18:23.717555132 +0000 UTC m=+0.162864316 container died dbdf5468c730f4a38f4901513b11d2a9c497a0399a17b895528ff1c4352a9e8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:18:23 np0005603435 systemd[1]: var-lib-containers-storage-overlay-48d520b3e33a7bcd793debb088ed214718a0a46a33f879e5b8ff1ab8a83217ab-merged.mount: Deactivated successfully.
Jan 30 23:18:23 np0005603435 podman[77988]: 2026-01-31 04:18:23.772579715 +0000 UTC m=+0.217888869 container remove dbdf5468c730f4a38f4901513b11d2a9c497a0399a17b895528ff1c4352a9e8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 30 23:18:23 np0005603435 systemd[1]: libpod-conmon-dbdf5468c730f4a38f4901513b11d2a9c497a0399a17b895528ff1c4352a9e8b.scope: Deactivated successfully.
Jan 30 23:18:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 30 23:18:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2115134027' entity='client.admin' 
Jan 30 23:18:24 np0005603435 cranky_shannon[77970]: set mgr/dashboard/cluster/status
Jan 30 23:18:24 np0005603435 systemd[1]: libpod-8c4b05c9cf1408da3450c2afc6bee5dcb5f68f688c84181cad95aa02518be56a.scope: Deactivated successfully.
Jan 30 23:18:24 np0005603435 podman[77927]: 2026-01-31 04:18:24.120727752 +0000 UTC m=+0.824884655 container died 8c4b05c9cf1408da3450c2afc6bee5dcb5f68f688c84181cad95aa02518be56a (image=quay.io/ceph/ceph:v20, name=cranky_shannon, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:24 np0005603435 systemd[1]: var-lib-containers-storage-overlay-134175ae929fe2b258e549ed36d5260e19b106c41e8464f624f99fb86b17e458-merged.mount: Deactivated successfully.
Jan 30 23:18:24 np0005603435 podman[77927]: 2026-01-31 04:18:24.199654946 +0000 UTC m=+0.903811859 container remove 8c4b05c9cf1408da3450c2afc6bee5dcb5f68f688c84181cad95aa02518be56a (image=quay.io/ceph/ceph:v20, name=cranky_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:24 np0005603435 systemd[1]: libpod-conmon-8c4b05c9cf1408da3450c2afc6bee5dcb5f68f688c84181cad95aa02518be56a.scope: Deactivated successfully.
Jan 30 23:18:24 np0005603435 ceph-mgr[75599]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 30 23:18:24 np0005603435 systemd[1]: Reloading.
Jan 30 23:18:24 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:18:24 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:18:24 np0005603435 ceph-mon[75307]: Added label _admin to host compute-0
Jan 30 23:18:24 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/2115134027' entity='client.admin' 
Jan 30 23:18:24 np0005603435 podman[78105]: 2026-01-31 04:18:24.642071687 +0000 UTC m=+0.042734094 container create 65d5492736a28208efca2381960663a2efc2f5985f44d91e0c317ac9ddbc4db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:18:24 np0005603435 systemd[1]: Started libpod-conmon-65d5492736a28208efca2381960663a2efc2f5985f44d91e0c317ac9ddbc4db1.scope.
Jan 30 23:18:24 np0005603435 podman[78105]: 2026-01-31 04:18:24.619784134 +0000 UTC m=+0.020446601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:24 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:24 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47abd8159e0ebc7c52a969f05e1e57bc95db697f72bea54b2f56f90f41be713/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:24 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47abd8159e0ebc7c52a969f05e1e57bc95db697f72bea54b2f56f90f41be713/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:24 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47abd8159e0ebc7c52a969f05e1e57bc95db697f72bea54b2f56f90f41be713/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:24 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47abd8159e0ebc7c52a969f05e1e57bc95db697f72bea54b2f56f90f41be713/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:24 np0005603435 podman[78105]: 2026-01-31 04:18:24.75032893 +0000 UTC m=+0.150991397 container init 65d5492736a28208efca2381960663a2efc2f5985f44d91e0c317ac9ddbc4db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_khayyam, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:18:24 np0005603435 podman[78105]: 2026-01-31 04:18:24.758361079 +0000 UTC m=+0.159023506 container start 65d5492736a28208efca2381960663a2efc2f5985f44d91e0c317ac9ddbc4db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:24 np0005603435 podman[78105]: 2026-01-31 04:18:24.763555721 +0000 UTC m=+0.164218148 container attach 65d5492736a28208efca2381960663a2efc2f5985f44d91e0c317ac9ddbc4db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_khayyam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:24 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:24 np0005603435 python3[78152]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:18:25 np0005603435 podman[78156]: 2026-01-31 04:18:25.055515198 +0000 UTC m=+0.047568408 container create 8736cb00b6c6cab50c496b03bdfbfb4dcfa96818d24c4220762ccfa5bf5ca29f (image=quay.io/ceph/ceph:v20, name=fervent_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 30 23:18:25 np0005603435 systemd[1]: Started libpod-conmon-8736cb00b6c6cab50c496b03bdfbfb4dcfa96818d24c4220762ccfa5bf5ca29f.scope.
Jan 30 23:18:25 np0005603435 podman[78156]: 2026-01-31 04:18:25.027325796 +0000 UTC m=+0.019379036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:25 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6a03812e13fd06eea60cad2fc3028bb3e88be30dacaf577365a21fa4f58da2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6a03812e13fd06eea60cad2fc3028bb3e88be30dacaf577365a21fa4f58da2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:25 np0005603435 podman[78156]: 2026-01-31 04:18:25.158579869 +0000 UTC m=+0.150633089 container init 8736cb00b6c6cab50c496b03bdfbfb4dcfa96818d24c4220762ccfa5bf5ca29f (image=quay.io/ceph/ceph:v20, name=fervent_austin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 30 23:18:25 np0005603435 podman[78156]: 2026-01-31 04:18:25.163686129 +0000 UTC m=+0.155739339 container start 8736cb00b6c6cab50c496b03bdfbfb4dcfa96818d24c4220762ccfa5bf5ca29f (image=quay.io/ceph/ceph:v20, name=fervent_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 30 23:18:25 np0005603435 podman[78156]: 2026-01-31 04:18:25.186517615 +0000 UTC m=+0.178570855 container attach 8736cb00b6c6cab50c496b03bdfbfb4dcfa96818d24c4220762ccfa5bf5ca29f (image=quay.io/ceph/ceph:v20, name=fervent_austin, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]: [
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:    {
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:        "available": false,
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:        "being_replaced": false,
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:        "ceph_device_lvm": false,
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:        "lsm_data": {},
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:        "lvs": [],
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:        "path": "/dev/sr0",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:        "rejected_reasons": [
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "Has a FileSystem",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "Insufficient space (<5GB)"
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:        ],
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:        "sys_api": {
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "actuators": null,
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "device_nodes": [
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:                "sr0"
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            ],
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "devname": "sr0",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "human_readable_size": "482.00 KB",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "id_bus": "ata",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "model": "QEMU DVD-ROM",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "nr_requests": "2",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "parent": "/dev/sr0",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "partitions": {},
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "path": "/dev/sr0",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "removable": "1",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "rev": "2.5+",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "ro": "0",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "rotational": "1",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "sas_address": "",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "sas_device_handle": "",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "scheduler_mode": "mq-deadline",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "sectors": 0,
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "sectorsize": "2048",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "size": 493568.0,
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "support_discard": "2048",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "type": "disk",
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:            "vendor": "QEMU"
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:        }
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]:    }
Jan 30 23:18:25 np0005603435 charming_khayyam[78121]: ]
Jan 30 23:18:25 np0005603435 systemd[1]: libpod-65d5492736a28208efca2381960663a2efc2f5985f44d91e0c317ac9ddbc4db1.scope: Deactivated successfully.
Jan 30 23:18:25 np0005603435 podman[78105]: 2026-01-31 04:18:25.252940836 +0000 UTC m=+0.653603253 container died 65d5492736a28208efca2381960663a2efc2f5985f44d91e0c317ac9ddbc4db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_khayyam, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 30 23:18:25 np0005603435 systemd[1]: var-lib-containers-storage-overlay-a47abd8159e0ebc7c52a969f05e1e57bc95db697f72bea54b2f56f90f41be713-merged.mount: Deactivated successfully.
Jan 30 23:18:25 np0005603435 podman[78105]: 2026-01-31 04:18:25.491343135 +0000 UTC m=+0.892005592 container remove 65d5492736a28208efca2381960663a2efc2f5985f44d91e0c317ac9ddbc4db1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_khayyam, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:25 np0005603435 systemd[1]: libpod-conmon-65d5492736a28208efca2381960663a2efc2f5985f44d91e0c317ac9ddbc4db1.scope: Deactivated successfully.
Jan 30 23:18:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:18:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:18:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:18:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:18:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 30 23:18:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 30 23:18:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:18:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:18:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:18:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:18:25 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 30 23:18:25 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 30 23:18:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 30 23:18:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3239621115' entity='client.admin' 
Jan 30 23:18:25 np0005603435 systemd[1]: libpod-8736cb00b6c6cab50c496b03bdfbfb4dcfa96818d24c4220762ccfa5bf5ca29f.scope: Deactivated successfully.
Jan 30 23:18:25 np0005603435 podman[78156]: 2026-01-31 04:18:25.634052707 +0000 UTC m=+0.626105917 container died 8736cb00b6c6cab50c496b03bdfbfb4dcfa96818d24c4220762ccfa5bf5ca29f (image=quay.io/ceph/ceph:v20, name=fervent_austin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 30 23:18:25 np0005603435 systemd[1]: var-lib-containers-storage-overlay-ef6a03812e13fd06eea60cad2fc3028bb3e88be30dacaf577365a21fa4f58da2-merged.mount: Deactivated successfully.
Jan 30 23:18:25 np0005603435 podman[78156]: 2026-01-31 04:18:25.669837788 +0000 UTC m=+0.661890998 container remove 8736cb00b6c6cab50c496b03bdfbfb4dcfa96818d24c4220762ccfa5bf5ca29f (image=quay.io/ceph/ceph:v20, name=fervent_austin, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:18:25 np0005603435 systemd[1]: libpod-conmon-8736cb00b6c6cab50c496b03bdfbfb4dcfa96818d24c4220762ccfa5bf5ca29f.scope: Deactivated successfully.
Jan 30 23:18:26 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/95d2f419-0dd0-56f2-a094-353f8c7597ed/config/ceph.conf
Jan 30 23:18:26 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/95d2f419-0dd0-56f2-a094-353f8c7597ed/config/ceph.conf
Jan 30 23:18:26 np0005603435 ceph-mgr[75599]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 30 23:18:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:26 np0005603435 ceph-mon[75307]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 30 23:18:26 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 30 23:18:26 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 30 23:18:26 np0005603435 ansible-async_wrapper.py[79348]: Invoked with j48200066562 30 /home/zuul/.ansible/tmp/ansible-tmp-1769833106.006205-36710-36772066540280/AnsiballZ_command.py _
Jan 30 23:18:26 np0005603435 ansible-async_wrapper.py[79479]: Starting module and watcher
Jan 30 23:18:26 np0005603435 ansible-async_wrapper.py[79479]: Start watching 79480 (30)
Jan 30 23:18:26 np0005603435 ansible-async_wrapper.py[79480]: Start module (79480)
Jan 30 23:18:26 np0005603435 ansible-async_wrapper.py[79348]: Return async_wrapper task started.
Jan 30 23:18:26 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:26 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:26 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:26 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:26 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 30 23:18:26 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:18:26 np0005603435 ceph-mon[75307]: Updating compute-0:/etc/ceph/ceph.conf
Jan 30 23:18:26 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/3239621115' entity='client.admin' 
Jan 30 23:18:26 np0005603435 ceph-mon[75307]: Updating compute-0:/var/lib/ceph/95d2f419-0dd0-56f2-a094-353f8c7597ed/config/ceph.conf
Jan 30 23:18:26 np0005603435 ceph-mon[75307]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 30 23:18:26 np0005603435 python3[79481]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:18:26 np0005603435 podman[79562]: 2026-01-31 04:18:26.756313277 +0000 UTC m=+0.039302124 container create 98521f09a55ffbfa2a1308d7c8be7b5039b01143965ffa17fa6f3f4e1b2d68bd (image=quay.io/ceph/ceph:v20, name=nice_ptolemy, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:26 np0005603435 systemd[1]: Started libpod-conmon-98521f09a55ffbfa2a1308d7c8be7b5039b01143965ffa17fa6f3f4e1b2d68bd.scope.
Jan 30 23:18:26 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:26 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/373a0698aee1e7b7cb6ad96d5986c7c7123e55c2dadb88c8ed1cede17940f5ef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:26 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/373a0698aee1e7b7cb6ad96d5986c7c7123e55c2dadb88c8ed1cede17940f5ef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:26 np0005603435 podman[79562]: 2026-01-31 04:18:26.833726085 +0000 UTC m=+0.116714932 container init 98521f09a55ffbfa2a1308d7c8be7b5039b01143965ffa17fa6f3f4e1b2d68bd (image=quay.io/ceph/ceph:v20, name=nice_ptolemy, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 30 23:18:26 np0005603435 podman[79562]: 2026-01-31 04:18:26.740414334 +0000 UTC m=+0.023403211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:26 np0005603435 podman[79562]: 2026-01-31 04:18:26.838837625 +0000 UTC m=+0.121826472 container start 98521f09a55ffbfa2a1308d7c8be7b5039b01143965ffa17fa6f3f4e1b2d68bd (image=quay.io/ceph/ceph:v20, name=nice_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:26 np0005603435 podman[79562]: 2026-01-31 04:18:26.841942778 +0000 UTC m=+0.124931655 container attach 98521f09a55ffbfa2a1308d7c8be7b5039b01143965ffa17fa6f3f4e1b2d68bd (image=quay.io/ceph/ceph:v20, name=nice_ptolemy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:26 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/95d2f419-0dd0-56f2-a094-353f8c7597ed/config/ceph.client.admin.keyring
Jan 30 23:18:26 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/95d2f419-0dd0-56f2-a094-353f8c7597ed/config/ceph.client.admin.keyring
Jan 30 23:18:26 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:27 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 30 23:18:27 np0005603435 nice_ptolemy[79622]: 
Jan 30 23:18:27 np0005603435 nice_ptolemy[79622]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 30 23:18:27 np0005603435 systemd[1]: libpod-98521f09a55ffbfa2a1308d7c8be7b5039b01143965ffa17fa6f3f4e1b2d68bd.scope: Deactivated successfully.
Jan 30 23:18:27 np0005603435 podman[79562]: 2026-01-31 04:18:27.249746436 +0000 UTC m=+0.532735283 container died 98521f09a55ffbfa2a1308d7c8be7b5039b01143965ffa17fa6f3f4e1b2d68bd (image=quay.io/ceph/ceph:v20, name=nice_ptolemy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 30 23:18:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:18:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:18:27 np0005603435 systemd[1]: var-lib-containers-storage-overlay-373a0698aee1e7b7cb6ad96d5986c7c7123e55c2dadb88c8ed1cede17940f5ef-merged.mount: Deactivated successfully.
Jan 30 23:18:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:18:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:27 np0005603435 ceph-mgr[75599]: [progress INFO root] update: starting ev d419c837-f5e9-4027-9dea-6b9be4ef92fa (Updating crash deployment (+1 -> 1))
Jan 30 23:18:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 30 23:18:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 30 23:18:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 30 23:18:27 np0005603435 podman[79562]: 2026-01-31 04:18:27.343715613 +0000 UTC m=+0.626704460 container remove 98521f09a55ffbfa2a1308d7c8be7b5039b01143965ffa17fa6f3f4e1b2d68bd (image=quay.io/ceph/ceph:v20, name=nice_ptolemy, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:18:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:18:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:18:27 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 30 23:18:27 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 30 23:18:27 np0005603435 systemd[1]: libpod-conmon-98521f09a55ffbfa2a1308d7c8be7b5039b01143965ffa17fa6f3f4e1b2d68bd.scope: Deactivated successfully.
Jan 30 23:18:27 np0005603435 ansible-async_wrapper.py[79480]: Module complete (79480)
Jan 30 23:18:27 np0005603435 podman[80049]: 2026-01-31 04:18:27.784384454 +0000 UTC m=+0.044408585 container create 7eb4c2171c7835849dab336c20ffef505d2e33c8f2b5e48b388bbe1a5a18b383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cerf, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 30 23:18:27 np0005603435 podman[80049]: 2026-01-31 04:18:27.758603608 +0000 UTC m=+0.018627759 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:27 np0005603435 systemd[1]: Started libpod-conmon-7eb4c2171c7835849dab336c20ffef505d2e33c8f2b5e48b388bbe1a5a18b383.scope.
Jan 30 23:18:27 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:27 np0005603435 podman[80049]: 2026-01-31 04:18:27.912438031 +0000 UTC m=+0.172462182 container init 7eb4c2171c7835849dab336c20ffef505d2e33c8f2b5e48b388bbe1a5a18b383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 30 23:18:27 np0005603435 podman[80049]: 2026-01-31 04:18:27.918440042 +0000 UTC m=+0.178464213 container start 7eb4c2171c7835849dab336c20ffef505d2e33c8f2b5e48b388bbe1a5a18b383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cerf, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 30 23:18:27 np0005603435 podman[80049]: 2026-01-31 04:18:27.921608257 +0000 UTC m=+0.181632378 container attach 7eb4c2171c7835849dab336c20ffef505d2e33c8f2b5e48b388bbe1a5a18b383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:27 np0005603435 wizardly_cerf[80091]: 167 167
Jan 30 23:18:27 np0005603435 systemd[1]: libpod-7eb4c2171c7835849dab336c20ffef505d2e33c8f2b5e48b388bbe1a5a18b383.scope: Deactivated successfully.
Jan 30 23:18:27 np0005603435 podman[80049]: 2026-01-31 04:18:27.922506548 +0000 UTC m=+0.182530679 container died 7eb4c2171c7835849dab336c20ffef505d2e33c8f2b5e48b388bbe1a5a18b383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 30 23:18:27 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5399fe11e446bdefca50b2e49aa7f758a593dfdac7e99818c8ff82e42180d969-merged.mount: Deactivated successfully.
Jan 30 23:18:27 np0005603435 podman[80049]: 2026-01-31 04:18:27.959277331 +0000 UTC m=+0.219301452 container remove 7eb4c2171c7835849dab336c20ffef505d2e33c8f2b5e48b388bbe1a5a18b383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cerf, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:27 np0005603435 systemd[1]: libpod-conmon-7eb4c2171c7835849dab336c20ffef505d2e33c8f2b5e48b388bbe1a5a18b383.scope: Deactivated successfully.
Jan 30 23:18:27 np0005603435 python3[80088]: ansible-ansible.legacy.async_status Invoked with jid=j48200066562.79348 mode=status _async_dir=/root/.ansible_async
Jan 30 23:18:27 np0005603435 systemd[1]: Reloading.
Jan 30 23:18:28 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:18:28 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:18:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:28 np0005603435 systemd[1]: Reloading.
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: Updating compute-0:/var/lib/ceph/95d2f419-0dd0-56f2-a094-353f8c7597ed/config/ceph.client.admin.keyring
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: Deploying daemon crash.compute-0 on compute-0
Jan 30 23:18:28 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:18:28 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:18:28 np0005603435 python3[80194]: ansible-ansible.legacy.async_status Invoked with jid=j48200066562.79348 mode=cleanup _async_dir=/root/.ansible_async
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:18:28 np0005603435 systemd[1]: Starting Ceph crash.compute-0 for 95d2f419-0dd0-56f2-a094-353f8c7597ed...
Jan 30 23:18:28 np0005603435 podman[80284]: 2026-01-31 04:18:28.739091558 +0000 UTC m=+0.057081322 container create 7b5bf0c3978a4dd879333a9f6034753b27e727b8a1d4649f9df8faa4c5b8b80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-crash-compute-0, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:18:28 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc482ddf72af70b62463f890ac86eb9d331efb9544c28da0178d731391124baa/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:28 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc482ddf72af70b62463f890ac86eb9d331efb9544c28da0178d731391124baa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:28 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc482ddf72af70b62463f890ac86eb9d331efb9544c28da0178d731391124baa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:28 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc482ddf72af70b62463f890ac86eb9d331efb9544c28da0178d731391124baa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:28 np0005603435 podman[80284]: 2026-01-31 04:18:28.71407138 +0000 UTC m=+0.032061194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:28 np0005603435 podman[80284]: 2026-01-31 04:18:28.817786206 +0000 UTC m=+0.135776020 container init 7b5bf0c3978a4dd879333a9f6034753b27e727b8a1d4649f9df8faa4c5b8b80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 30 23:18:28 np0005603435 podman[80284]: 2026-01-31 04:18:28.828005466 +0000 UTC m=+0.145995220 container start 7b5bf0c3978a4dd879333a9f6034753b27e727b8a1d4649f9df8faa4c5b8b80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-crash-compute-0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:28 np0005603435 bash[80284]: 7b5bf0c3978a4dd879333a9f6034753b27e727b8a1d4649f9df8faa4c5b8b80b
Jan 30 23:18:28 np0005603435 systemd[1]: Started Ceph crash.compute-0 for 95d2f419-0dd0-56f2-a094-353f8c7597ed.
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:18:28 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-crash-compute-0[80326]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 30 23:18:28 np0005603435 python3[80323]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:28 np0005603435 ceph-mgr[75599]: [progress INFO root] complete: finished ev d419c837-f5e9-4027-9dea-6b9be4ef92fa (Updating crash deployment (+1 -> 1))
Jan 30 23:18:28 np0005603435 ceph-mgr[75599]: [progress INFO root] Completed event d419c837-f5e9-4027-9dea-6b9be4ef92fa (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 30 23:18:28 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:28 np0005603435 ceph-mgr[75599]: [progress INFO root] update: starting ev ccabe0d2-49dc-4b08-8cf0-7c44c6e61436 (Updating mgr deployment (+1 -> 2))
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.uknvyv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.uknvyv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.uknvyv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "mgr services"} : dispatch
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:18:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:18:28 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.uknvyv on compute-0
Jan 30 23:18:28 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.uknvyv on compute-0
Jan 30 23:18:29 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-crash-compute-0[80326]: 2026-01-31T04:18:29.016+0000 7ffa608cc640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 30 23:18:29 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-crash-compute-0[80326]: 2026-01-31T04:18:29.016+0000 7ffa608cc640 -1 AuthRegistry(0x7ffa58052d90) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 30 23:18:29 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-crash-compute-0[80326]: 2026-01-31T04:18:29.017+0000 7ffa608cc640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 30 23:18:29 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-crash-compute-0[80326]: 2026-01-31T04:18:29.017+0000 7ffa608cc640 -1 AuthRegistry(0x7ffa608cafe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 30 23:18:29 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-crash-compute-0[80326]: 2026-01-31T04:18:29.017+0000 7ffa5e641640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 30 23:18:29 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-crash-compute-0[80326]: 2026-01-31T04:18:29.017+0000 7ffa608cc640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 30 23:18:29 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-crash-compute-0[80326]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 30 23:18:29 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-crash-compute-0[80326]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 30 23:18:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.uknvyv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 30 23:18:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.uknvyv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 30 23:18:29 np0005603435 python3[80427]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:18:29 np0005603435 podman[80463]: 2026-01-31 04:18:29.40973481 +0000 UTC m=+0.041916346 container create a249b6a7a0da60c6c4332633726fe1bcdeb86d3066608f68d886138b506d98ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Jan 30 23:18:29 np0005603435 systemd[1]: Started libpod-conmon-a249b6a7a0da60c6c4332633726fe1bcdeb86d3066608f68d886138b506d98ac.scope.
Jan 30 23:18:29 np0005603435 podman[80477]: 2026-01-31 04:18:29.451979302 +0000 UTC m=+0.043174125 container create 7762a19015d95bf1bd08da200953efe4966d9f81478111f128a8116bbbeca168 (image=quay.io/ceph/ceph:v20, name=vibrant_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:29 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:29 np0005603435 podman[80463]: 2026-01-31 04:18:29.386109925 +0000 UTC m=+0.018291511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:29 np0005603435 systemd[1]: Started libpod-conmon-7762a19015d95bf1bd08da200953efe4966d9f81478111f128a8116bbbeca168.scope.
Jan 30 23:18:29 np0005603435 podman[80463]: 2026-01-31 04:18:29.497618584 +0000 UTC m=+0.129800210 container init a249b6a7a0da60c6c4332633726fe1bcdeb86d3066608f68d886138b506d98ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_keldysh, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 30 23:18:29 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:29 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcbe59e7f17410a20dd2dde6a2322c6998f8752f53c7b513c45a31eb9f04dd7b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:29 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcbe59e7f17410a20dd2dde6a2322c6998f8752f53c7b513c45a31eb9f04dd7b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:29 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcbe59e7f17410a20dd2dde6a2322c6998f8752f53c7b513c45a31eb9f04dd7b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:29 np0005603435 podman[80463]: 2026-01-31 04:18:29.503565904 +0000 UTC m=+0.135747430 container start a249b6a7a0da60c6c4332633726fe1bcdeb86d3066608f68d886138b506d98ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:29 np0005603435 podman[80463]: 2026-01-31 04:18:29.510944457 +0000 UTC m=+0.143126163 container attach a249b6a7a0da60c6c4332633726fe1bcdeb86d3066608f68d886138b506d98ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_keldysh, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:29 np0005603435 elastic_keldysh[80492]: 167 167
Jan 30 23:18:29 np0005603435 podman[80477]: 2026-01-31 04:18:29.429504204 +0000 UTC m=+0.020699077 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:29 np0005603435 systemd[1]: libpod-a249b6a7a0da60c6c4332633726fe1bcdeb86d3066608f68d886138b506d98ac.scope: Deactivated successfully.
Jan 30 23:18:29 np0005603435 podman[80463]: 2026-01-31 04:18:29.527859855 +0000 UTC m=+0.160041401 container died a249b6a7a0da60c6c4332633726fe1bcdeb86d3066608f68d886138b506d98ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_keldysh, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:18:29 np0005603435 podman[80477]: 2026-01-31 04:18:29.574677784 +0000 UTC m=+0.165872607 container init 7762a19015d95bf1bd08da200953efe4966d9f81478111f128a8116bbbeca168 (image=quay.io/ceph/ceph:v20, name=vibrant_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 30 23:18:29 np0005603435 podman[80477]: 2026-01-31 04:18:29.580852549 +0000 UTC m=+0.172047402 container start 7762a19015d95bf1bd08da200953efe4966d9f81478111f128a8116bbbeca168 (image=quay.io/ceph/ceph:v20, name=vibrant_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 30 23:18:29 np0005603435 podman[80477]: 2026-01-31 04:18:29.596252471 +0000 UTC m=+0.187447304 container attach 7762a19015d95bf1bd08da200953efe4966d9f81478111f128a8116bbbeca168 (image=quay.io/ceph/ceph:v20, name=vibrant_mcclintock, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 30 23:18:29 np0005603435 systemd[1]: var-lib-containers-storage-overlay-a64d20b062b0ca3c8837c05eccde24c3d78a40fbed59873c7113748f7d068e6e-merged.mount: Deactivated successfully.
Jan 30 23:18:29 np0005603435 podman[80463]: 2026-01-31 04:18:29.740836777 +0000 UTC m=+0.373018323 container remove a249b6a7a0da60c6c4332633726fe1bcdeb86d3066608f68d886138b506d98ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_keldysh, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:18:29 np0005603435 systemd[1]: libpod-conmon-a249b6a7a0da60c6c4332633726fe1bcdeb86d3066608f68d886138b506d98ac.scope: Deactivated successfully.
Jan 30 23:18:29 np0005603435 systemd[1]: Reloading.
Jan 30 23:18:29 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:18:29 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:18:29 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 30 23:18:29 np0005603435 vibrant_mcclintock[80497]: 
Jan 30 23:18:29 np0005603435 vibrant_mcclintock[80497]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 30 23:18:30 np0005603435 podman[80573]: 2026-01-31 04:18:30.039117373 +0000 UTC m=+0.024695981 container died 7762a19015d95bf1bd08da200953efe4966d9f81478111f128a8116bbbeca168 (image=quay.io/ceph/ceph:v20, name=vibrant_mcclintock, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 30 23:18:30 np0005603435 systemd[1]: libpod-7762a19015d95bf1bd08da200953efe4966d9f81478111f128a8116bbbeca168.scope: Deactivated successfully.
Jan 30 23:18:30 np0005603435 systemd[1]: var-lib-containers-storage-overlay-dcbe59e7f17410a20dd2dde6a2322c6998f8752f53c7b513c45a31eb9f04dd7b-merged.mount: Deactivated successfully.
Jan 30 23:18:30 np0005603435 podman[80573]: 2026-01-31 04:18:30.12071281 +0000 UTC m=+0.106291458 container remove 7762a19015d95bf1bd08da200953efe4966d9f81478111f128a8116bbbeca168 (image=quay.io/ceph/ceph:v20, name=vibrant_mcclintock, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:18:30 np0005603435 systemd[1]: libpod-conmon-7762a19015d95bf1bd08da200953efe4966d9f81478111f128a8116bbbeca168.scope: Deactivated successfully.
Jan 30 23:18:30 np0005603435 systemd[1]: Reloading.
Jan 30 23:18:30 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:18:30 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:18:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:30 np0005603435 systemd[1]: Starting Ceph mgr.compute-0.uknvyv for 95d2f419-0dd0-56f2-a094-353f8c7597ed...
Jan 30 23:18:30 np0005603435 ceph-mon[75307]: Deploying daemon mgr.compute-0.uknvyv on compute-0
Jan 30 23:18:30 np0005603435 python3[80659]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:18:30 np0005603435 podman[80706]: 2026-01-31 04:18:30.665037294 +0000 UTC m=+0.083275087 container create bd0b9558020bad42b45589f62219c698b58ed75204bf00c3b27c884dc2e25f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-uknvyv, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 30 23:18:30 np0005603435 podman[80706]: 2026-01-31 04:18:30.616650087 +0000 UTC m=+0.034887860 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b27b2120b774c687ce013ed379997b014dad9b191b17143a3183dc1cc33ddaaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b27b2120b774c687ce013ed379997b014dad9b191b17143a3183dc1cc33ddaaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b27b2120b774c687ce013ed379997b014dad9b191b17143a3183dc1cc33ddaaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b27b2120b774c687ce013ed379997b014dad9b191b17143a3183dc1cc33ddaaa/merged/var/lib/ceph/mgr/ceph-compute-0.uknvyv supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:30 np0005603435 podman[80712]: 2026-01-31 04:18:30.635666784 +0000 UTC m=+0.033591740 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:30 np0005603435 podman[80712]: 2026-01-31 04:18:30.739249077 +0000 UTC m=+0.137174003 container create 306724406aeb0744210836cd14fe3f3c4d25426115e0fac27900eefa1bb6326b (image=quay.io/ceph/ceph:v20, name=youthful_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 30 23:18:30 np0005603435 podman[80706]: 2026-01-31 04:18:30.787151092 +0000 UTC m=+0.205388935 container init bd0b9558020bad42b45589f62219c698b58ed75204bf00c3b27c884dc2e25f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-uknvyv, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:30 np0005603435 podman[80706]: 2026-01-31 04:18:30.793676105 +0000 UTC m=+0.211913898 container start bd0b9558020bad42b45589f62219c698b58ed75204bf00c3b27c884dc2e25f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-uknvyv, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 30 23:18:30 np0005603435 bash[80706]: bd0b9558020bad42b45589f62219c698b58ed75204bf00c3b27c884dc2e25f1e
Jan 30 23:18:30 np0005603435 systemd[1]: Started Ceph mgr.compute-0.uknvyv for 95d2f419-0dd0-56f2-a094-353f8c7597ed.
Jan 30 23:18:30 np0005603435 ceph-mgr[80738]: set uid:gid to 167:167 (ceph:ceph)
Jan 30 23:18:30 np0005603435 ceph-mgr[80738]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 30 23:18:30 np0005603435 ceph-mgr[80738]: pidfile_write: ignore empty --pid-file
Jan 30 23:18:30 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'alerts'
Jan 30 23:18:30 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:30 np0005603435 systemd[1]: Started libpod-conmon-306724406aeb0744210836cd14fe3f3c4d25426115e0fac27900eefa1bb6326b.scope.
Jan 30 23:18:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:18:30 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/205e05af62cb66420dcdffef4589618dabfaefc23a77fc125069c45ef68c5fcf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/205e05af62cb66420dcdffef4589618dabfaefc23a77fc125069c45ef68c5fcf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/205e05af62cb66420dcdffef4589618dabfaefc23a77fc125069c45ef68c5fcf/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:18:31 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'balancer'
Jan 30 23:18:31 np0005603435 podman[80712]: 2026-01-31 04:18:31.053513118 +0000 UTC m=+0.451438004 container init 306724406aeb0744210836cd14fe3f3c4d25426115e0fac27900eefa1bb6326b (image=quay.io/ceph/ceph:v20, name=youthful_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 30 23:18:31 np0005603435 podman[80712]: 2026-01-31 04:18:31.062720945 +0000 UTC m=+0.460645841 container start 306724406aeb0744210836cd14fe3f3c4d25426115e0fac27900eefa1bb6326b (image=quay.io/ceph/ceph:v20, name=youthful_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:18:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 30 23:18:31 np0005603435 podman[80712]: 2026-01-31 04:18:31.133342613 +0000 UTC m=+0.531267529 container attach 306724406aeb0744210836cd14fe3f3c4d25426115e0fac27900eefa1bb6326b (image=quay.io/ceph/ceph:v20, name=youthful_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 30 23:18:31 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'cephadm'
Jan 30 23:18:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:31 np0005603435 ceph-mgr[75599]: [progress INFO root] complete: finished ev ccabe0d2-49dc-4b08-8cf0-7c44c6e61436 (Updating mgr deployment (+1 -> 2))
Jan 30 23:18:31 np0005603435 ceph-mgr[75599]: [progress INFO root] Completed event ccabe0d2-49dc-4b08-8cf0-7c44c6e61436 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Jan 30 23:18:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 30 23:18:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 30 23:18:31 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:31 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:31 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:31 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/179659505' entity='client.admin' 
Jan 30 23:18:31 np0005603435 systemd[1]: libpod-306724406aeb0744210836cd14fe3f3c4d25426115e0fac27900eefa1bb6326b.scope: Deactivated successfully.
Jan 30 23:18:31 np0005603435 podman[80712]: 2026-01-31 04:18:31.528067935 +0000 UTC m=+0.925992841 container died 306724406aeb0744210836cd14fe3f3c4d25426115e0fac27900eefa1bb6326b (image=quay.io/ceph/ceph:v20, name=youthful_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:18:31 np0005603435 ansible-async_wrapper.py[79479]: Done in kid B.
Jan 30 23:18:31 np0005603435 systemd[1]: var-lib-containers-storage-overlay-205e05af62cb66420dcdffef4589618dabfaefc23a77fc125069c45ef68c5fcf-merged.mount: Deactivated successfully.
Jan 30 23:18:31 np0005603435 podman[80712]: 2026-01-31 04:18:31.634872103 +0000 UTC m=+1.032797009 container remove 306724406aeb0744210836cd14fe3f3c4d25426115e0fac27900eefa1bb6326b (image=quay.io/ceph/ceph:v20, name=youthful_keller, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:18:31 np0005603435 systemd[1]: libpod-conmon-306724406aeb0744210836cd14fe3f3c4d25426115e0fac27900eefa1bb6326b.scope: Deactivated successfully.
Jan 30 23:18:31 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'crash'
Jan 30 23:18:31 np0005603435 podman[80954]: 2026-01-31 04:18:31.845400758 +0000 UTC m=+0.048111001 container exec 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 30 23:18:31 np0005603435 ceph-mgr[75599]: [progress INFO root] Writing back 2 completed events
Jan 30 23:18:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 30 23:18:31 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'dashboard'
Jan 30 23:18:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:31 np0005603435 python3[80962]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:18:31 np0005603435 podman[80954]: 2026-01-31 04:18:31.953583209 +0000 UTC m=+0.156293432 container exec_died 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 30 23:18:31 np0005603435 podman[80977]: 2026-01-31 04:18:31.973138849 +0000 UTC m=+0.035251009 container create 65720f6b98e36fbd77864dbd1c63bcdd0f93e3950eaea072f2666be6a2911f81 (image=quay.io/ceph/ceph:v20, name=kind_boyd, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 30 23:18:31 np0005603435 systemd[1]: Started libpod-conmon-65720f6b98e36fbd77864dbd1c63bcdd0f93e3950eaea072f2666be6a2911f81.scope.
Jan 30 23:18:32 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:32 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aff5d38be248c6f41c2face40d47157e9b03b9dad3bcf66d41e8f9a55af3f33b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:32 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aff5d38be248c6f41c2face40d47157e9b03b9dad3bcf66d41e8f9a55af3f33b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:32 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aff5d38be248c6f41c2face40d47157e9b03b9dad3bcf66d41e8f9a55af3f33b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:32 np0005603435 podman[80977]: 2026-01-31 04:18:32.042070068 +0000 UTC m=+0.104182268 container init 65720f6b98e36fbd77864dbd1c63bcdd0f93e3950eaea072f2666be6a2911f81 (image=quay.io/ceph/ceph:v20, name=kind_boyd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:32 np0005603435 podman[80977]: 2026-01-31 04:18:32.046648905 +0000 UTC m=+0.108761075 container start 65720f6b98e36fbd77864dbd1c63bcdd0f93e3950eaea072f2666be6a2911f81 (image=quay.io/ceph/ceph:v20, name=kind_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:32 np0005603435 podman[80977]: 2026-01-31 04:18:32.049063942 +0000 UTC m=+0.111176122 container attach 65720f6b98e36fbd77864dbd1c63bcdd0f93e3950eaea072f2666be6a2911f81 (image=quay.io/ceph/ceph:v20, name=kind_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 30 23:18:32 np0005603435 podman[80977]: 2026-01-31 04:18:31.958831582 +0000 UTC m=+0.020943772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:32 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 30 23:18:32 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:18:32 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 30 23:18:32 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3122833667' entity='client.admin' 
Jan 30 23:18:32 np0005603435 systemd[1]: libpod-65720f6b98e36fbd77864dbd1c63bcdd0f93e3950eaea072f2666be6a2911f81.scope: Deactivated successfully.
Jan 30 23:18:32 np0005603435 podman[81158]: 2026-01-31 04:18:32.474674769 +0000 UTC m=+0.020904912 container died 65720f6b98e36fbd77864dbd1c63bcdd0f93e3950eaea072f2666be6a2911f81 (image=quay.io/ceph/ceph:v20, name=kind_boyd, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 30 23:18:32 np0005603435 systemd[1]: var-lib-containers-storage-overlay-aff5d38be248c6f41c2face40d47157e9b03b9dad3bcf66d41e8f9a55af3f33b-merged.mount: Deactivated successfully.
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/179659505' entity='client.admin' 
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/3122833667' entity='client.admin' 
Jan 30 23:18:32 np0005603435 podman[81158]: 2026-01-31 04:18:32.512351774 +0000 UTC m=+0.058581917 container remove 65720f6b98e36fbd77864dbd1c63bcdd0f93e3950eaea072f2666be6a2911f81 (image=quay.io/ceph/ceph:v20, name=kind_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:32 np0005603435 systemd[1]: libpod-conmon-65720f6b98e36fbd77864dbd1c63bcdd0f93e3950eaea072f2666be6a2911f81.scope: Deactivated successfully.
Jan 30 23:18:32 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'devicehealth'
Jan 30 23:18:32 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'diskprediction_local'
Jan 30 23:18:32 np0005603435 podman[81227]: 2026-01-31 04:18:32.741756222 +0000 UTC m=+0.036934369 container create 4b3594663c3fd98ddf4ed4b1bab7f672b11b40df7d82f33d2a8c141fc929a38b (image=quay.io/ceph/ceph:v20, name=vigilant_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Jan 30 23:18:32 np0005603435 systemd[1]: Started libpod-conmon-4b3594663c3fd98ddf4ed4b1bab7f672b11b40df7d82f33d2a8c141fc929a38b.scope.
Jan 30 23:18:32 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:32 np0005603435 podman[81227]: 2026-01-31 04:18:32.806319288 +0000 UTC m=+0.101497435 container init 4b3594663c3fd98ddf4ed4b1bab7f672b11b40df7d82f33d2a8c141fc929a38b (image=quay.io/ceph/ceph:v20, name=vigilant_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:32 np0005603435 podman[81227]: 2026-01-31 04:18:32.811995102 +0000 UTC m=+0.107173249 container start 4b3594663c3fd98ddf4ed4b1bab7f672b11b40df7d82f33d2a8c141fc929a38b (image=quay.io/ceph/ceph:v20, name=vigilant_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:18:32 np0005603435 podman[81227]: 2026-01-31 04:18:32.815511474 +0000 UTC m=+0.110689631 container attach 4b3594663c3fd98ddf4ed4b1bab7f672b11b40df7d82f33d2a8c141fc929a38b (image=quay.io/ceph/ceph:v20, name=vigilant_gates, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:32 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-uknvyv[80734]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 30 23:18:32 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-uknvyv[80734]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 30 23:18:32 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-uknvyv[80734]:  from numpy import show_config as show_numpy_config
Jan 30 23:18:32 np0005603435 vigilant_gates[81256]: 167 167
Jan 30 23:18:32 np0005603435 systemd[1]: libpod-4b3594663c3fd98ddf4ed4b1bab7f672b11b40df7d82f33d2a8c141fc929a38b.scope: Deactivated successfully.
Jan 30 23:18:32 np0005603435 podman[81227]: 2026-01-31 04:18:32.817668625 +0000 UTC m=+0.112846792 container died 4b3594663c3fd98ddf4ed4b1bab7f672b11b40df7d82f33d2a8c141fc929a38b (image=quay.io/ceph/ceph:v20, name=vigilant_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:18:32 np0005603435 podman[81227]: 2026-01-31 04:18:32.726269708 +0000 UTC m=+0.021447845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:32 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'influx'
Jan 30 23:18:32 np0005603435 systemd[1]: var-lib-containers-storage-overlay-f2034fcfeec110d594037088e599af814457878af64d2b7efd8bb912ed5100ff-merged.mount: Deactivated successfully.
Jan 30 23:18:32 np0005603435 podman[81227]: 2026-01-31 04:18:32.853478826 +0000 UTC m=+0.148657003 container remove 4b3594663c3fd98ddf4ed4b1bab7f672b11b40df7d82f33d2a8c141fc929a38b (image=quay.io/ceph/ceph:v20, name=vigilant_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 30 23:18:32 np0005603435 systemd[1]: libpod-conmon-4b3594663c3fd98ddf4ed4b1bab7f672b11b40df7d82f33d2a8c141fc929a38b.scope: Deactivated successfully.
Jan 30 23:18:32 np0005603435 python3[81253]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:18:32 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'insights'
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:18:32 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:32 np0005603435 podman[81273]: 2026-01-31 04:18:32.942592239 +0000 UTC m=+0.058828333 container create d384c5679437702babde0ad58574e3610fe4258fb06a0b1af5ed896b8ebeb8e5 (image=quay.io/ceph/ceph:v20, name=sweet_banzai, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:18:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:18:32 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'iostat'
Jan 30 23:18:32 np0005603435 podman[81273]: 2026-01-31 04:18:32.901265048 +0000 UTC m=+0.017501152 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:33 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.wyngmr (unknown last config time)...
Jan 30 23:18:33 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.wyngmr (unknown last config time)...
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.wyngmr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.wyngmr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "mgr services"} : dispatch
Jan 30 23:18:33 np0005603435 systemd[1]: Started libpod-conmon-d384c5679437702babde0ad58574e3610fe4258fb06a0b1af5ed896b8ebeb8e5.scope.
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:18:33 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.wyngmr on compute-0
Jan 30 23:18:33 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.wyngmr on compute-0
Jan 30 23:18:33 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'k8sevents'
Jan 30 23:18:33 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:33 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1667168c6f37d0c8be5b947a1dfa8011bfb3c4b0cbf80233cdfd6c0e0b64079/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:33 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1667168c6f37d0c8be5b947a1dfa8011bfb3c4b0cbf80233cdfd6c0e0b64079/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:33 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1667168c6f37d0c8be5b947a1dfa8011bfb3c4b0cbf80233cdfd6c0e0b64079/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:33 np0005603435 podman[81273]: 2026-01-31 04:18:33.047182136 +0000 UTC m=+0.163418210 container init d384c5679437702babde0ad58574e3610fe4258fb06a0b1af5ed896b8ebeb8e5 (image=quay.io/ceph/ceph:v20, name=sweet_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:33 np0005603435 podman[81273]: 2026-01-31 04:18:33.051846075 +0000 UTC m=+0.168082149 container start d384c5679437702babde0ad58574e3610fe4258fb06a0b1af5ed896b8ebeb8e5 (image=quay.io/ceph/ceph:v20, name=sweet_banzai, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 30 23:18:33 np0005603435 podman[81273]: 2026-01-31 04:18:33.056547216 +0000 UTC m=+0.172783320 container attach d384c5679437702babde0ad58574e3610fe4258fb06a0b1af5ed896b8ebeb8e5 (image=quay.io/ceph/ceph:v20, name=sweet_banzai, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 30 23:18:33 np0005603435 podman[81376]: 2026-01-31 04:18:33.328588056 +0000 UTC m=+0.034384659 container create 29a3dc66cca6f95bd151cde1e852db8c097e5fa43b620bb5b11a74194d932984 (image=quay.io/ceph/ceph:v20, name=affectionate_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 30 23:18:33 np0005603435 systemd[1]: Started libpod-conmon-29a3dc66cca6f95bd151cde1e852db8c097e5fa43b620bb5b11a74194d932984.scope.
Jan 30 23:18:33 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'localpool'
Jan 30 23:18:33 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:33 np0005603435 podman[81376]: 2026-01-31 04:18:33.388107523 +0000 UTC m=+0.093904156 container init 29a3dc66cca6f95bd151cde1e852db8c097e5fa43b620bb5b11a74194d932984 (image=quay.io/ceph/ceph:v20, name=affectionate_brown, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:33 np0005603435 podman[81376]: 2026-01-31 04:18:33.392843925 +0000 UTC m=+0.098640528 container start 29a3dc66cca6f95bd151cde1e852db8c097e5fa43b620bb5b11a74194d932984 (image=quay.io/ceph/ceph:v20, name=affectionate_brown, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 30 23:18:33 np0005603435 affectionate_brown[81392]: 167 167
Jan 30 23:18:33 np0005603435 systemd[1]: libpod-29a3dc66cca6f95bd151cde1e852db8c097e5fa43b620bb5b11a74194d932984.scope: Deactivated successfully.
Jan 30 23:18:33 np0005603435 podman[81376]: 2026-01-31 04:18:33.396351657 +0000 UTC m=+0.102148270 container attach 29a3dc66cca6f95bd151cde1e852db8c097e5fa43b620bb5b11a74194d932984 (image=quay.io/ceph/ceph:v20, name=affectionate_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:18:33 np0005603435 podman[81376]: 2026-01-31 04:18:33.39691403 +0000 UTC m=+0.102710633 container died 29a3dc66cca6f95bd151cde1e852db8c097e5fa43b620bb5b11a74194d932984 (image=quay.io/ceph/ceph:v20, name=affectionate_brown, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:33 np0005603435 podman[81376]: 2026-01-31 04:18:33.313366108 +0000 UTC m=+0.019162741 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:33 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'mds_autoscaler'
Jan 30 23:18:33 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5e4b8fe83b32b51fdb882b54ac3b0ab0dd63eb0f9df9f1547ff0613b9ba1f0b6-merged.mount: Deactivated successfully.
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2665442639' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:18:33 np0005603435 podman[81376]: 2026-01-31 04:18:33.526134915 +0000 UTC m=+0.231931548 container remove 29a3dc66cca6f95bd151cde1e852db8c097e5fa43b620bb5b11a74194d932984 (image=quay.io/ceph/ceph:v20, name=affectionate_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:18:33 np0005603435 systemd[1]: libpod-conmon-29a3dc66cca6f95bd151cde1e852db8c097e5fa43b620bb5b11a74194d932984.scope: Deactivated successfully.
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:33 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'mirroring'
Jan 30 23:18:33 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'nfs'
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: Reconfiguring mgr.compute-0.wyngmr (unknown last config time)...
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.wyngmr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: Reconfiguring daemon mgr.compute-0.wyngmr on compute-0
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/2665442639' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:33 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:33 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'orchestrator'
Jan 30 23:18:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 30 23:18:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 30 23:18:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2665442639' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 30 23:18:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 30 23:18:34 np0005603435 sweet_banzai[81287]: set require_min_compat_client to mimic
Jan 30 23:18:34 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 30 23:18:34 np0005603435 systemd[1]: libpod-d384c5679437702babde0ad58574e3610fe4258fb06a0b1af5ed896b8ebeb8e5.scope: Deactivated successfully.
Jan 30 23:18:34 np0005603435 podman[81273]: 2026-01-31 04:18:34.038375427 +0000 UTC m=+1.154611511 container died d384c5679437702babde0ad58574e3610fe4258fb06a0b1af5ed896b8ebeb8e5 (image=quay.io/ceph/ceph:v20, name=sweet_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:18:34 np0005603435 systemd[1]: var-lib-containers-storage-overlay-e1667168c6f37d0c8be5b947a1dfa8011bfb3c4b0cbf80233cdfd6c0e0b64079-merged.mount: Deactivated successfully.
Jan 30 23:18:34 np0005603435 podman[81273]: 2026-01-31 04:18:34.08279014 +0000 UTC m=+1.199026224 container remove d384c5679437702babde0ad58574e3610fe4258fb06a0b1af5ed896b8ebeb8e5 (image=quay.io/ceph/ceph:v20, name=sweet_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 30 23:18:34 np0005603435 systemd[1]: libpod-conmon-d384c5679437702babde0ad58574e3610fe4258fb06a0b1af5ed896b8ebeb8e5.scope: Deactivated successfully.
Jan 30 23:18:34 np0005603435 podman[81505]: 2026-01-31 04:18:34.140774391 +0000 UTC m=+0.081046633 container exec 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:18:34 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'osd_perf_query'
Jan 30 23:18:34 np0005603435 podman[81505]: 2026-01-31 04:18:34.252510516 +0000 UTC m=+0.192782748 container exec_died 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 30 23:18:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:34 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'osd_support'
Jan 30 23:18:34 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'pg_autoscaler'
Jan 30 23:18:34 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'progress'
Jan 30 23:18:34 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'prometheus'
Jan 30 23:18:34 np0005603435 python3[81631]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:18:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:18:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:18:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:18:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:18:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:18:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:18:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:18:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:34 np0005603435 podman[81657]: 2026-01-31 04:18:34.690613206 +0000 UTC m=+0.040142354 container create 46f77b853b27c7ae473c603f1c7fb711d5c3f610ed7dafa304d6af0a45cccf6f (image=quay.io/ceph/ceph:v20, name=inspiring_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:18:34 np0005603435 systemd[1]: Started libpod-conmon-46f77b853b27c7ae473c603f1c7fb711d5c3f610ed7dafa304d6af0a45cccf6f.scope.
Jan 30 23:18:34 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:34 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bc24bcb73d8c3ba6c60eef94cd68ea54267bf1ec4160afe3182c3eaf9e832f1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:34 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bc24bcb73d8c3ba6c60eef94cd68ea54267bf1ec4160afe3182c3eaf9e832f1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:34 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bc24bcb73d8c3ba6c60eef94cd68ea54267bf1ec4160afe3182c3eaf9e832f1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:34 np0005603435 podman[81657]: 2026-01-31 04:18:34.672863749 +0000 UTC m=+0.022392917 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:34 np0005603435 podman[81657]: 2026-01-31 04:18:34.771567277 +0000 UTC m=+0.121096445 container init 46f77b853b27c7ae473c603f1c7fb711d5c3f610ed7dafa304d6af0a45cccf6f (image=quay.io/ceph/ceph:v20, name=inspiring_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 30 23:18:34 np0005603435 podman[81657]: 2026-01-31 04:18:34.777290952 +0000 UTC m=+0.126820100 container start 46f77b853b27c7ae473c603f1c7fb711d5c3f610ed7dafa304d6af0a45cccf6f (image=quay.io/ceph/ceph:v20, name=inspiring_fermat, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:18:34 np0005603435 podman[81657]: 2026-01-31 04:18:34.781138282 +0000 UTC m=+0.130667460 container attach 46f77b853b27c7ae473c603f1c7fb711d5c3f610ed7dafa304d6af0a45cccf6f (image=quay.io/ceph/ceph:v20, name=inspiring_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 30 23:18:34 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'rbd_support'
Jan 30 23:18:34 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:34 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'rgw'
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/2665442639' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:35 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'rook'
Jan 30 23:18:35 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:35 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Added host compute-0
Jan 30 23:18:35 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 30 23:18:35 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Saving service mon spec with placement compute-0
Jan 30 23:18:35 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:35 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Jan 30 23:18:35 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:35 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 30 23:18:35 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 30 23:18:35 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Jan 30 23:18:35 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:35 np0005603435 inspiring_fermat[81695]: Added host 'compute-0' with addr '192.168.122.100'
Jan 30 23:18:35 np0005603435 inspiring_fermat[81695]: Scheduled mon update...
Jan 30 23:18:35 np0005603435 inspiring_fermat[81695]: Scheduled mgr update...
Jan 30 23:18:35 np0005603435 inspiring_fermat[81695]: Scheduled osd.default_drive_group update...
Jan 30 23:18:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:35 np0005603435 ceph-mgr[75599]: [progress INFO root] update: starting ev 269cbc23-01a6-4866-becf-c0bf0684d68a (Updating mgr deployment (-1 -> 1))
Jan 30 23:18:35 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.uknvyv from compute-0 -- ports [8765]
Jan 30 23:18:35 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.uknvyv from compute-0 -- ports [8765]
Jan 30 23:18:35 np0005603435 systemd[1]: libpod-46f77b853b27c7ae473c603f1c7fb711d5c3f610ed7dafa304d6af0a45cccf6f.scope: Deactivated successfully.
Jan 30 23:18:35 np0005603435 podman[81657]: 2026-01-31 04:18:35.676501782 +0000 UTC m=+1.026030940 container died 46f77b853b27c7ae473c603f1c7fb711d5c3f610ed7dafa304d6af0a45cccf6f (image=quay.io/ceph/ceph:v20, name=inspiring_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 30 23:18:35 np0005603435 systemd[1]: var-lib-containers-storage-overlay-8bc24bcb73d8c3ba6c60eef94cd68ea54267bf1ec4160afe3182c3eaf9e832f1-merged.mount: Deactivated successfully.
Jan 30 23:18:35 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'selftest'
Jan 30 23:18:35 np0005603435 podman[81657]: 2026-01-31 04:18:35.72493442 +0000 UTC m=+1.074463598 container remove 46f77b853b27c7ae473c603f1c7fb711d5c3f610ed7dafa304d6af0a45cccf6f (image=quay.io/ceph/ceph:v20, name=inspiring_fermat, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:35 np0005603435 systemd[1]: libpod-conmon-46f77b853b27c7ae473c603f1c7fb711d5c3f610ed7dafa304d6af0a45cccf6f.scope: Deactivated successfully.
Jan 30 23:18:35 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'smb'
Jan 30 23:18:36 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:36 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:36 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:36 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:36 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:18:36 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:36 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:36 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:36 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:36 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:36 np0005603435 systemd[1]: Stopping Ceph mgr.compute-0.uknvyv for 95d2f419-0dd0-56f2-a094-353f8c7597ed...
Jan 30 23:18:36 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'snap_schedule'
Jan 30 23:18:36 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'stats'
Jan 30 23:18:36 np0005603435 python3[81897]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:18:36 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'status'
Jan 30 23:18:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:36 np0005603435 ceph-mgr[80738]: mgr[py] Loading python module 'telegraf'
Jan 30 23:18:36 np0005603435 podman[81923]: 2026-01-31 04:18:36.330600066 +0000 UTC m=+0.134202263 container died bd0b9558020bad42b45589f62219c698b58ed75204bf00c3b27c884dc2e25f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-uknvyv, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 30 23:18:36 np0005603435 podman[81931]: 2026-01-31 04:18:36.32097572 +0000 UTC m=+0.091405238 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:18:36 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b27b2120b774c687ce013ed379997b014dad9b191b17143a3183dc1cc33ddaaa-merged.mount: Deactivated successfully.
Jan 30 23:18:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:18:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:18:36 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:18:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:18:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:18:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:18:36 np0005603435 podman[81923]: 2026-01-31 04:18:36.975395081 +0000 UTC m=+0.778997278 container remove bd0b9558020bad42b45589f62219c698b58ed75204bf00c3b27c884dc2e25f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-uknvyv, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:18:36 np0005603435 bash[81923]: ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-uknvyv
Jan 30 23:18:36 np0005603435 systemd[1]: ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed@mgr.compute-0.uknvyv.service: Main process exited, code=exited, status=143/n/a
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: Added host compute-0
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: Saving service mon spec with placement compute-0
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: Saving service mgr spec with placement compute-0
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: Saving service osd.default_drive_group spec with placement compute-0
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: Removing daemon mgr.compute-0.uknvyv from compute-0 -- ports [8765]
Jan 30 23:18:37 np0005603435 podman[81931]: 2026-01-31 04:18:37.122066796 +0000 UTC m=+0.892496324 container create 9599a86f1f682c24580c687a92206607ca2c5750b62979063351f047a163ef64 (image=quay.io/ceph/ceph:v20, name=confident_ardinghelli, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:37 np0005603435 systemd[1]: Started libpod-conmon-9599a86f1f682c24580c687a92206607ca2c5750b62979063351f047a163ef64.scope.
Jan 30 23:18:37 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:37 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f508690b76430a75a13418e2c0d650abb209dc6dbfd3eafb063aa7764cdaf72/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:37 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f508690b76430a75a13418e2c0d650abb209dc6dbfd3eafb063aa7764cdaf72/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:37 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f508690b76430a75a13418e2c0d650abb209dc6dbfd3eafb063aa7764cdaf72/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:37 np0005603435 podman[81931]: 2026-01-31 04:18:37.326953169 +0000 UTC m=+1.097382747 container init 9599a86f1f682c24580c687a92206607ca2c5750b62979063351f047a163ef64 (image=quay.io/ceph/ceph:v20, name=confident_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:18:37 np0005603435 podman[81931]: 2026-01-31 04:18:37.336872882 +0000 UTC m=+1.107302410 container start 9599a86f1f682c24580c687a92206607ca2c5750b62979063351f047a163ef64 (image=quay.io/ceph/ceph:v20, name=confident_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:37 np0005603435 podman[81931]: 2026-01-31 04:18:37.359267698 +0000 UTC m=+1.129697316 container attach 9599a86f1f682c24580c687a92206607ca2c5750b62979063351f047a163ef64 (image=quay.io/ceph/ceph:v20, name=confident_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 30 23:18:37 np0005603435 systemd[1]: ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed@mgr.compute-0.uknvyv.service: Failed with result 'exit-code'.
Jan 30 23:18:37 np0005603435 systemd[1]: Stopped Ceph mgr.compute-0.uknvyv for 95d2f419-0dd0-56f2-a094-353f8c7597ed.
Jan 30 23:18:37 np0005603435 systemd[1]: ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed@mgr.compute-0.uknvyv.service: Consumed 6.521s CPU time, 395.6M memory peak, read 0B from disk, written 171.0K to disk.
Jan 30 23:18:37 np0005603435 systemd[1]: Reloading.
Jan 30 23:18:37 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:18:37 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:18:37 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.uknvyv
Jan 30 23:18:37 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.uknvyv
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.uknvyv"} v 0)
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.uknvyv"} : dispatch
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.uknvyv"}]': finished
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:37 np0005603435 ceph-mgr[75599]: [progress INFO root] complete: finished ev 269cbc23-01a6-4866-becf-c0bf0684d68a (Updating mgr deployment (-1 -> 1))
Jan 30 23:18:37 np0005603435 ceph-mgr[75599]: [progress INFO root] Completed event 269cbc23-01a6-4866-becf-c0bf0684d68a (Updating mgr deployment (-1 -> 1)) in 2 seconds
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 30 23:18:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1945554926' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 30 23:18:37 np0005603435 confident_ardinghelli[81995]: 
Jan 30 23:18:37 np0005603435 confident_ardinghelli[81995]: {"fsid":"95d2f419-0dd0-56f2-a094-353f8c7597ed","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":49,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-31T04:17:45:671431+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-31T04:17:45.674142+0000","services":{}},"progress_events":{"269cbc23-01a6-4866-becf-c0bf0684d68a":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 30 23:18:37 np0005603435 systemd[1]: libpod-9599a86f1f682c24580c687a92206607ca2c5750b62979063351f047a163ef64.scope: Deactivated successfully.
Jan 30 23:18:37 np0005603435 podman[81931]: 2026-01-31 04:18:37.961122653 +0000 UTC m=+1.731552161 container died 9599a86f1f682c24580c687a92206607ca2c5750b62979063351f047a163ef64 (image=quay.io/ceph/ceph:v20, name=confident_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 30 23:18:37 np0005603435 systemd[1]: var-lib-containers-storage-overlay-9f508690b76430a75a13418e2c0d650abb209dc6dbfd3eafb063aa7764cdaf72-merged.mount: Deactivated successfully.
Jan 30 23:18:38 np0005603435 podman[81931]: 2026-01-31 04:18:38.004653945 +0000 UTC m=+1.775083453 container remove 9599a86f1f682c24580c687a92206607ca2c5750b62979063351f047a163ef64 (image=quay.io/ceph/ceph:v20, name=confident_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:18:38 np0005603435 systemd[1]: libpod-conmon-9599a86f1f682c24580c687a92206607ca2c5750b62979063351f047a163ef64.scope: Deactivated successfully.
Jan 30 23:18:38 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.uknvyv"} : dispatch
Jan 30 23:18:38 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.uknvyv"}]': finished
Jan 30 23:18:38 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:38 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:38 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:18:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:38 np0005603435 podman[82140]: 2026-01-31 04:18:38.369509755 +0000 UTC m=+0.060428920 container create 7b798ff05b9e577cb1f45c934ca59126f5f000582d26a8e00ac56fb1004a14ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_haslett, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:38 np0005603435 systemd[1]: Started libpod-conmon-7b798ff05b9e577cb1f45c934ca59126f5f000582d26a8e00ac56fb1004a14ab.scope.
Jan 30 23:18:38 np0005603435 podman[82140]: 2026-01-31 04:18:38.342601383 +0000 UTC m=+0.033520588 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:38 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:38 np0005603435 podman[82140]: 2026-01-31 04:18:38.465296255 +0000 UTC m=+0.156215470 container init 7b798ff05b9e577cb1f45c934ca59126f5f000582d26a8e00ac56fb1004a14ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:18:38 np0005603435 podman[82140]: 2026-01-31 04:18:38.474526492 +0000 UTC m=+0.165445657 container start 7b798ff05b9e577cb1f45c934ca59126f5f000582d26a8e00ac56fb1004a14ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:18:38 np0005603435 podman[82140]: 2026-01-31 04:18:38.478748411 +0000 UTC m=+0.169667576 container attach 7b798ff05b9e577cb1f45c934ca59126f5f000582d26a8e00ac56fb1004a14ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:18:38 np0005603435 romantic_haslett[82156]: 167 167
Jan 30 23:18:38 np0005603435 systemd[1]: libpod-7b798ff05b9e577cb1f45c934ca59126f5f000582d26a8e00ac56fb1004a14ab.scope: Deactivated successfully.
Jan 30 23:18:38 np0005603435 podman[82140]: 2026-01-31 04:18:38.482689154 +0000 UTC m=+0.173608319 container died 7b798ff05b9e577cb1f45c934ca59126f5f000582d26a8e00ac56fb1004a14ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_haslett, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:18:38 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5b9cfba3880eb3ed34debe2924ba5b2f2939d0f2f36db56721ea3e93d7d5a9b2-merged.mount: Deactivated successfully.
Jan 30 23:18:38 np0005603435 podman[82140]: 2026-01-31 04:18:38.52808931 +0000 UTC m=+0.219008465 container remove 7b798ff05b9e577cb1f45c934ca59126f5f000582d26a8e00ac56fb1004a14ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_haslett, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 30 23:18:38 np0005603435 systemd[1]: libpod-conmon-7b798ff05b9e577cb1f45c934ca59126f5f000582d26a8e00ac56fb1004a14ab.scope: Deactivated successfully.
Jan 30 23:18:38 np0005603435 podman[82181]: 2026-01-31 04:18:38.721470412 +0000 UTC m=+0.061472395 container create c537a6ff5b54b257548dbfda68a2ba1cd0f0c9dbb20aab5b91d644a4baee34c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_morse, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 30 23:18:38 np0005603435 systemd[1]: Started libpod-conmon-c537a6ff5b54b257548dbfda68a2ba1cd0f0c9dbb20aab5b91d644a4baee34c4.scope.
Jan 30 23:18:38 np0005603435 podman[82181]: 2026-01-31 04:18:38.694855917 +0000 UTC m=+0.034857960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:38 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:38 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d4961a84b0abc517f8fb1ca1478f7610ba84712d1f73127133ce7d8e54256da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:38 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d4961a84b0abc517f8fb1ca1478f7610ba84712d1f73127133ce7d8e54256da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:38 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d4961a84b0abc517f8fb1ca1478f7610ba84712d1f73127133ce7d8e54256da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:38 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d4961a84b0abc517f8fb1ca1478f7610ba84712d1f73127133ce7d8e54256da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:38 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d4961a84b0abc517f8fb1ca1478f7610ba84712d1f73127133ce7d8e54256da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:38 np0005603435 podman[82181]: 2026-01-31 04:18:38.829290315 +0000 UTC m=+0.169292288 container init c537a6ff5b54b257548dbfda68a2ba1cd0f0c9dbb20aab5b91d644a4baee34c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 30 23:18:38 np0005603435 podman[82181]: 2026-01-31 04:18:38.844589984 +0000 UTC m=+0.184591967 container start c537a6ff5b54b257548dbfda68a2ba1cd0f0c9dbb20aab5b91d644a4baee34c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_morse, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 30 23:18:38 np0005603435 podman[82181]: 2026-01-31 04:18:38.849332985 +0000 UTC m=+0.189334978 container attach c537a6ff5b54b257548dbfda68a2ba1cd0f0c9dbb20aab5b91d644a4baee34c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:18:38 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:39 np0005603435 ceph-mon[75307]: Removing key for mgr.compute-0.uknvyv
Jan 30 23:18:39 np0005603435 loving_morse[82197]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:18:39 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:39 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:39 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 67a07621-a454-4b93-966d-529cdb301722
Jan 30 23:18:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "67a07621-a454-4b93-966d-529cdb301722"} v 0)
Jan 30 23:18:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2795636475' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "67a07621-a454-4b93-966d-529cdb301722"} : dispatch
Jan 30 23:18:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 30 23:18:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 30 23:18:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2795636475' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "67a07621-a454-4b93-966d-529cdb301722"}]': finished
Jan 30 23:18:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 30 23:18:40 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 30 23:18:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 30 23:18:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 30 23:18:40 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 30 23:18:40 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/2795636475' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "67a07621-a454-4b93-966d-529cdb301722"} : dispatch
Jan 30 23:18:40 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/2795636475' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "67a07621-a454-4b93-966d-529cdb301722"}]': finished
Jan 30 23:18:40 np0005603435 loving_morse[82197]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 30 23:18:40 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 30 23:18:40 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 30 23:18:40 np0005603435 loving_morse[82197]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:40 np0005603435 lvm[82290]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:18:40 np0005603435 lvm[82290]: VG ceph_vg0 finished
Jan 30 23:18:40 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 30 23:18:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 30 23:18:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2417535469' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 30 23:18:40 np0005603435 loving_morse[82197]: stderr: got monmap epoch 1
Jan 30 23:18:40 np0005603435 loving_morse[82197]: --> Creating keyring file for osd.0
Jan 30 23:18:40 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 30 23:18:40 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 30 23:18:40 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 67a07621-a454-4b93-966d-529cdb301722 --setuser ceph --setgroup ceph
Jan 30 23:18:40 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:41 np0005603435 ceph-mon[75307]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 30 23:18:41 np0005603435 ceph-mon[75307]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 30 23:18:41 np0005603435 loving_morse[82197]: stderr: 2026-01-31T04:18:40.823+0000 7fb3f99038c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 30 23:18:41 np0005603435 loving_morse[82197]: stderr: 2026-01-31T04:18:40.839+0000 7fb3f99038c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 30 23:18:41 np0005603435 loving_morse[82197]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 30 23:18:41 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 30 23:18:41 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 30 23:18:41 np0005603435 loving_morse[82197]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:41 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:41 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 30 23:18:41 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 30 23:18:41 np0005603435 loving_morse[82197]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 30 23:18:41 np0005603435 loving_morse[82197]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
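At this point `ceph-volume lvm create` has finished for `ceph_vg0/ceph_lv0`, and the journal records every subprocess it ran as a `Running command:` line. A small sketch of recovering that command sequence from journal lines like the ones above (the three sample lines are copied verbatim from this log; the regex is an assumption tailored to this journal format):

```python
import re

# Three journal lines copied from the ceph-volume output above.
log = """\
Jan 30 23:18:40 np0005603435 loving_morse[82197]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 30 23:18:40 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 30 23:18:40 np0005603435 loving_morse[82197]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
"""

# Capture the binary path after the "Running command:" marker.
CMD = re.compile(r"Running command: (?P<cmd>/\S+)")

commands = [m.group("cmd")
            for line in log.splitlines()
            if (m := CMD.search(line))]
print(commands)  # binaries invoked by ceph-volume, in order
```

This kind of extraction is handy when auditing what a prepare/activate cycle actually touched on disk.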
Jan 30 23:18:41 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:41 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:41 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2
Jan 30 23:18:41 np0005603435 ceph-mgr[75599]: [progress INFO root] Writing back 3 completed events
Jan 30 23:18:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 30 23:18:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:42 np0005603435 ceph-mon[75307]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 30 23:18:42 np0005603435 ceph-mon[75307]: Cluster is now healthy
Jan 30 23:18:42 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2"} v 0)
Jan 30 23:18:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/88151512' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2"} : dispatch
Jan 30 23:18:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 30 23:18:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 30 23:18:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/88151512' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2"}]': finished
Jan 30 23:18:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 30 23:18:42 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 30 23:18:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 30 23:18:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 30 23:18:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:18:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:18:42 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 30 23:18:42 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:18:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:42 np0005603435 lvm[83227]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:18:42 np0005603435 lvm[83227]: VG ceph_vg1 finished
Jan 30 23:18:42 np0005603435 loving_morse[82197]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 30 23:18:42 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Jan 30 23:18:42 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 30 23:18:42 np0005603435 loving_morse[82197]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:42 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 30 23:18:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 30 23:18:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1996187481' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 30 23:18:42 np0005603435 loving_morse[82197]: stderr: got monmap epoch 1
Jan 30 23:18:42 np0005603435 loving_morse[82197]: --> Creating keyring file for osd.1
Jan 30 23:18:42 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 30 23:18:42 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 30 23:18:42 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2 --setuser ceph --setgroup ceph
Jan 30 23:18:42 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:43 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/88151512' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2"} : dispatch
Jan 30 23:18:43 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/88151512' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2"}]': finished
Jan 30 23:18:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:18:43 np0005603435 loving_morse[82197]: stderr: 2026-01-31T04:18:42.956+0000 7f102997b8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 30 23:18:43 np0005603435 loving_morse[82197]: stderr: 2026-01-31T04:18:42.971+0000 7f102997b8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 30 23:18:43 np0005603435 loving_morse[82197]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Jan 30 23:18:43 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 30 23:18:43 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 30 23:18:43 np0005603435 loving_morse[82197]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:43 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:43 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 30 23:18:43 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 30 23:18:43 np0005603435 loving_morse[82197]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 30 23:18:43 np0005603435 loving_morse[82197]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Jan 30 23:18:43 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:43 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:43 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 4ecd8bd6-f445-4b7a-858d-58ed6f88b29e
Jan 30 23:18:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e"} v 0)
Jan 30 23:18:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1211651177' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e"} : dispatch
Jan 30 23:18:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 30 23:18:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 30 23:18:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1211651177' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e"}]': finished
Jan 30 23:18:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Jan 30 23:18:44 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
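The monitor's osdmap lines above trace the cluster growing from 1 to 3 registered OSDs across epochs e4, e5, and e6 (all still `0 up`, since the daemons have not booted yet). A minimal sketch of parsing that progression (sample lines copied from this log; the regex is an assumption matching this message format):

```python
import re

# osdmap summary lines copied from the ceph-mon output above.
lines = [
    "mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in",
    "mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in",
    "mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in",
]

# (epoch, total, up, in) from "eN: T total, U up, I in".
OSDMAP = re.compile(r"e(\d+): (\d+) total, (\d+) up, (\d+) in")

history = [tuple(map(int, m.groups()))
           for line in lines
           if (m := OSDMAP.search(line))]
print(history)  # [(4, 1, 0, 1), (5, 2, 0, 2), (6, 3, 0, 3)]
```

Watching `total` climb while `up` stays at 0 is the expected intermediate state during `ceph-volume` provisioning; `up` only increases once each `ceph-osd` daemon starts and checks in.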
Jan 30 23:18:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 30 23:18:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 30 23:18:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:18:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:18:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:18:44 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 30 23:18:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:18:44 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:18:44 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:18:44 np0005603435 loving_morse[82197]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 30 23:18:44 np0005603435 lvm[84163]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:18:44 np0005603435 lvm[84163]: VG ceph_vg2 finished
Jan 30 23:18:44 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Jan 30 23:18:44 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 30 23:18:44 np0005603435 loving_morse[82197]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 30 23:18:44 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 30 23:18:44 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 30 23:18:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2659957782' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 30 23:18:45 np0005603435 loving_morse[82197]: stderr: got monmap epoch 1
Jan 30 23:18:45 np0005603435 loving_morse[82197]: --> Creating keyring file for osd.2
Jan 30 23:18:45 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 30 23:18:45 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 30 23:18:45 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 4ecd8bd6-f445-4b7a-858d-58ed6f88b29e --setuser ceph --setgroup ceph
Jan 30 23:18:45 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/1211651177' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e"} : dispatch
Jan 30 23:18:45 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/1211651177' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e"}]': finished
Jan 30 23:18:46 np0005603435 loving_morse[82197]: stderr: 2026-01-31T04:18:45.341+0000 7f687495d8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Jan 30 23:18:46 np0005603435 loving_morse[82197]: stderr: 2026-01-31T04:18:45.362+0000 7f687495d8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 30 23:18:46 np0005603435 loving_morse[82197]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Jan 30 23:18:46 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 30 23:18:46 np0005603435 loving_morse[82197]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 30 23:18:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:46 np0005603435 loving_morse[82197]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 30 23:18:46 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 30 23:18:46 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 30 23:18:46 np0005603435 loving_morse[82197]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 30 23:18:46 np0005603435 loving_morse[82197]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 30 23:18:46 np0005603435 loving_morse[82197]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Jan 30 23:18:46 np0005603435 systemd[1]: libpod-c537a6ff5b54b257548dbfda68a2ba1cd0f0c9dbb20aab5b91d644a4baee34c4.scope: Deactivated successfully.
Jan 30 23:18:46 np0005603435 systemd[1]: libpod-c537a6ff5b54b257548dbfda68a2ba1cd0f0c9dbb20aab5b91d644a4baee34c4.scope: Consumed 5.939s CPU time.
Jan 30 23:18:46 np0005603435 podman[85075]: 2026-01-31 04:18:46.439750548 +0000 UTC m=+0.028810128 container died c537a6ff5b54b257548dbfda68a2ba1cd0f0c9dbb20aab5b91d644a4baee34c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 30 23:18:46 np0005603435 systemd[1]: var-lib-containers-storage-overlay-8d4961a84b0abc517f8fb1ca1478f7610ba84712d1f73127133ce7d8e54256da-merged.mount: Deactivated successfully.
Jan 30 23:18:46 np0005603435 podman[85075]: 2026-01-31 04:18:46.497091435 +0000 UTC m=+0.086151005 container remove c537a6ff5b54b257548dbfda68a2ba1cd0f0c9dbb20aab5b91d644a4baee34c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 30 23:18:46 np0005603435 systemd[1]: libpod-conmon-c537a6ff5b54b257548dbfda68a2ba1cd0f0c9dbb20aab5b91d644a4baee34c4.scope: Deactivated successfully.
Jan 30 23:18:46 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:47 np0005603435 podman[85153]: 2026-01-31 04:18:47.013918764 +0000 UTC m=+0.052968885 container create bb5e7beb6cf6ec1cfb11fd94c0b0e88f75975b8e1bee01706ef69d9c85e57152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 30 23:18:47 np0005603435 systemd[1]: Started libpod-conmon-bb5e7beb6cf6ec1cfb11fd94c0b0e88f75975b8e1bee01706ef69d9c85e57152.scope.
Jan 30 23:18:47 np0005603435 podman[85153]: 2026-01-31 04:18:46.989380118 +0000 UTC m=+0.028430289 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:47 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:47 np0005603435 podman[85153]: 2026-01-31 04:18:47.109193472 +0000 UTC m=+0.148243583 container init bb5e7beb6cf6ec1cfb11fd94c0b0e88f75975b8e1bee01706ef69d9c85e57152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 30 23:18:47 np0005603435 podman[85153]: 2026-01-31 04:18:47.11633613 +0000 UTC m=+0.155386211 container start bb5e7beb6cf6ec1cfb11fd94c0b0e88f75975b8e1bee01706ef69d9c85e57152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 30 23:18:47 np0005603435 podman[85153]: 2026-01-31 04:18:47.119874853 +0000 UTC m=+0.158924964 container attach bb5e7beb6cf6ec1cfb11fd94c0b0e88f75975b8e1bee01706ef69d9c85e57152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bose, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:47 np0005603435 dazzling_bose[85169]: 167 167
Jan 30 23:18:47 np0005603435 systemd[1]: libpod-bb5e7beb6cf6ec1cfb11fd94c0b0e88f75975b8e1bee01706ef69d9c85e57152.scope: Deactivated successfully.
Jan 30 23:18:47 np0005603435 podman[85153]: 2026-01-31 04:18:47.123494548 +0000 UTC m=+0.162544659 container died bb5e7beb6cf6ec1cfb11fd94c0b0e88f75975b8e1bee01706ef69d9c85e57152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bose, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:47 np0005603435 systemd[1]: var-lib-containers-storage-overlay-ef6fa1155df6d066e59dd612c904bbf555e7d0c0e89a11723220eea434da278a-merged.mount: Deactivated successfully.
Jan 30 23:18:47 np0005603435 podman[85153]: 2026-01-31 04:18:47.169833536 +0000 UTC m=+0.208883657 container remove bb5e7beb6cf6ec1cfb11fd94c0b0e88f75975b8e1bee01706ef69d9c85e57152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 30 23:18:47 np0005603435 systemd[1]: libpod-conmon-bb5e7beb6cf6ec1cfb11fd94c0b0e88f75975b8e1bee01706ef69d9c85e57152.scope: Deactivated successfully.
Jan 30 23:18:47 np0005603435 podman[85193]: 2026-01-31 04:18:47.332403775 +0000 UTC m=+0.040739148 container create ac82fec5eec37578a168d13f0b50790fbae77b4105b75c926e72a093f6615331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_margulis, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:18:47 np0005603435 systemd[1]: Started libpod-conmon-ac82fec5eec37578a168d13f0b50790fbae77b4105b75c926e72a093f6615331.scope.
Jan 30 23:18:47 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9398bc6bab768f29b36910ee1fe73185232b58d589ce3f3a03a314fe1a0c53c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9398bc6bab768f29b36910ee1fe73185232b58d589ce3f3a03a314fe1a0c53c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9398bc6bab768f29b36910ee1fe73185232b58d589ce3f3a03a314fe1a0c53c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9398bc6bab768f29b36910ee1fe73185232b58d589ce3f3a03a314fe1a0c53c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:47 np0005603435 podman[85193]: 2026-01-31 04:18:47.31602248 +0000 UTC m=+0.024357863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:47 np0005603435 podman[85193]: 2026-01-31 04:18:47.417885432 +0000 UTC m=+0.126220885 container init ac82fec5eec37578a168d13f0b50790fbae77b4105b75c926e72a093f6615331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 30 23:18:47 np0005603435 podman[85193]: 2026-01-31 04:18:47.425651575 +0000 UTC m=+0.133986978 container start ac82fec5eec37578a168d13f0b50790fbae77b4105b75c926e72a093f6615331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 30 23:18:47 np0005603435 podman[85193]: 2026-01-31 04:18:47.429425943 +0000 UTC m=+0.137761346 container attach ac82fec5eec37578a168d13f0b50790fbae77b4105b75c926e72a093f6615331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_margulis, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:47 np0005603435 boring_margulis[85209]: {
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:    "0": [
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:        {
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "devices": [
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "/dev/loop3"
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            ],
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "lv_name": "ceph_lv0",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "lv_size": "21470642176",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "name": "ceph_lv0",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "tags": {
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.cluster_name": "ceph",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.crush_device_class": "",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.encrypted": "0",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.objectstore": "bluestore",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.osd_id": "0",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.type": "block",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.vdo": "0",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.with_tpm": "0"
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            },
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "type": "block",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "vg_name": "ceph_vg0"
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:        }
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:    ],
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:    "1": [
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:        {
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "devices": [
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "/dev/loop4"
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            ],
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "lv_name": "ceph_lv1",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "lv_size": "21470642176",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "name": "ceph_lv1",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "tags": {
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.cluster_name": "ceph",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.crush_device_class": "",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.encrypted": "0",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.objectstore": "bluestore",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.osd_id": "1",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.type": "block",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.vdo": "0",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.with_tpm": "0"
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            },
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "type": "block",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "vg_name": "ceph_vg1"
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:        }
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:    ],
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:    "2": [
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:        {
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "devices": [
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "/dev/loop5"
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            ],
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "lv_name": "ceph_lv2",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "lv_size": "21470642176",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "name": "ceph_lv2",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "tags": {
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.cluster_name": "ceph",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.crush_device_class": "",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.encrypted": "0",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.objectstore": "bluestore",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.osd_id": "2",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.type": "block",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.vdo": "0",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:                "ceph.with_tpm": "0"
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            },
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "type": "block",
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:            "vg_name": "ceph_vg2"
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:        }
Jan 30 23:18:47 np0005603435 boring_margulis[85209]:    ]
Jan 30 23:18:47 np0005603435 boring_margulis[85209]: }
Jan 30 23:18:47 np0005603435 systemd[1]: libpod-ac82fec5eec37578a168d13f0b50790fbae77b4105b75c926e72a093f6615331.scope: Deactivated successfully.
Jan 30 23:18:47 np0005603435 podman[85193]: 2026-01-31 04:18:47.757698164 +0000 UTC m=+0.466033557 container died ac82fec5eec37578a168d13f0b50790fbae77b4105b75c926e72a093f6615331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_margulis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:47 np0005603435 systemd[1]: var-lib-containers-storage-overlay-f9398bc6bab768f29b36910ee1fe73185232b58d589ce3f3a03a314fe1a0c53c-merged.mount: Deactivated successfully.
Jan 30 23:18:47 np0005603435 podman[85193]: 2026-01-31 04:18:47.927166854 +0000 UTC m=+0.635502257 container remove ac82fec5eec37578a168d13f0b50790fbae77b4105b75c926e72a093f6615331 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:47 np0005603435 systemd[1]: libpod-conmon-ac82fec5eec37578a168d13f0b50790fbae77b4105b75c926e72a093f6615331.scope: Deactivated successfully.
Jan 30 23:18:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 30 23:18:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 30 23:18:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:18:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:18:48 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 30 23:18:48 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 30 23:18:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:48 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 30 23:18:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:18:48 np0005603435 podman[85321]: 2026-01-31 04:18:48.563852239 +0000 UTC m=+0.097447471 container create fe6032347a741416ccc01dcd8ba9e1307153784a3aa3d7850455cdda23d2134e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_varahamihira, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 30 23:18:48 np0005603435 podman[85321]: 2026-01-31 04:18:48.488978309 +0000 UTC m=+0.022573641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:48 np0005603435 systemd[1]: Started libpod-conmon-fe6032347a741416ccc01dcd8ba9e1307153784a3aa3d7850455cdda23d2134e.scope.
Jan 30 23:18:48 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:48 np0005603435 podman[85321]: 2026-01-31 04:18:48.734828484 +0000 UTC m=+0.268423776 container init fe6032347a741416ccc01dcd8ba9e1307153784a3aa3d7850455cdda23d2134e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_varahamihira, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 30 23:18:48 np0005603435 podman[85321]: 2026-01-31 04:18:48.742837812 +0000 UTC m=+0.276433084 container start fe6032347a741416ccc01dcd8ba9e1307153784a3aa3d7850455cdda23d2134e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_varahamihira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 30 23:18:48 np0005603435 nostalgic_varahamihira[85337]: 167 167
Jan 30 23:18:48 np0005603435 systemd[1]: libpod-fe6032347a741416ccc01dcd8ba9e1307153784a3aa3d7850455cdda23d2134e.scope: Deactivated successfully.
Jan 30 23:18:48 np0005603435 podman[85321]: 2026-01-31 04:18:48.804870459 +0000 UTC m=+0.338465741 container attach fe6032347a741416ccc01dcd8ba9e1307153784a3aa3d7850455cdda23d2134e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:48 np0005603435 podman[85321]: 2026-01-31 04:18:48.806858296 +0000 UTC m=+0.340453568 container died fe6032347a741416ccc01dcd8ba9e1307153784a3aa3d7850455cdda23d2134e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_varahamihira, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:48 np0005603435 systemd[1]: var-lib-containers-storage-overlay-3d5e2f4d557c2c666f85bdb0eea57526fd01b6e0776029981e424f7a502df3db-merged.mount: Deactivated successfully.
Jan 30 23:18:48 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:48 np0005603435 podman[85321]: 2026-01-31 04:18:48.94286214 +0000 UTC m=+0.476457402 container remove fe6032347a741416ccc01dcd8ba9e1307153784a3aa3d7850455cdda23d2134e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:48 np0005603435 systemd[1]: libpod-conmon-fe6032347a741416ccc01dcd8ba9e1307153784a3aa3d7850455cdda23d2134e.scope: Deactivated successfully.
Jan 30 23:18:49 np0005603435 podman[85369]: 2026-01-31 04:18:49.188200713 +0000 UTC m=+0.023784760 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:49 np0005603435 podman[85369]: 2026-01-31 04:18:49.295715679 +0000 UTC m=+0.131299696 container create 8a506204110f400189f6988dbdc7600b8dfff95b092aedf3970504cd3f96c22e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate-test, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:49 np0005603435 systemd[1]: Started libpod-conmon-8a506204110f400189f6988dbdc7600b8dfff95b092aedf3970504cd3f96c22e.scope.
Jan 30 23:18:49 np0005603435 ceph-mon[75307]: Deploying daemon osd.0 on compute-0
Jan 30 23:18:49 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:49 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edde892d1ffe902ae4d0beae02a5e7e08152a4b89aa06a0d84ed7d3349f74bd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:49 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edde892d1ffe902ae4d0beae02a5e7e08152a4b89aa06a0d84ed7d3349f74bd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:49 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edde892d1ffe902ae4d0beae02a5e7e08152a4b89aa06a0d84ed7d3349f74bd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:49 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edde892d1ffe902ae4d0beae02a5e7e08152a4b89aa06a0d84ed7d3349f74bd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:49 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edde892d1ffe902ae4d0beae02a5e7e08152a4b89aa06a0d84ed7d3349f74bd9/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:49 np0005603435 podman[85369]: 2026-01-31 04:18:49.505809023 +0000 UTC m=+0.341393070 container init 8a506204110f400189f6988dbdc7600b8dfff95b092aedf3970504cd3f96c22e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate-test, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:49 np0005603435 podman[85369]: 2026-01-31 04:18:49.514416115 +0000 UTC m=+0.350000162 container start 8a506204110f400189f6988dbdc7600b8dfff95b092aedf3970504cd3f96c22e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate-test, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:18:49 np0005603435 podman[85369]: 2026-01-31 04:18:49.612609261 +0000 UTC m=+0.448193278 container attach 8a506204110f400189f6988dbdc7600b8dfff95b092aedf3970504cd3f96c22e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:49 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate-test[85385]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 30 23:18:49 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate-test[85385]:                            [--no-systemd] [--no-tmpfs]
Jan 30 23:18:49 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate-test[85385]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 30 23:18:49 np0005603435 systemd[1]: libpod-8a506204110f400189f6988dbdc7600b8dfff95b092aedf3970504cd3f96c22e.scope: Deactivated successfully.
Jan 30 23:18:49 np0005603435 podman[85369]: 2026-01-31 04:18:49.713821859 +0000 UTC m=+0.549405876 container died 8a506204110f400189f6988dbdc7600b8dfff95b092aedf3970504cd3f96c22e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate-test, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 30 23:18:49 np0005603435 systemd[1]: var-lib-containers-storage-overlay-edde892d1ffe902ae4d0beae02a5e7e08152a4b89aa06a0d84ed7d3349f74bd9-merged.mount: Deactivated successfully.
Jan 30 23:18:49 np0005603435 podman[85369]: 2026-01-31 04:18:49.97739795 +0000 UTC m=+0.812981997 container remove 8a506204110f400189f6988dbdc7600b8dfff95b092aedf3970504cd3f96c22e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate-test, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 30 23:18:49 np0005603435 systemd[1]: libpod-conmon-8a506204110f400189f6988dbdc7600b8dfff95b092aedf3970504cd3f96c22e.scope: Deactivated successfully.
Jan 30 23:18:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:50 np0005603435 systemd[1]: Reloading.
Jan 30 23:18:50 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:18:50 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:18:50 np0005603435 systemd[1]: Reloading.
Jan 30 23:18:50 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:18:50 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:18:50 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:51 np0005603435 systemd[1]: Starting Ceph osd.0 for 95d2f419-0dd0-56f2-a094-353f8c7597ed...
Jan 30 23:18:51 np0005603435 podman[85546]: 2026-01-31 04:18:51.401574671 +0000 UTC m=+0.114138612 container create 2dd65c8badca91a03cd03556bba658980f302c72ea9ef470d585ac082b2a8fa8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:51 np0005603435 podman[85546]: 2026-01-31 04:18:51.309806625 +0000 UTC m=+0.022370616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:51 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:51 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4982114471e21ef00945448e6d4a18ddf75bfe99513c13d4c83e4bdb4c9a12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:51 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4982114471e21ef00945448e6d4a18ddf75bfe99513c13d4c83e4bdb4c9a12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:51 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4982114471e21ef00945448e6d4a18ddf75bfe99513c13d4c83e4bdb4c9a12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:51 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4982114471e21ef00945448e6d4a18ddf75bfe99513c13d4c83e4bdb4c9a12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:51 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4982114471e21ef00945448e6d4a18ddf75bfe99513c13d4c83e4bdb4c9a12/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:51 np0005603435 podman[85546]: 2026-01-31 04:18:51.507839427 +0000 UTC m=+0.220403468 container init 2dd65c8badca91a03cd03556bba658980f302c72ea9ef470d585ac082b2a8fa8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 30 23:18:51 np0005603435 podman[85546]: 2026-01-31 04:18:51.52120163 +0000 UTC m=+0.233765621 container start 2dd65c8badca91a03cd03556bba658980f302c72ea9ef470d585ac082b2a8fa8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:18:51 np0005603435 podman[85546]: 2026-01-31 04:18:51.525150353 +0000 UTC m=+0.237714384 container attach 2dd65c8badca91a03cd03556bba658980f302c72ea9ef470d585ac082b2a8fa8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 30 23:18:51 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate[85561]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:51 np0005603435 bash[85546]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:51 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate[85561]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:51 np0005603435 bash[85546]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:52 np0005603435 lvm[85646]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:18:52 np0005603435 lvm[85649]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:18:52 np0005603435 lvm[85649]: VG ceph_vg2 finished
Jan 30 23:18:52 np0005603435 lvm[85646]: VG ceph_vg0 finished
Jan 30 23:18:52 np0005603435 lvm[85648]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:18:52 np0005603435 lvm[85648]: VG ceph_vg1 finished
Jan 30 23:18:52 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate[85561]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 30 23:18:52 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate[85561]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:52 np0005603435 bash[85546]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 30 23:18:52 np0005603435 bash[85546]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:52 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate[85561]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:52 np0005603435 bash[85546]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:52 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate[85561]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 30 23:18:52 np0005603435 bash[85546]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 30 23:18:52 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate[85561]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 30 23:18:52 np0005603435 bash[85546]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 30 23:18:52 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate[85561]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:52 np0005603435 bash[85546]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:52 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate[85561]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:52 np0005603435 bash[85546]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:52 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate[85561]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 30 23:18:52 np0005603435 bash[85546]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 30 23:18:52 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate[85561]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 30 23:18:52 np0005603435 bash[85546]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 30 23:18:52 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate[85561]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 30 23:18:52 np0005603435 bash[85546]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 30 23:18:52 np0005603435 systemd[1]: libpod-2dd65c8badca91a03cd03556bba658980f302c72ea9ef470d585ac082b2a8fa8.scope: Deactivated successfully.
Jan 30 23:18:52 np0005603435 podman[85546]: 2026-01-31 04:18:52.774932257 +0000 UTC m=+1.487496198 container died 2dd65c8badca91a03cd03556bba658980f302c72ea9ef470d585ac082b2a8fa8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 30 23:18:52 np0005603435 systemd[1]: libpod-2dd65c8badca91a03cd03556bba658980f302c72ea9ef470d585ac082b2a8fa8.scope: Consumed 1.692s CPU time.
Jan 30 23:18:52 np0005603435 systemd[1]: var-lib-containers-storage-overlay-dc4982114471e21ef00945448e6d4a18ddf75bfe99513c13d4c83e4bdb4c9a12-merged.mount: Deactivated successfully.
Jan 30 23:18:52 np0005603435 podman[85546]: 2026-01-31 04:18:52.826428477 +0000 UTC m=+1.538992448 container remove 2dd65c8badca91a03cd03556bba658980f302c72ea9ef470d585ac082b2a8fa8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 30 23:18:52 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:53 np0005603435 podman[85802]: 2026-01-31 04:18:53.077558245 +0000 UTC m=+0.058870803 container create e15c140694821fe98b8c8333e92b69acb806ef8aa9fc9a8c0a28fcbb90df2569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 30 23:18:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74d04901d1b707a5e8e49adcffc2f843bde9878ee08bcd981e4c15b4a8ecf2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74d04901d1b707a5e8e49adcffc2f843bde9878ee08bcd981e4c15b4a8ecf2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74d04901d1b707a5e8e49adcffc2f843bde9878ee08bcd981e4c15b4a8ecf2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74d04901d1b707a5e8e49adcffc2f843bde9878ee08bcd981e4c15b4a8ecf2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74d04901d1b707a5e8e49adcffc2f843bde9878ee08bcd981e4c15b4a8ecf2c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:53 np0005603435 podman[85802]: 2026-01-31 04:18:53.053152012 +0000 UTC m=+0.034464630 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:53 np0005603435 podman[85802]: 2026-01-31 04:18:53.166333651 +0000 UTC m=+0.147646239 container init e15c140694821fe98b8c8333e92b69acb806ef8aa9fc9a8c0a28fcbb90df2569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 30 23:18:53 np0005603435 podman[85802]: 2026-01-31 04:18:53.181686511 +0000 UTC m=+0.162999049 container start e15c140694821fe98b8c8333e92b69acb806ef8aa9fc9a8c0a28fcbb90df2569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:18:53 np0005603435 bash[85802]: e15c140694821fe98b8c8333e92b69acb806ef8aa9fc9a8c0a28fcbb90df2569
Jan 30 23:18:53 np0005603435 systemd[1]: Started Ceph osd.0 for 95d2f419-0dd0-56f2-a094-353f8c7597ed.
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: set uid:gid to 167:167 (ceph:ceph)
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: pidfile_write: ignore empty --pid-file
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) close
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) close
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) close
Jan 30 23:18:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:18:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:18:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 30 23:18:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 30 23:18:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:18:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:18:53 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 30 23:18:53 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) close
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) close
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456400 /var/lib/ceph/osd/ceph-0/block) close
Jan 30 23:18:53 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:53 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:53 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c456000 /var/lib/ceph/osd/ceph-0/block) close
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: load: jerasure load: lrc 
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 30 23:18:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375c457c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375d0ed800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375d0ed800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375d0ed800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375d0ed800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs mount
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs mount shared_bdev_used = 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: RocksDB version: 7.9.2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Git sha 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: DB SUMMARY
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: DB Session ID:  GRQ1CUAFTJ75M7AQJF7F
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: CURRENT file:  CURRENT
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: IDENTITY file:  IDENTITY
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                         Options.error_if_exists: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.create_if_missing: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                         Options.paranoid_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                                     Options.env: 0x56375c2e7ea0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                                Options.info_log: 0x56375d3388a0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_file_opening_threads: 16
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                              Options.statistics: (nil)
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.use_fsync: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.max_log_file_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                         Options.allow_fallocate: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.use_direct_reads: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.create_missing_column_families: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                              Options.db_log_dir: 
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                                 Options.wal_dir: db.wal
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.advise_random_on_open: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.write_buffer_manager: 0x56375d1e4b40
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                            Options.rate_limiter: (nil)
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.unordered_write: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.row_cache: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                              Options.wal_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.allow_ingest_behind: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.two_write_queues: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.manual_wal_flush: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.wal_compression: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.atomic_flush: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.log_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.allow_data_in_errors: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.db_host_id: __hostname__
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.max_background_jobs: 4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.max_background_compactions: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.max_subcompactions: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.max_open_files: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.bytes_per_sync: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.max_background_flushes: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Compression algorithms supported:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: #011kZSTD supported: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: #011kXpressCompression supported: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: #011kBZip2Compression supported: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: #011kLZ4Compression supported: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: #011kZlibCompression supported: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: #011kSnappyCompression supported: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d338c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56375c2eb8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d338c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56375c2eb8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d338c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56375c2eb8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d338c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56375c2eb8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d338c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56375c2eb8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d338c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56375c2eb8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d338c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56375c2eb8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d338c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56375c2eba30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d338c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56375c2eba30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d338c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56375c2eba30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 831514ed-31bd-4010-976f-658666cefc6c
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833133589497, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833133590813, "job": 1, "event": "recovery_finished"}
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: freelist init
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: freelist _read_cfg
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs umount
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375d0ed800 /var/lib/ceph/osd/ceph-0/block) close
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375d0ed800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375d0ed800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375d0ed800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bdev(0x56375d0ed800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs mount
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluefs mount shared_bdev_used = 27262976
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: RocksDB version: 7.9.2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Git sha 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: DB SUMMARY
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: DB Session ID:  GRQ1CUAFTJ75M7AQJF7E
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: CURRENT file:  CURRENT
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: IDENTITY file:  IDENTITY
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                         Options.error_if_exists: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.create_if_missing: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                         Options.paranoid_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                                     Options.env: 0x56375c2e7ab0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                                Options.info_log: 0x56375d338960
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_file_opening_threads: 16
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                              Options.statistics: (nil)
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.use_fsync: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.max_log_file_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                         Options.allow_fallocate: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.use_direct_reads: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.create_missing_column_families: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                              Options.db_log_dir: 
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                                 Options.wal_dir: db.wal
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.advise_random_on_open: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.write_buffer_manager: 0x56375d1e5900
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                            Options.rate_limiter: (nil)
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.unordered_write: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.row_cache: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                              Options.wal_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.allow_ingest_behind: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.two_write_queues: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.manual_wal_flush: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.wal_compression: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.atomic_flush: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.log_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.allow_data_in_errors: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.db_host_id: __hostname__
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.max_background_jobs: 4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.max_background_compactions: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.max_subcompactions: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.max_open_files: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.bytes_per_sync: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.max_background_flushes: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Compression algorithms supported:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: 	kZSTD supported: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: 	kXpressCompression supported: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: 	kBZip2Compression supported: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: 	kLZ4Compression supported: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: 	kZlibCompression supported: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: 	kSnappyCompression supported: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d339f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56375c2eb8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d339f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56375c2eb8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d339f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56375c2eb8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d339f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56375c2eb8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d339f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56375c2eb8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d339f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56375c2eb8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d339f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56375c2eb8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d339f80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56375c2eb4b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d339f80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56375c2eb4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56375d339f80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56375c2eb4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 831514ed-31bd-4010-976f-658666cefc6c
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833133666315, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833133672402, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833133, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "831514ed-31bd-4010-976f-658666cefc6c", "db_session_id": "GRQ1CUAFTJ75M7AQJF7E", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833133676266, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833133, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "831514ed-31bd-4010-976f-658666cefc6c", "db_session_id": "GRQ1CUAFTJ75M7AQJF7E", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833133679921, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833133, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "831514ed-31bd-4010-976f-658666cefc6c", "db_session_id": "GRQ1CUAFTJ75M7AQJF7E", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833133681689, "job": 1, "event": "recovery_finished"}
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56375d540000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: DB pointer 0x56375d4f2000
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56375c2eb8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56375c2eb8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: _get_class not permitted to load lua
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: _get_class not permitted to load sdk
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: osd.0 0 load_pgs
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: osd.0 0 load_pgs opened 0 pgs
Jan 30 23:18:53 np0005603435 ceph-osd[85822]: osd.0 0 log_to_monitors true
Jan 30 23:18:53 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0[85818]: 2026-01-31T04:18:53.709+0000 7f268465e8c0 -1 osd.0 0 log_to_monitors true
Jan 30 23:18:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 30 23:18:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2431324675,v1:192.168.122.100:6803/2431324675]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 30 23:18:53 np0005603435 podman[86364]: 2026-01-31 04:18:53.877487264 +0000 UTC m=+0.057341238 container create 47b3c2ffcd44bdc3967a562f1b11df56e9f73efc300d4de3a730fe90063c3a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_colden, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:53 np0005603435 systemd[1]: Started libpod-conmon-47b3c2ffcd44bdc3967a562f1b11df56e9f73efc300d4de3a730fe90063c3a51.scope.
Jan 30 23:18:53 np0005603435 podman[86364]: 2026-01-31 04:18:53.856313687 +0000 UTC m=+0.036167661 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:53 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:53 np0005603435 podman[86364]: 2026-01-31 04:18:53.976476829 +0000 UTC m=+0.156330863 container init 47b3c2ffcd44bdc3967a562f1b11df56e9f73efc300d4de3a730fe90063c3a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_colden, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:53 np0005603435 podman[86364]: 2026-01-31 04:18:53.986561956 +0000 UTC m=+0.166415950 container start 47b3c2ffcd44bdc3967a562f1b11df56e9f73efc300d4de3a730fe90063c3a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_colden, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:18:53 np0005603435 eager_colden[86380]: 167 167
Jan 30 23:18:53 np0005603435 systemd[1]: libpod-47b3c2ffcd44bdc3967a562f1b11df56e9f73efc300d4de3a730fe90063c3a51.scope: Deactivated successfully.
Jan 30 23:18:53 np0005603435 podman[86364]: 2026-01-31 04:18:53.996209123 +0000 UTC m=+0.176063167 container attach 47b3c2ffcd44bdc3967a562f1b11df56e9f73efc300d4de3a730fe90063c3a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_colden, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:18:53 np0005603435 podman[86364]: 2026-01-31 04:18:53.998401054 +0000 UTC m=+0.178255038 container died 47b3c2ffcd44bdc3967a562f1b11df56e9f73efc300d4de3a730fe90063c3a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_colden, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:54 np0005603435 systemd[1]: var-lib-containers-storage-overlay-23e56deec0fc150a090d4dc5bb9fc1c11f8c3659e5f75665383af236c5f9f5a4-merged.mount: Deactivated successfully.
Jan 30 23:18:54 np0005603435 podman[86364]: 2026-01-31 04:18:54.05528783 +0000 UTC m=+0.235141824 container remove 47b3c2ffcd44bdc3967a562f1b11df56e9f73efc300d4de3a730fe90063c3a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_colden, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 30 23:18:54 np0005603435 systemd[1]: libpod-conmon-47b3c2ffcd44bdc3967a562f1b11df56e9f73efc300d4de3a730fe90063c3a51.scope: Deactivated successfully.
Jan 30 23:18:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 30 23:18:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 30 23:18:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2431324675,v1:192.168.122.100:6803/2431324675]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 30 23:18:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Jan 30 23:18:54 np0005603435 podman[86412]: 2026-01-31 04:18:54.359986817 +0000 UTC m=+0.065216233 container create 3a4c424094d2846074b02a89de981b1fafcf29109daece67b9d00c92d60aa191 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate-test, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 30 23:18:54 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Jan 30 23:18:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 30 23:18:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2431324675,v1:192.168.122.100:6803/2431324675]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 30 23:18:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 30 23:18:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 30 23:18:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 30 23:18:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:18:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:18:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:18:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:18:54 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 30 23:18:54 np0005603435 ceph-mon[75307]: Deploying daemon osd.1 on compute-0
Jan 30 23:18:54 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:18:54 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:18:54 np0005603435 ceph-mon[75307]: from='osd.0 [v2:192.168.122.100:6802/2431324675,v1:192.168.122.100:6803/2431324675]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 30 23:18:54 np0005603435 systemd[1]: Started libpod-conmon-3a4c424094d2846074b02a89de981b1fafcf29109daece67b9d00c92d60aa191.scope.
Jan 30 23:18:54 np0005603435 podman[86412]: 2026-01-31 04:18:54.332776748 +0000 UTC m=+0.038006244 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:54 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:54 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c65ac1568a27c2ca690351b1954152f92745a158b7af59c8d145f006f08a6f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:54 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c65ac1568a27c2ca690351b1954152f92745a158b7af59c8d145f006f08a6f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:54 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c65ac1568a27c2ca690351b1954152f92745a158b7af59c8d145f006f08a6f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:54 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c65ac1568a27c2ca690351b1954152f92745a158b7af59c8d145f006f08a6f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:54 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c65ac1568a27c2ca690351b1954152f92745a158b7af59c8d145f006f08a6f6/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:54 np0005603435 podman[86412]: 2026-01-31 04:18:54.478601663 +0000 UTC m=+0.183831159 container init 3a4c424094d2846074b02a89de981b1fafcf29109daece67b9d00c92d60aa191 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:18:54 np0005603435 podman[86412]: 2026-01-31 04:18:54.495212563 +0000 UTC m=+0.200442009 container start 3a4c424094d2846074b02a89de981b1fafcf29109daece67b9d00c92d60aa191 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 30 23:18:54 np0005603435 podman[86412]: 2026-01-31 04:18:54.501320817 +0000 UTC m=+0.206550263 container attach 3a4c424094d2846074b02a89de981b1fafcf29109daece67b9d00c92d60aa191 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate-test, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 30 23:18:54 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate-test[86429]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 30 23:18:54 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate-test[86429]:                            [--no-systemd] [--no-tmpfs]
Jan 30 23:18:54 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate-test[86429]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 30 23:18:54 np0005603435 systemd[1]: libpod-3a4c424094d2846074b02a89de981b1fafcf29109daece67b9d00c92d60aa191.scope: Deactivated successfully.
Jan 30 23:18:54 np0005603435 podman[86412]: 2026-01-31 04:18:54.696524542 +0000 UTC m=+0.401753998 container died 3a4c424094d2846074b02a89de981b1fafcf29109daece67b9d00c92d60aa191 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate-test, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 30 23:18:54 np0005603435 systemd[1]: var-lib-containers-storage-overlay-4c65ac1568a27c2ca690351b1954152f92745a158b7af59c8d145f006f08a6f6-merged.mount: Deactivated successfully.
Jan 30 23:18:54 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 30 23:18:54 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 30 23:18:54 np0005603435 podman[86412]: 2026-01-31 04:18:54.756447989 +0000 UTC m=+0.461677415 container remove 3a4c424094d2846074b02a89de981b1fafcf29109daece67b9d00c92d60aa191 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:54 np0005603435 systemd[1]: libpod-conmon-3a4c424094d2846074b02a89de981b1fafcf29109daece67b9d00c92d60aa191.scope: Deactivated successfully.
Jan 30 23:18:54 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:55 np0005603435 systemd[1]: Reloading.
Jan 30 23:18:55 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:18:55 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:18:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 30 23:18:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 30 23:18:55 np0005603435 systemd[1]: Reloading.
Jan 30 23:18:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2431324675,v1:192.168.122.100:6803/2431324675]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 30 23:18:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Jan 30 23:18:55 np0005603435 ceph-osd[85822]: osd.0 0 done with init, starting boot process
Jan 30 23:18:55 np0005603435 ceph-osd[85822]: osd.0 0 start_boot
Jan 30 23:18:55 np0005603435 ceph-osd[85822]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 30 23:18:55 np0005603435 ceph-osd[85822]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 30 23:18:55 np0005603435 ceph-osd[85822]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 30 23:18:55 np0005603435 ceph-osd[85822]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 30 23:18:55 np0005603435 ceph-osd[85822]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 30 23:18:55 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Jan 30 23:18:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 30 23:18:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 30 23:18:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:18:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:18:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:18:55 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 30 23:18:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:18:55 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:18:55 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:18:55 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2431324675; not ready for session (expect reconnect)
Jan 30 23:18:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 30 23:18:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 30 23:18:55 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 30 23:18:55 np0005603435 ceph-mon[75307]: from='osd.0 [v2:192.168.122.100:6802/2431324675,v1:192.168.122.100:6803/2431324675]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 30 23:18:55 np0005603435 ceph-mon[75307]: from='osd.0 [v2:192.168.122.100:6802/2431324675,v1:192.168.122.100:6803/2431324675]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 30 23:18:55 np0005603435 ceph-mon[75307]: from='osd.0 [v2:192.168.122.100:6802/2431324675,v1:192.168.122.100:6803/2431324675]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 30 23:18:55 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:18:55 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:18:55 np0005603435 systemd[1]: Starting Ceph osd.1 for 95d2f419-0dd0-56f2-a094-353f8c7597ed...
Jan 30 23:18:55 np0005603435 podman[86587]: 2026-01-31 04:18:55.905360504 +0000 UTC m=+0.073263662 container create cd47b070f610017a90f3f4ae683026ca82b44e4eb2b222642d2803dc8a4e12a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 30 23:18:55 np0005603435 podman[86587]: 2026-01-31 04:18:55.858983155 +0000 UTC m=+0.026886353 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:56 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:56 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244ccfabe609434cb7cdf1b77d439e44194e652d3b629c62b7ab55c5ec8f797a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:56 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244ccfabe609434cb7cdf1b77d439e44194e652d3b629c62b7ab55c5ec8f797a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:56 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244ccfabe609434cb7cdf1b77d439e44194e652d3b629c62b7ab55c5ec8f797a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:56 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244ccfabe609434cb7cdf1b77d439e44194e652d3b629c62b7ab55c5ec8f797a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:56 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244ccfabe609434cb7cdf1b77d439e44194e652d3b629c62b7ab55c5ec8f797a/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:56 np0005603435 podman[86587]: 2026-01-31 04:18:56.080045487 +0000 UTC m=+0.247948725 container init cd47b070f610017a90f3f4ae683026ca82b44e4eb2b222642d2803dc8a4e12a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 30 23:18:56 np0005603435 podman[86587]: 2026-01-31 04:18:56.092174672 +0000 UTC m=+0.260077820 container start cd47b070f610017a90f3f4ae683026ca82b44e4eb2b222642d2803dc8a4e12a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:56 np0005603435 podman[86587]: 2026-01-31 04:18:56.124548582 +0000 UTC m=+0.292451770 container attach cd47b070f610017a90f3f4ae683026ca82b44e4eb2b222642d2803dc8a4e12a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:18:56 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate[86600]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:56 np0005603435 bash[86587]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:56 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate[86600]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:56 np0005603435 bash[86587]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:56 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2431324675; not ready for session (expect reconnect)
Jan 30 23:18:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 30 23:18:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 30 23:18:56 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 30 23:18:56 np0005603435 lvm[86687]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:18:56 np0005603435 lvm[86687]: VG ceph_vg0 finished
Jan 30 23:18:56 np0005603435 lvm[86688]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:18:56 np0005603435 lvm[86688]: VG ceph_vg1 finished
Jan 30 23:18:56 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:56 np0005603435 lvm[86690]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:18:56 np0005603435 lvm[86690]: VG ceph_vg2 finished
Jan 30 23:18:57 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate[86600]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 30 23:18:57 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate[86600]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:57 np0005603435 bash[86587]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 30 23:18:57 np0005603435 bash[86587]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:57 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate[86600]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:57 np0005603435 bash[86587]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:18:57 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate[86600]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 30 23:18:57 np0005603435 bash[86587]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 30 23:18:57 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate[86600]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 30 23:18:57 np0005603435 bash[86587]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 30 23:18:57 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate[86600]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:57 np0005603435 bash[86587]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:57 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate[86600]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:57 np0005603435 bash[86587]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:57 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate[86600]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 30 23:18:57 np0005603435 bash[86587]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 30 23:18:57 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate[86600]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 30 23:18:57 np0005603435 bash[86587]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 30 23:18:57 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate[86600]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 30 23:18:57 np0005603435 bash[86587]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 30 23:18:57 np0005603435 systemd[1]: libpod-cd47b070f610017a90f3f4ae683026ca82b44e4eb2b222642d2803dc8a4e12a2.scope: Deactivated successfully.
Jan 30 23:18:57 np0005603435 podman[86587]: 2026-01-31 04:18:57.284829355 +0000 UTC m=+1.452732533 container died cd47b070f610017a90f3f4ae683026ca82b44e4eb2b222642d2803dc8a4e12a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:57 np0005603435 systemd[1]: libpod-cd47b070f610017a90f3f4ae683026ca82b44e4eb2b222642d2803dc8a4e12a2.scope: Consumed 1.487s CPU time.
Jan 30 23:18:57 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2431324675; not ready for session (expect reconnect)
Jan 30 23:18:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 30 23:18:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 30 23:18:57 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 30 23:18:57 np0005603435 systemd[1]: var-lib-containers-storage-overlay-244ccfabe609434cb7cdf1b77d439e44194e652d3b629c62b7ab55c5ec8f797a-merged.mount: Deactivated successfully.
Jan 30 23:18:57 np0005603435 podman[86587]: 2026-01-31 04:18:57.493859845 +0000 UTC m=+1.661763003 container remove cd47b070f610017a90f3f4ae683026ca82b44e4eb2b222642d2803dc8a4e12a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:18:57 np0005603435 podman[86854]: 2026-01-31 04:18:57.776120655 +0000 UTC m=+0.077252136 container create de3d845254e3774a2800d153a99f1aa8eb20d1cede5bc5b8324710ea4baabe3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 30 23:18:57 np0005603435 podman[86854]: 2026-01-31 04:18:57.736093165 +0000 UTC m=+0.037224636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5957b47d57731b7bf8f8054b17acfb9f50fdd927d58236f05cf1784a981959a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5957b47d57731b7bf8f8054b17acfb9f50fdd927d58236f05cf1784a981959a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5957b47d57731b7bf8f8054b17acfb9f50fdd927d58236f05cf1784a981959a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5957b47d57731b7bf8f8054b17acfb9f50fdd927d58236f05cf1784a981959a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5957b47d57731b7bf8f8054b17acfb9f50fdd927d58236f05cf1784a981959a3/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 30 23:18:57 np0005603435 podman[86854]: 2026-01-31 04:18:57.898022178 +0000 UTC m=+0.199153659 container init de3d845254e3774a2800d153a99f1aa8eb20d1cede5bc5b8324710ea4baabe3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 30 23:18:57 np0005603435 podman[86854]: 2026-01-31 04:18:57.903505397 +0000 UTC m=+0.204636848 container start de3d845254e3774a2800d153a99f1aa8eb20d1cede5bc5b8324710ea4baabe3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:18:57 np0005603435 ceph-osd[86873]: set uid:gid to 167:167 (ceph:ceph)
Jan 30 23:18:57 np0005603435 ceph-osd[86873]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 30 23:18:57 np0005603435 ceph-osd[86873]: pidfile_write: ignore empty --pid-file
Jan 30 23:18:57 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:57 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 30 23:18:57 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:57 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:57 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) close
Jan 30 23:18:57 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:57 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 30 23:18:57 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:57 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:57 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) close
Jan 30 23:18:57 np0005603435 bash[86854]: de3d845254e3774a2800d153a99f1aa8eb20d1cede5bc5b8324710ea4baabe3b
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) close
Jan 30 23:18:58 np0005603435 systemd[1]: Started Ceph osd.1 for 95d2f419-0dd0-56f2-a094-353f8c7597ed.
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) close
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) close
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238400 /var/lib/ceph/osd/ceph-1/block) close
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819238000 /var/lib/ceph/osd/ceph-1/block) close
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: load: jerasure load: lrc 
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819239c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819ecf800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819ecf800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819ecf800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819ecf800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs mount
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs mount shared_bdev_used = 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: RocksDB version: 7.9.2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Git sha 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: DB SUMMARY
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: DB Session ID:  QBDGL7D2EP1J9WAW63OU
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: CURRENT file:  CURRENT
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: IDENTITY file:  IDENTITY
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                         Options.error_if_exists: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.create_if_missing: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                         Options.paranoid_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                                     Options.env: 0x55b8190c9ea0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                                Options.info_log: 0x55b81a11c8a0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_file_opening_threads: 16
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                              Options.statistics: (nil)
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.use_fsync: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.max_log_file_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                         Options.allow_fallocate: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.use_direct_reads: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.create_missing_column_families: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                              Options.db_log_dir: 
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                                 Options.wal_dir: db.wal
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.advise_random_on_open: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.write_buffer_manager: 0x55b81912eb40
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                            Options.rate_limiter: (nil)
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.unordered_write: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.row_cache: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                              Options.wal_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.allow_ingest_behind: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.two_write_queues: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.manual_wal_flush: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.wal_compression: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.atomic_flush: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.log_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.allow_data_in_errors: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.db_host_id: __hostname__
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.max_background_jobs: 4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.max_background_compactions: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.max_subcompactions: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.max_open_files: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.bytes_per_sync: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.max_background_flushes: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Compression algorithms supported:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: 	kZSTD supported: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: 	kXpressCompression supported: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: 	kBZip2Compression supported: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: 	kLZ4Compression supported: 1
Jan 30 23:18:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: 	kZlibCompression supported: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: 	kSnappyCompression supported: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b8190cd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b8190cd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b8190cd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b8190cd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b8190cd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11cc60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b8190cd8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11cc60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b8190cd8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11cc80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b8190cda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11cc80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b8190cda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11cc80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b8190cda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b848c7c1-68a9-4b0c-93e6-51a1b0930306
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833138280557, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833138282372, "job": 1, "event": "recovery_finished"}
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: freelist init
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: freelist _read_cfg
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs umount
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819ecf800 /var/lib/ceph/osd/ceph-1/block) close
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819ecf800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819ecf800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819ecf800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bdev(0x55b819ecf800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs mount
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluefs mount shared_bdev_used = 27262976
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: RocksDB version: 7.9.2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Git sha 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: DB SUMMARY
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: DB Session ID:  QBDGL7D2EP1J9WAW63OV
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: CURRENT file:  CURRENT
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: IDENTITY file:  IDENTITY
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                         Options.error_if_exists: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.create_if_missing: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                         Options.paranoid_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                                     Options.env: 0x55b8190c9ce0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                                Options.info_log: 0x55b81a1f32a0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_file_opening_threads: 16
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                              Options.statistics: (nil)
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.use_fsync: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.max_log_file_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                         Options.allow_fallocate: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.use_direct_reads: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.create_missing_column_families: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                              Options.db_log_dir: 
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                                 Options.wal_dir: db.wal
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.advise_random_on_open: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.write_buffer_manager: 0x55b81912f900
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                            Options.rate_limiter: (nil)
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.unordered_write: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.row_cache: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                              Options.wal_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.allow_ingest_behind: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.two_write_queues: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.manual_wal_flush: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.wal_compression: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.atomic_flush: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.log_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.allow_data_in_errors: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.db_host_id: __hostname__
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.max_background_jobs: 4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.max_background_compactions: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.max_subcompactions: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.max_open_files: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.bytes_per_sync: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.max_background_flushes: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Compression algorithms supported:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: #011kZSTD supported: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: #011kXpressCompression supported: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: #011kBZip2Compression supported: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: #011kLZ4Compression supported: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: #011kZlibCompression supported: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: #011kSnappyCompression supported: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11dce0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b8190cda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11dce0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b8190cda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11dce0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b8190cda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11dce0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b8190cda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11dce0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b8190cda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11dce0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b8190cda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11dce0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b8190cda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11dee0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b8190cd4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11dee0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b8190cd4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:           Options.merge_operator: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b81a11dee0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b8190cd4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.compression: LZ4
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.num_levels: 7
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b848c7c1-68a9-4b0c-93e6-51a1b0930306
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833138351930, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 30 23:18:58 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2431324675; not ready for session (expect reconnect)
Jan 30 23:18:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 30 23:18:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 30 23:18:58 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 30 23:18:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833138641401, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833138, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b848c7c1-68a9-4b0c-93e6-51a1b0930306", "db_session_id": "QBDGL7D2EP1J9WAW63OV", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:18:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:18:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833138738671, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833138, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b848c7c1-68a9-4b0c-93e6-51a1b0930306", "db_session_id": "QBDGL7D2EP1J9WAW63OV", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:18:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 30 23:18:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 30 23:18:58 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833138846347, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833138, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b848c7c1-68a9-4b0c-93e6-51a1b0930306", "db_session_id": "QBDGL7D2EP1J9WAW63OV", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833138870730, "job": 1, "event": "recovery_finished"}
Jan 30 23:18:58 np0005603435 ceph-osd[86873]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 30 23:18:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:18:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:18:58 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Jan 30 23:18:58 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Jan 30 23:18:58 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b81a323c00
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: rocksdb: DB pointer 0x55b81a2d6000
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.7 total, 0.7 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.289       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.289       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.289       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.29              0.00         1    0.289       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.7 total, 0.7 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b8190cda30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.7 total, 0.7 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b8190cda30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.7 total, 0.7 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b8190cda30#2 capacity: 460.80 MB usag
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: _get_class not permitted to load lua
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: _get_class not permitted to load sdk
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: osd.1 0 load_pgs
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: osd.1 0 load_pgs opened 0 pgs
Jan 30 23:18:59 np0005603435 ceph-osd[86873]: osd.1 0 log_to_monitors true
Jan 30 23:18:59 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1[86869]: 2026-01-31T04:18:59.037+0000 7f6de489f8c0 -1 osd.1 0 log_to_monitors true
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3482997597,v1:192.168.122.100:6807/3482997597]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 30 23:18:59 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2431324675; not ready for session (expect reconnect)
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 30 23:18:59 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 30 23:18:59 np0005603435 podman[87413]: 2026-01-31 04:18:59.441332316 +0000 UTC m=+0.072451712 container create ae9bdcded885055b700ab96069890ed8b37b296be760077db699261fb537c5bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:18:59 np0005603435 podman[87413]: 2026-01-31 04:18:59.410863051 +0000 UTC m=+0.041982497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:18:59 np0005603435 systemd[1]: Started libpod-conmon-ae9bdcded885055b700ab96069890ed8b37b296be760077db699261fb537c5bc.scope.
Jan 30 23:18:59 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:18:59 np0005603435 podman[87413]: 2026-01-31 04:18:59.587481499 +0000 UTC m=+0.218600875 container init ae9bdcded885055b700ab96069890ed8b37b296be760077db699261fb537c5bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 30 23:18:59 np0005603435 podman[87413]: 2026-01-31 04:18:59.594877773 +0000 UTC m=+0.225997159 container start ae9bdcded885055b700ab96069890ed8b37b296be760077db699261fb537c5bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:18:59 np0005603435 focused_yalow[87429]: 167 167
Jan 30 23:18:59 np0005603435 systemd[1]: libpod-ae9bdcded885055b700ab96069890ed8b37b296be760077db699261fb537c5bc.scope: Deactivated successfully.
Jan 30 23:18:59 np0005603435 conmon[87429]: conmon ae9bdcded885055b700a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae9bdcded885055b700ab96069890ed8b37b296be760077db699261fb537c5bc.scope/container/memory.events
Jan 30 23:18:59 np0005603435 podman[87413]: 2026-01-31 04:18:59.611051003 +0000 UTC m=+0.242170359 container attach ae9bdcded885055b700ab96069890ed8b37b296be760077db699261fb537c5bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 30 23:18:59 np0005603435 podman[87413]: 2026-01-31 04:18:59.611947064 +0000 UTC m=+0.243066450 container died ae9bdcded885055b700ab96069890ed8b37b296be760077db699261fb537c5bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 30 23:18:59 np0005603435 systemd[1]: var-lib-containers-storage-overlay-bc5fda2d914afea272a0e31cb540ea398dce1f0abef615005253ffe7a35784dd-merged.mount: Deactivated successfully.
Jan 30 23:18:59 np0005603435 podman[87413]: 2026-01-31 04:18:59.754750288 +0000 UTC m=+0.385869644 container remove ae9bdcded885055b700ab96069890ed8b37b296be760077db699261fb537c5bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 30 23:18:59 np0005603435 systemd[1]: libpod-conmon-ae9bdcded885055b700ab96069890ed8b37b296be760077db699261fb537c5bc.scope: Deactivated successfully.
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: Deploying daemon osd.2 on compute-0
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: from='osd.1 [v2:192.168.122.100:6806/3482997597,v1:192.168.122.100:6807/3482997597]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3482997597,v1:192.168.122.100:6807/3482997597]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3482997597,v1:192.168.122.100:6807/3482997597]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:18:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:18:59 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 30 23:18:59 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:18:59 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:00 np0005603435 podman[87461]: 2026-01-31 04:19:00.032990523 +0000 UTC m=+0.043203686 container create 1b3e140ecf4895bc1410fcf36573bc91f5c1958849e1adb63f3e6a2051217dcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate-test, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:00 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 30 23:19:00 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 30 23:19:00 np0005603435 systemd[1]: Started libpod-conmon-1b3e140ecf4895bc1410fcf36573bc91f5c1958849e1adb63f3e6a2051217dcc.scope.
Jan 30 23:19:00 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9220526625dc44c9b3ad71ca75b0a86c9c90ca87cd2f6be5078c8bf9737414b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9220526625dc44c9b3ad71ca75b0a86c9c90ca87cd2f6be5078c8bf9737414b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9220526625dc44c9b3ad71ca75b0a86c9c90ca87cd2f6be5078c8bf9737414b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9220526625dc44c9b3ad71ca75b0a86c9c90ca87cd2f6be5078c8bf9737414b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9220526625dc44c9b3ad71ca75b0a86c9c90ca87cd2f6be5078c8bf9737414b/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:00 np0005603435 podman[87461]: 2026-01-31 04:19:00.009939082 +0000 UTC m=+0.020152295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:00 np0005603435 podman[87461]: 2026-01-31 04:19:00.139327501 +0000 UTC m=+0.149540664 container init 1b3e140ecf4895bc1410fcf36573bc91f5c1958849e1adb63f3e6a2051217dcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:00 np0005603435 podman[87461]: 2026-01-31 04:19:00.14654594 +0000 UTC m=+0.156759103 container start 1b3e140ecf4895bc1410fcf36573bc91f5c1958849e1adb63f3e6a2051217dcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate-test, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:00 np0005603435 podman[87461]: 2026-01-31 04:19:00.155143942 +0000 UTC m=+0.165357105 container attach 1b3e140ecf4895bc1410fcf36573bc91f5c1958849e1adb63f3e6a2051217dcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate-test, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 30 23:19:00 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate-test[87478]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 30 23:19:00 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate-test[87478]:                            [--no-systemd] [--no-tmpfs]
Jan 30 23:19:00 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate-test[87478]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 30 23:19:00 np0005603435 systemd[1]: libpod-1b3e140ecf4895bc1410fcf36573bc91f5c1958849e1adb63f3e6a2051217dcc.scope: Deactivated successfully.
Jan 30 23:19:00 np0005603435 podman[87461]: 2026-01-31 04:19:00.339150714 +0000 UTC m=+0.349363897 container died 1b3e140ecf4895bc1410fcf36573bc91f5c1958849e1adb63f3e6a2051217dcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate-test, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:19:00 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2431324675; not ready for session (expect reconnect)
Jan 30 23:19:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 30 23:19:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 30 23:19:00 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 30 23:19:00 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d9220526625dc44c9b3ad71ca75b0a86c9c90ca87cd2f6be5078c8bf9737414b-merged.mount: Deactivated successfully.
Jan 30 23:19:00 np0005603435 podman[87461]: 2026-01-31 04:19:00.473758476 +0000 UTC m=+0.483971669 container remove 1b3e140ecf4895bc1410fcf36573bc91f5c1958849e1adb63f3e6a2051217dcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 30 23:19:00 np0005603435 ceph-osd[85822]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 27.061 iops: 6927.602 elapsed_sec: 0.433
Jan 30 23:19:00 np0005603435 ceph-osd[85822]: log_channel(cluster) log [WRN] : OSD bench result of 6927.601681 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 30 23:19:00 np0005603435 ceph-osd[85822]: osd.0 0 waiting for initial osdmap
Jan 30 23:19:00 np0005603435 systemd[1]: libpod-conmon-1b3e140ecf4895bc1410fcf36573bc91f5c1958849e1adb63f3e6a2051217dcc.scope: Deactivated successfully.
Jan 30 23:19:00 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0[85818]: 2026-01-31T04:19:00.479+0000 7f26805e0640 -1 osd.0 0 waiting for initial osdmap
Jan 30 23:19:00 np0005603435 ceph-osd[85822]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 30 23:19:00 np0005603435 ceph-osd[85822]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 30 23:19:00 np0005603435 ceph-osd[85822]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 30 23:19:00 np0005603435 ceph-osd[85822]: osd.0 9 check_osdmap_features require_osd_release unknown -> tentacle
Jan 30 23:19:00 np0005603435 ceph-osd[85822]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 30 23:19:00 np0005603435 ceph-osd[85822]: osd.0 9 set_numa_affinity not setting numa affinity
Jan 30 23:19:00 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-0[85818]: 2026-01-31T04:19:00.517+0000 7f267b3e5640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 30 23:19:00 np0005603435 ceph-osd[85822]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 30 23:19:00 np0005603435 systemd[1]: Reloading.
Jan 30 23:19:00 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:19:00 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:19:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 30 23:19:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 30 23:19:00 np0005603435 ceph-mgr[75599]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 30 23:19:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3482997597,v1:192.168.122.100:6807/3482997597]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 30 23:19:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Jan 30 23:19:01 np0005603435 ceph-osd[86873]: osd.1 0 done with init, starting boot process
Jan 30 23:19:01 np0005603435 ceph-osd[86873]: osd.1 0 start_boot
Jan 30 23:19:01 np0005603435 ceph-osd[86873]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 30 23:19:01 np0005603435 ceph-osd[86873]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 30 23:19:01 np0005603435 ceph-osd[86873]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 30 23:19:01 np0005603435 ceph-osd[86873]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 30 23:19:01 np0005603435 ceph-osd[86873]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 30 23:19:01 np0005603435 ceph-mon[75307]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2431324675,v1:192.168.122.100:6803/2431324675] boot
Jan 30 23:19:01 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Jan 30 23:19:01 np0005603435 systemd[1]: Reloading.
Jan 30 23:19:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 30 23:19:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 30 23:19:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:01 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:01 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:01 np0005603435 ceph-osd[85822]: osd.0 10 state: booting -> active
Jan 30 23:19:01 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3482997597; not ready for session (expect reconnect)
Jan 30 23:19:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:01 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:01 np0005603435 ceph-mon[75307]: from='osd.1 [v2:192.168.122.100:6806/3482997597,v1:192.168.122.100:6807/3482997597]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 30 23:19:01 np0005603435 ceph-mon[75307]: from='osd.1 [v2:192.168.122.100:6806/3482997597,v1:192.168.122.100:6807/3482997597]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 30 23:19:01 np0005603435 ceph-mon[75307]: OSD bench result of 6927.601681 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 30 23:19:01 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:19:01 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:19:01 np0005603435 systemd[1]: Starting Ceph osd.2 for 95d2f419-0dd0-56f2-a094-353f8c7597ed...
Jan 30 23:19:01 np0005603435 podman[87638]: 2026-01-31 04:19:01.596868796 +0000 UTC m=+0.060283647 container create f307e1d205b7c72aa70a7c10f1129a2afc4714de099c71865435db8c118feff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:01 np0005603435 podman[87638]: 2026-01-31 04:19:01.555434333 +0000 UTC m=+0.018849274 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:01 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a4f9f1d95ec022e6c262373ae45ddbdd1215dbb9929cb83a3a1a47aaf5d101/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a4f9f1d95ec022e6c262373ae45ddbdd1215dbb9929cb83a3a1a47aaf5d101/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a4f9f1d95ec022e6c262373ae45ddbdd1215dbb9929cb83a3a1a47aaf5d101/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a4f9f1d95ec022e6c262373ae45ddbdd1215dbb9929cb83a3a1a47aaf5d101/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a4f9f1d95ec022e6c262373ae45ddbdd1215dbb9929cb83a3a1a47aaf5d101/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:01 np0005603435 podman[87638]: 2026-01-31 04:19:01.734400826 +0000 UTC m=+0.197815717 container init f307e1d205b7c72aa70a7c10f1129a2afc4714de099c71865435db8c118feff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 30 23:19:01 np0005603435 podman[87638]: 2026-01-31 04:19:01.742958957 +0000 UTC m=+0.206373818 container start f307e1d205b7c72aa70a7c10f1129a2afc4714de099c71865435db8c118feff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:01 np0005603435 podman[87638]: 2026-01-31 04:19:01.772799358 +0000 UTC m=+0.236214219 container attach f307e1d205b7c72aa70a7c10f1129a2afc4714de099c71865435db8c118feff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:19:01 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate[87653]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:19:01 np0005603435 bash[87638]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:19:01 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate[87653]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:19:01 np0005603435 bash[87638]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:19:02 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3482997597; not ready for session (expect reconnect)
Jan 30 23:19:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:02 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 30 23:19:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 30 23:19:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 30 23:19:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Jan 30 23:19:02 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Jan 30 23:19:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:02 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:02 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:02 np0005603435 ceph-mon[75307]: from='osd.1 [v2:192.168.122.100:6806/3482997597,v1:192.168.122.100:6807/3482997597]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 30 23:19:02 np0005603435 ceph-mon[75307]: osd.0 [v2:192.168.122.100:6802/2431324675,v1:192.168.122.100:6803/2431324675] boot
Jan 30 23:19:02 np0005603435 lvm[87739]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:19:02 np0005603435 lvm[87739]: VG ceph_vg1 finished
Jan 30 23:19:02 np0005603435 lvm[87738]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:19:02 np0005603435 lvm[87738]: VG ceph_vg0 finished
Jan 30 23:19:02 np0005603435 lvm[87741]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:19:02 np0005603435 lvm[87741]: VG ceph_vg2 finished
Jan 30 23:19:02 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate[87653]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 30 23:19:02 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate[87653]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:19:02 np0005603435 bash[87638]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 30 23:19:02 np0005603435 bash[87638]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:19:02 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate[87653]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:19:02 np0005603435 bash[87638]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 30 23:19:02 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate[87653]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 30 23:19:02 np0005603435 bash[87638]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 30 23:19:02 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate[87653]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 30 23:19:02 np0005603435 bash[87638]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 30 23:19:02 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate[87653]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:02 np0005603435 bash[87638]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:02 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate[87653]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:02 np0005603435 bash[87638]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:02 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate[87653]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 30 23:19:02 np0005603435 bash[87638]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 30 23:19:02 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate[87653]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 30 23:19:02 np0005603435 bash[87638]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 30 23:19:02 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate[87653]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 30 23:19:02 np0005603435 bash[87638]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 30 23:19:02 np0005603435 systemd[1]: libpod-f307e1d205b7c72aa70a7c10f1129a2afc4714de099c71865435db8c118feff6.scope: Deactivated successfully.
Jan 30 23:19:02 np0005603435 systemd[1]: libpod-f307e1d205b7c72aa70a7c10f1129a2afc4714de099c71865435db8c118feff6.scope: Consumed 1.394s CPU time.
Jan 30 23:19:02 np0005603435 podman[87638]: 2026-01-31 04:19:02.852766844 +0000 UTC m=+1.316181705 container died f307e1d205b7c72aa70a7c10f1129a2afc4714de099c71865435db8c118feff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:02 np0005603435 ceph-mgr[75599]: [devicehealth INFO root] creating mgr pool
Jan 30 23:19:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 30 23:19:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 30 23:19:02 np0005603435 systemd[1]: var-lib-containers-storage-overlay-59a4f9f1d95ec022e6c262373ae45ddbdd1215dbb9929cb83a3a1a47aaf5d101-merged.mount: Deactivated successfully.
Jan 30 23:19:03 np0005603435 podman[87638]: 2026-01-31 04:19:03.069519075 +0000 UTC m=+1.532933976 container remove f307e1d205b7c72aa70a7c10f1129a2afc4714de099c71865435db8c118feff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2-activate, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:03 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3482997597; not ready for session (expect reconnect)
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:03 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:03 np0005603435 podman[87901]: 2026-01-31 04:19:03.305556939 +0000 UTC m=+0.075834032 container create 40bfddc06ce8677d1b46af55625bed22b032d60a306c3ba775e8973fbfb85c1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:19:03 np0005603435 podman[87901]: 2026-01-31 04:19:03.257606353 +0000 UTC m=+0.027883526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12ae5a188565f166a055b7047fd5eca58c023a10e331e446fc2a3bbb5ce9968/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12ae5a188565f166a055b7047fd5eca58c023a10e331e446fc2a3bbb5ce9968/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12ae5a188565f166a055b7047fd5eca58c023a10e331e446fc2a3bbb5ce9968/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12ae5a188565f166a055b7047fd5eca58c023a10e331e446fc2a3bbb5ce9968/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12ae5a188565f166a055b7047fd5eca58c023a10e331e446fc2a3bbb5ce9968/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 30 23:19:03 np0005603435 podman[87901]: 2026-01-31 04:19:03.42906651 +0000 UTC m=+0.199343643 container init 40bfddc06ce8677d1b46af55625bed22b032d60a306c3ba775e8973fbfb85c1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 30 23:19:03 np0005603435 podman[87901]: 2026-01-31 04:19:03.435325517 +0000 UTC m=+0.205602610 container start 40bfddc06ce8677d1b46af55625bed22b032d60a306c3ba775e8973fbfb85c1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 30 23:19:03 np0005603435 bash[87901]: 40bfddc06ce8677d1b46af55625bed22b032d60a306c3ba775e8973fbfb85c1c
Jan 30 23:19:03 np0005603435 systemd[1]: Started Ceph osd.2 for 95d2f419-0dd0-56f2-a094-353f8c7597ed.
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:03 np0005603435 ceph-osd[85822]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 30 23:19:03 np0005603435 ceph-osd[85822]: osd.0 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 30 23:19:03 np0005603435 ceph-osd[85822]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 30 23:19:03 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:03 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: set uid:gid to 167:167 (ceph:ceph)
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: pidfile_write: ignore empty --pid-file
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) close
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) close
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) close
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) close
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) close
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392400 /var/lib/ceph/osd/ceph-2/block) close
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117392000 /var/lib/ceph/osd/ceph-2/block) close
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: load: jerasure load: lrc 
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 30 23:19:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561117393c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561118029800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561118029800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561118029800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561118029800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs mount
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs mount shared_bdev_used = 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: RocksDB version: 7.9.2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Git sha 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: DB SUMMARY
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: DB Session ID:  CTQ67AZ8UYAF1Z2WJQER
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: CURRENT file:  CURRENT
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: IDENTITY file:  IDENTITY
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                         Options.error_if_exists: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.create_if_missing: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                         Options.paranoid_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                                     Options.env: 0x561117223ea0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                                Options.info_log: 0x56111827e8a0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_file_opening_threads: 16
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                              Options.statistics: (nil)
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.use_fsync: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.max_log_file_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                         Options.allow_fallocate: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.use_direct_reads: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.create_missing_column_families: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                              Options.db_log_dir: 
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                                 Options.wal_dir: db.wal
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.advise_random_on_open: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.write_buffer_manager: 0x561118122b40
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                            Options.rate_limiter: (nil)
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.unordered_write: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.row_cache: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                              Options.wal_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.allow_ingest_behind: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.two_write_queues: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.manual_wal_flush: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.wal_compression: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.atomic_flush: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.log_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.allow_data_in_errors: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.db_host_id: __hostname__
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.max_background_jobs: 4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.max_background_compactions: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.max_subcompactions: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.max_open_files: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.bytes_per_sync: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.max_background_flushes: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Compression algorithms supported:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: 	kZSTD supported: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: 	kXpressCompression supported: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: 	kBZip2Compression supported: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: 	kLZ4Compression supported: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: 	kZlibCompression supported: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: 	kSnappyCompression supported: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ec60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5611172278d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ec60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5611172278d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ec60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5611172278d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ec60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5611172278d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ec60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5611172278d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ec60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611172278d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ec60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611172278d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ec80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561117227a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ec80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561117227a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ec80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561117227a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1872e4c1-e6db-4832-b78a-d50916d2da6d
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833143876832, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833143878365, "job": 1, "event": "recovery_finished"}
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: freelist init
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: freelist _read_cfg
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs umount
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561118029800 /var/lib/ceph/osd/ceph-2/block) close
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561118029800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561118029800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561118029800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bdev(0x561118029800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs mount
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluefs mount shared_bdev_used = 27262976
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: RocksDB version: 7.9.2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Git sha 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: DB SUMMARY
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: DB Session ID:  CTQ67AZ8UYAF1Z2WJQEQ
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: CURRENT file:  CURRENT
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: IDENTITY file:  IDENTITY
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                         Options.error_if_exists: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.create_if_missing: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                         Options.paranoid_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                                     Options.env: 0x56111844ea80
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                                Options.info_log: 0x56111827ea20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_file_opening_threads: 16
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                              Options.statistics: (nil)
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.use_fsync: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.max_log_file_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                         Options.allow_fallocate: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.use_direct_reads: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.create_missing_column_families: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                              Options.db_log_dir: 
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                                 Options.wal_dir: db.wal
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.advise_random_on_open: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.write_buffer_manager: 0x561118123900
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                            Options.rate_limiter: (nil)
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.unordered_write: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.row_cache: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                              Options.wal_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.allow_ingest_behind: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.two_write_queues: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.manual_wal_flush: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.wal_compression: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.atomic_flush: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.log_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.allow_data_in_errors: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.db_host_id: __hostname__
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.max_background_jobs: 4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.max_background_compactions: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.max_subcompactions: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.max_open_files: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.bytes_per_sync: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.max_background_flushes: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Compression algorithms supported:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     kZSTD supported: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     kXpressCompression supported: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     kBZip2Compression supported: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     kZSTDNotFinalCompression supported: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     kLZ4Compression supported: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     kZlibCompression supported: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     kLZ4HCCompression supported: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     kSnappyCompression supported: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ebc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5611172278d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ebc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611172278d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ebc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611172278d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ebc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611172278d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ebc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611172278d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ebc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611172278d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827ebc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611172278d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827f0c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561117227a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827f0c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561117227a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:           Options.merge_operator: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.compaction_filter_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.sst_partitioner_factory: None
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56111827f0c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561117227a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.write_buffer_size: 16777216
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.max_write_buffer_number: 64
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.compression: LZ4
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.num_levels: 7
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.level: 32767
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.compression_opts.strategy: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                  Options.compression_opts.enabled: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.arena_block_size: 1048576
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.disable_auto_compactions: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.inplace_update_support: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.bloom_locality: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                    Options.max_successive_merges: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.paranoid_file_checks: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.force_consistency_checks: 1
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.report_bg_io_stats: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                               Options.ttl: 2592000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                       Options.enable_blob_files: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                           Options.min_blob_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                          Options.blob_file_size: 268435456
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb:                Options.blob_file_starting_level: 0
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1872e4c1-e6db-4832-b78a-d50916d2da6d
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833143933449, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 30 23:19:03 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 30 23:19:04 np0005603435 ceph-osd[87920]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833144040316, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833143, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1872e4c1-e6db-4832-b78a-d50916d2da6d", "db_session_id": "CTQ67AZ8UYAF1Z2WJQEQ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:19:04 np0005603435 ceph-osd[87920]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833144099923, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833144, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1872e4c1-e6db-4832-b78a-d50916d2da6d", "db_session_id": "CTQ67AZ8UYAF1Z2WJQEQ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:19:04 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3482997597; not ready for session (expect reconnect)
Jan 30 23:19:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:04 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:04 np0005603435 ceph-osd[87920]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833144150507, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833144, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1872e4c1-e6db-4832-b78a-d50916d2da6d", "db_session_id": "CTQ67AZ8UYAF1Z2WJQEQ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:19:04 np0005603435 podman[88400]: 2026-01-31 04:19:04.144106515 +0000 UTC m=+0.027083097 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:04 np0005603435 ceph-osd[87920]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833144257684, "job": 1, "event": "recovery_finished"}
Jan 30 23:19:04 np0005603435 ceph-osd[87920]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 30 23:19:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 30 23:19:04 np0005603435 podman[88400]: 2026-01-31 04:19:04.337364534 +0000 UTC m=+0.220341126 container create b401fb8cf7616537d16daf40bc89b26255a90c9c4ef4ba09f54b21a7e7b6fd6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_nash, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 30 23:19:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 30 23:19:04 np0005603435 systemd[1]: Started libpod-conmon-b401fb8cf7616537d16daf40bc89b26255a90c9c4ef4ba09f54b21a7e7b6fd6f.scope.
Jan 30 23:19:04 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 30 23:19:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Jan 30 23:19:04 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Jan 30 23:19:04 np0005603435 podman[88400]: 2026-01-31 04:19:04.784581769 +0000 UTC m=+0.667558401 container init b401fb8cf7616537d16daf40bc89b26255a90c9c4ef4ba09f54b21a7e7b6fd6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:19:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:04 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:04 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:04 np0005603435 podman[88400]: 2026-01-31 04:19:04.793201101 +0000 UTC m=+0.676177643 container start b401fb8cf7616537d16daf40bc89b26255a90c9c4ef4ba09f54b21a7e7b6fd6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Jan 30 23:19:04 np0005603435 infallible_nash[88416]: 167 167
Jan 30 23:19:04 np0005603435 systemd[1]: libpod-b401fb8cf7616537d16daf40bc89b26255a90c9c4ef4ba09f54b21a7e7b6fd6f.scope: Deactivated successfully.
Jan 30 23:19:04 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 30 23:19:04 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 30 23:19:04 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:04 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:04 np0005603435 podman[88400]: 2026-01-31 04:19:04.911135701 +0000 UTC m=+0.794112323 container attach b401fb8cf7616537d16daf40bc89b26255a90c9c4ef4ba09f54b21a7e7b6fd6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 30 23:19:04 np0005603435 podman[88400]: 2026-01-31 04:19:04.912191276 +0000 UTC m=+0.795167858 container died b401fb8cf7616537d16daf40bc89b26255a90c9c4ef4ba09f54b21a7e7b6fd6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_nash, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561118462000
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: rocksdb: DB pointer 0x561118438000
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1.1 total, 1.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.11              0.00         1    0.106       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.11              0.00         1    0.106       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.11              0.00         1    0.106       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.11              0.00         1    0.106       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.1 total, 1.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5611172278d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.1 total, 1.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5611172278d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.1 total, 1.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5611172278d0#2 capacity: 460.80 MB usag
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: _get_class not permitted to load lua
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: _get_class not permitted to load sdk
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: osd.2 0 load_pgs
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: osd.2 0 load_pgs opened 0 pgs
Jan 30 23:19:05 np0005603435 ceph-osd[87920]: osd.2 0 log_to_monitors true
Jan 30 23:19:05 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2[87916]: 2026-01-31T04:19:05.020+0000 7f57e6cd98c0 -1 osd.2 0 log_to_monitors true
Jan 30 23:19:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 30 23:19:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2641532065,v1:192.168.122.100:6811/2641532065]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 30 23:19:05 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3482997597; not ready for session (expect reconnect)
Jan 30 23:19:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:05 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:05 np0005603435 systemd[1]: var-lib-containers-storage-overlay-62878f8a4305c25a49312116bad3b0dcb4de579b4ba5659bda7796f96af5a67d-merged.mount: Deactivated successfully.
Jan 30 23:19:05 np0005603435 podman[88400]: 2026-01-31 04:19:05.409279502 +0000 UTC m=+1.292256054 container remove b401fb8cf7616537d16daf40bc89b26255a90c9c4ef4ba09f54b21a7e7b6fd6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:19:05 np0005603435 systemd[1]: libpod-conmon-b401fb8cf7616537d16daf40bc89b26255a90c9c4ef4ba09f54b21a7e7b6fd6f.scope: Deactivated successfully.
Jan 30 23:19:05 np0005603435 podman[88473]: 2026-01-31 04:19:05.559980911 +0000 UTC m=+0.025387257 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:05 np0005603435 podman[88473]: 2026-01-31 04:19:05.719279813 +0000 UTC m=+0.184686149 container create 6a0e4a059b9c55907b96568ea88f6fdb94e84cd73f891a38ae061775ab30be8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:06 np0005603435 systemd[1]: Started libpod-conmon-6a0e4a059b9c55907b96568ea88f6fdb94e84cd73f891a38ae061775ab30be8f.scope.
Jan 30 23:19:06 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 30 23:19:06 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 30 23:19:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 30 23:19:06 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96d0f74b5e1cdc873f760776681caceb33bc130665d85934fac23bf12dd23e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:06 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 30 23:19:06 np0005603435 ceph-mon[75307]: from='osd.2 [v2:192.168.122.100:6810/2641532065,v1:192.168.122.100:6811/2641532065]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 30 23:19:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96d0f74b5e1cdc873f760776681caceb33bc130665d85934fac23bf12dd23e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96d0f74b5e1cdc873f760776681caceb33bc130665d85934fac23bf12dd23e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96d0f74b5e1cdc873f760776681caceb33bc130665d85934fac23bf12dd23e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3482997597; not ready for session (expect reconnect)
Jan 30 23:19:06 np0005603435 podman[88473]: 2026-01-31 04:19:06.148180607 +0000 UTC m=+0.613586973 container init 6a0e4a059b9c55907b96568ea88f6fdb94e84cd73f891a38ae061775ab30be8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shannon, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:06 np0005603435 podman[88473]: 2026-01-31 04:19:06.157060006 +0000 UTC m=+0.622466322 container start 6a0e4a059b9c55907b96568ea88f6fdb94e84cd73f891a38ae061775ab30be8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shannon, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:06 np0005603435 podman[88473]: 2026-01-31 04:19:06.216180784 +0000 UTC m=+0.681587150 container attach 6a0e4a059b9c55907b96568ea88f6fdb94e84cd73f891a38ae061775ab30be8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 30 23:19:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2641532065,v1:192.168.122.100:6811/2641532065]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 30 23:19:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e14 e14: 3 total, 1 up, 3 in
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:19:06
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 30 23:19:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 1 up, 3 in
Jan 30 23:19:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 30 23:19:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2641532065,v1:192.168.122.100:6811/2641532065]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 30 23:19:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 30 23:19:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:06 np0005603435 lvm[88566]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:19:06 np0005603435 lvm[88566]: VG ceph_vg0 finished
Jan 30 23:19:06 np0005603435 lvm[88569]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:19:06 np0005603435 lvm[88569]: VG ceph_vg1 finished
Jan 30 23:19:06 np0005603435 lvm[88571]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:19:06 np0005603435 lvm[88571]: VG ceph_vg2 finished
Jan 30 23:19:06 np0005603435 optimistic_shannon[88490]: {}
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 21470642176
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Jan 30 23:19:06 np0005603435 systemd[1]: libpod-6a0e4a059b9c55907b96568ea88f6fdb94e84cd73f891a38ae061775ab30be8f.scope: Deactivated successfully.
Jan 30 23:19:06 np0005603435 podman[88473]: 2026-01-31 04:19:06.907649504 +0000 UTC m=+1.373055830 container died 6a0e4a059b9c55907b96568ea88f6fdb94e84cd73f891a38ae061775ab30be8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:19:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:19:07 np0005603435 ceph-mon[75307]: from='osd.2 [v2:192.168.122.100:6810/2641532065,v1:192.168.122.100:6811/2641532065]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 30 23:19:07 np0005603435 ceph-mon[75307]: from='osd.2 [v2:192.168.122.100:6810/2641532065,v1:192.168.122.100:6811/2641532065]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 30 23:19:07 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3482997597; not ready for session (expect reconnect)
Jan 30 23:19:07 np0005603435 systemd[1]: var-lib-containers-storage-overlay-f96d0f74b5e1cdc873f760776681caceb33bc130665d85934fac23bf12dd23e6-merged.mount: Deactivated successfully.
Jan 30 23:19:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 30 23:19:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:07 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2641532065,v1:192.168.122.100:6811/2641532065]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 30 23:19:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e15 e15: 3 total, 1 up, 3 in
Jan 30 23:19:07 np0005603435 ceph-osd[87920]: osd.2 0 done with init, starting boot process
Jan 30 23:19:07 np0005603435 ceph-osd[87920]: osd.2 0 start_boot
Jan 30 23:19:07 np0005603435 ceph-osd[87920]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 30 23:19:07 np0005603435 ceph-osd[87920]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 30 23:19:07 np0005603435 ceph-osd[87920]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 30 23:19:07 np0005603435 ceph-osd[87920]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 30 23:19:07 np0005603435 ceph-osd[87920]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 30 23:19:07 np0005603435 podman[88473]: 2026-01-31 04:19:07.860133856 +0000 UTC m=+2.325540212 container remove 6a0e4a059b9c55907b96568ea88f6fdb94e84cd73f891a38ae061775ab30be8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:07 np0005603435 systemd[1]: libpod-conmon-6a0e4a059b9c55907b96568ea88f6fdb94e84cd73f891a38ae061775ab30be8f.scope: Deactivated successfully.
Jan 30 23:19:07 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 1 up, 3 in
Jan 30 23:19:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:19:07 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2641532065; not ready for session (expect reconnect)
Jan 30 23:19:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:07 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:07 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:19:08 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3482997597; not ready for session (expect reconnect)
Jan 30 23:19:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 30 23:19:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:08 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:08 np0005603435 python3[88612]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:08 np0005603435 podman[88614]: 2026-01-31 04:19:08.515768306 +0000 UTC m=+0.037216965 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:19:08 np0005603435 podman[88614]: 2026-01-31 04:19:08.759792898 +0000 UTC m=+0.281241467 container create 35d57a5ad936219db761e1e7ec36d674a34d4137ae84bd7ccabdc2f7a255df6a (image=quay.io/ceph/ceph:v20, name=happy_kalam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 30 23:19:08 np0005603435 ceph-mon[75307]: from='osd.2 [v2:192.168.122.100:6810/2641532065,v1:192.168.122.100:6811/2641532065]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 30 23:19:08 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:08 np0005603435 systemd[1]: Started libpod-conmon-35d57a5ad936219db761e1e7ec36d674a34d4137ae84bd7ccabdc2f7a255df6a.scope.
Jan 30 23:19:08 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3242562daea3124f5b84457711303d519c8fe6235d96e93fe77fbf92f1e76adc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3242562daea3124f5b84457711303d519c8fe6235d96e93fe77fbf92f1e76adc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3242562daea3124f5b84457711303d519c8fe6235d96e93fe77fbf92f1e76adc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:08 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2641532065; not ready for session (expect reconnect)
Jan 30 23:19:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:08 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:08 np0005603435 podman[88614]: 2026-01-31 04:19:08.991802107 +0000 UTC m=+0.513250696 container init 35d57a5ad936219db761e1e7ec36d674a34d4137ae84bd7ccabdc2f7a255df6a (image=quay.io/ceph/ceph:v20, name=happy_kalam, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:09 np0005603435 podman[88614]: 2026-01-31 04:19:09.004434174 +0000 UTC m=+0.525882783 container start 35d57a5ad936219db761e1e7ec36d674a34d4137ae84bd7ccabdc2f7a255df6a (image=quay.io/ceph/ceph:v20, name=happy_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:09 np0005603435 podman[88614]: 2026-01-31 04:19:09.074782046 +0000 UTC m=+0.596230625 container attach 35d57a5ad936219db761e1e7ec36d674a34d4137ae84bd7ccabdc2f7a255df6a (image=quay.io/ceph/ceph:v20, name=happy_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:09 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3482997597; not ready for session (expect reconnect)
Jan 30 23:19:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:09 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 30 23:19:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/8708027' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 30 23:19:09 np0005603435 happy_kalam[88704]: 
Jan 30 23:19:09 np0005603435 happy_kalam[88704]: {"fsid":"95d2f419-0dd0-56f2-a094-353f8c7597ed","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":80,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":15,"num_osds":3,"num_up_osds":1,"osd_up_since":1769833140,"num_in_osds":3,"osd_in_since":1769833124,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":447057920,"bytes_avail":21023584256,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2026-01-31T04:17:45:671431+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T04:19:08.261491+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 30 23:19:09 np0005603435 systemd[1]: libpod-35d57a5ad936219db761e1e7ec36d674a34d4137ae84bd7ccabdc2f7a255df6a.scope: Deactivated successfully.
Jan 30 23:19:09 np0005603435 podman[88614]: 2026-01-31 04:19:09.703654017 +0000 UTC m=+1.225102626 container died 35d57a5ad936219db761e1e7ec36d674a34d4137ae84bd7ccabdc2f7a255df6a (image=quay.io/ceph/ceph:v20, name=happy_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:19:09 np0005603435 podman[88772]: 2026-01-31 04:19:09.735471905 +0000 UTC m=+0.463763184 container exec 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 30 23:19:09 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2641532065; not ready for session (expect reconnect)
Jan 30 23:19:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:09 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:10 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3482997597; not ready for session (expect reconnect)
Jan 30 23:19:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:10 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 30 23:19:10 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:10 np0005603435 systemd[1]: var-lib-containers-storage-overlay-3242562daea3124f5b84457711303d519c8fe6235d96e93fe77fbf92f1e76adc-merged.mount: Deactivated successfully.
Jan 30 23:19:10 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2641532065; not ready for session (expect reconnect)
Jan 30 23:19:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:10 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:11 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3482997597; not ready for session (expect reconnect)
Jan 30 23:19:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:11 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:11 np0005603435 podman[88614]: 2026-01-31 04:19:11.304024226 +0000 UTC m=+2.825472795 container remove 35d57a5ad936219db761e1e7ec36d674a34d4137ae84bd7ccabdc2f7a255df6a (image=quay.io/ceph/ceph:v20, name=happy_kalam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:11 np0005603435 podman[88807]: 2026-01-31 04:19:11.436432966 +0000 UTC m=+1.582198673 container exec_died 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:11 np0005603435 podman[88772]: 2026-01-31 04:19:11.556414794 +0000 UTC m=+2.284705993 container exec_died 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:11 np0005603435 python3[88847]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:11 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2641532065; not ready for session (expect reconnect)
Jan 30 23:19:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:11 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:12 np0005603435 podman[88848]: 2026-01-31 04:19:12.024468358 +0000 UTC m=+0.258882762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:12 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3482997597; not ready for session (expect reconnect)
Jan 30 23:19:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:12 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:12 np0005603435 podman[88848]: 2026-01-31 04:19:12.169145316 +0000 UTC m=+0.403559700 container create 8ea49f6f5ea332dbafe144c24a6c9211e082de7c9e3cc5b537e13c8508704469 (image=quay.io/ceph/ceph:v20, name=clever_dijkstra, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 30 23:19:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v39: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Jan 30 23:19:12 np0005603435 systemd[1]: libpod-conmon-35d57a5ad936219db761e1e7ec36d674a34d4137ae84bd7ccabdc2f7a255df6a.scope: Deactivated successfully.
Jan 30 23:19:12 np0005603435 systemd[1]: Started libpod-conmon-8ea49f6f5ea332dbafe144c24a6c9211e082de7c9e3cc5b537e13c8508704469.scope.
Jan 30 23:19:12 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:12 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2400f17c4ddbd37f6519689794b40de21d496c877ef1866633b24f79624faf2e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:12 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2400f17c4ddbd37f6519689794b40de21d496c877ef1866633b24f79624faf2e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:12 np0005603435 podman[88848]: 2026-01-31 04:19:12.568758303 +0000 UTC m=+0.803172757 container init 8ea49f6f5ea332dbafe144c24a6c9211e082de7c9e3cc5b537e13c8508704469 (image=quay.io/ceph/ceph:v20, name=clever_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 30 23:19:12 np0005603435 podman[88848]: 2026-01-31 04:19:12.575329437 +0000 UTC m=+0.809743801 container start 8ea49f6f5ea332dbafe144c24a6c9211e082de7c9e3cc5b537e13c8508704469 (image=quay.io/ceph/ceph:v20, name=clever_dijkstra, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 30 23:19:12 np0005603435 podman[88848]: 2026-01-31 04:19:12.886415933 +0000 UTC m=+1.120830327 container attach 8ea49f6f5ea332dbafe144c24a6c9211e082de7c9e3cc5b537e13c8508704469 (image=quay.io/ceph/ceph:v20, name=clever_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:12 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2641532065; not ready for session (expect reconnect)
Jan 30 23:19:13 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3482997597; not ready for session (expect reconnect)
Jan 30 23:19:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 30 23:19:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3925564539' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 30 23:19:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:13 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:13 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:19:13 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2641532065; not ready for session (expect reconnect)
Jan 30 23:19:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:19:14 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 30 23:19:14 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3482997597; not ready for session (expect reconnect)
Jan 30 23:19:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:14 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Jan 30 23:19:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:19:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3925564539' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 30 23:19:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e16 e16: 3 total, 1 up, 3 in
Jan 30 23:19:14 np0005603435 clever_dijkstra[88878]: pool 'vms' created
Jan 30 23:19:14 np0005603435 systemd[1]: libpod-8ea49f6f5ea332dbafe144c24a6c9211e082de7c9e3cc5b537e13c8508704469.scope: Deactivated successfully.
Jan 30 23:19:14 np0005603435 podman[88848]: 2026-01-31 04:19:14.850570737 +0000 UTC m=+3.084985131 container died 8ea49f6f5ea332dbafe144c24a6c9211e082de7c9e3cc5b537e13c8508704469 (image=quay.io/ceph/ceph:v20, name=clever_dijkstra, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:19:14 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2641532065; not ready for session (expect reconnect)
Jan 30 23:19:15 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/3925564539' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 30 23:19:15 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3482997597; not ready for session (expect reconnect)
Jan 30 23:19:15 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 1 up, 3 in
Jan 30 23:19:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:15 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:15 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:15 np0005603435 systemd[1]: var-lib-containers-storage-overlay-2400f17c4ddbd37f6519689794b40de21d496c877ef1866633b24f79624faf2e-merged.mount: Deactivated successfully.
Jan 30 23:19:15 np0005603435 podman[88848]: 2026-01-31 04:19:15.422431978 +0000 UTC m=+3.656846352 container remove 8ea49f6f5ea332dbafe144c24a6c9211e082de7c9e3cc5b537e13c8508704469 (image=quay.io/ceph/ceph:v20, name=clever_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:19:15 np0005603435 systemd[1]: libpod-conmon-8ea49f6f5ea332dbafe144c24a6c9211e082de7c9e3cc5b537e13c8508704469.scope: Deactivated successfully.
Jan 30 23:19:15 np0005603435 ceph-osd[86873]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 2.962 iops: 758.314 elapsed_sec: 3.956
Jan 30 23:19:15 np0005603435 ceph-osd[86873]: log_channel(cluster) log [WRN] : OSD bench result of 758.313524 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 30 23:19:15 np0005603435 ceph-osd[86873]: osd.1 0 waiting for initial osdmap
Jan 30 23:19:15 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1[86869]: 2026-01-31T04:19:15.554+0000 7f6de1033640 -1 osd.1 0 waiting for initial osdmap
Jan 30 23:19:15 np0005603435 ceph-osd[86873]: osd.1 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 30 23:19:15 np0005603435 ceph-osd[86873]: osd.1 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 30 23:19:15 np0005603435 ceph-osd[86873]: osd.1 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 30 23:19:15 np0005603435 ceph-osd[86873]: osd.1 16 check_osdmap_features require_osd_release unknown -> tentacle
Jan 30 23:19:15 np0005603435 ceph-osd[86873]: osd.1 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 30 23:19:15 np0005603435 ceph-osd[86873]: osd.1 16 set_numa_affinity not setting numa affinity
Jan 30 23:19:15 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-1[86869]: 2026-01-31T04:19:15.667+0000 7f6ddb626640 -1 osd.1 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 30 23:19:15 np0005603435 ceph-osd[86873]: osd.1 16 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Jan 30 23:19:15 np0005603435 podman[89109]: 2026-01-31 04:19:15.690499105 +0000 UTC m=+0.055629708 container create 419a1f8112d1d166e09edcc892e6ada1a82ca92c45e7dbaff48556eb3af279ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bohr, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:15 np0005603435 python3[89103]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:15 np0005603435 systemd[1]: Started libpod-conmon-419a1f8112d1d166e09edcc892e6ada1a82ca92c45e7dbaff48556eb3af279ed.scope.
Jan 30 23:19:15 np0005603435 podman[89109]: 2026-01-31 04:19:15.666942732 +0000 UTC m=+0.032073405 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:15 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:15 np0005603435 podman[89125]: 2026-01-31 04:19:15.816091185 +0000 UTC m=+0.064069106 container create 987cbe2827a2aa80105d36839bbab23dd4b298735d4016ef2d78134cd1cae879 (image=quay.io/ceph/ceph:v20, name=elastic_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 30 23:19:15 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:15 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/3925564539' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 30 23:19:15 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:15 np0005603435 podman[89125]: 2026-01-31 04:19:15.771809715 +0000 UTC m=+0.019787656 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:15 np0005603435 systemd[1]: Started libpod-conmon-987cbe2827a2aa80105d36839bbab23dd4b298735d4016ef2d78134cd1cae879.scope.
Jan 30 23:19:15 np0005603435 podman[89109]: 2026-01-31 04:19:15.925513905 +0000 UTC m=+0.290644528 container init 419a1f8112d1d166e09edcc892e6ada1a82ca92c45e7dbaff48556eb3af279ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:15 np0005603435 podman[89109]: 2026-01-31 04:19:15.933860971 +0000 UTC m=+0.298991564 container start 419a1f8112d1d166e09edcc892e6ada1a82ca92c45e7dbaff48556eb3af279ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 30 23:19:15 np0005603435 eloquent_bohr[89131]: 167 167
Jan 30 23:19:15 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:15 np0005603435 systemd[1]: libpod-419a1f8112d1d166e09edcc892e6ada1a82ca92c45e7dbaff48556eb3af279ed.scope: Deactivated successfully.
Jan 30 23:19:15 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b48800ddb5fcd306f412d47bdb5027333b93ba95d124221d2a71086265e69a7d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:15 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b48800ddb5fcd306f412d47bdb5027333b93ba95d124221d2a71086265e69a7d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:15 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2641532065; not ready for session (expect reconnect)
Jan 30 23:19:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:15 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:16 np0005603435 podman[89109]: 2026-01-31 04:19:16.000407894 +0000 UTC m=+0.365538507 container attach 419a1f8112d1d166e09edcc892e6ada1a82ca92c45e7dbaff48556eb3af279ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bohr, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:16 np0005603435 podman[89109]: 2026-01-31 04:19:16.0010921 +0000 UTC m=+0.366222723 container died 419a1f8112d1d166e09edcc892e6ada1a82ca92c45e7dbaff48556eb3af279ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 30 23:19:16 np0005603435 podman[89125]: 2026-01-31 04:19:16.049070297 +0000 UTC m=+0.297048208 container init 987cbe2827a2aa80105d36839bbab23dd4b298735d4016ef2d78134cd1cae879 (image=quay.io/ceph/ceph:v20, name=elastic_babbage, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:16 np0005603435 podman[89125]: 2026-01-31 04:19:16.05428678 +0000 UTC m=+0.302264701 container start 987cbe2827a2aa80105d36839bbab23dd4b298735d4016ef2d78134cd1cae879 (image=quay.io/ceph/ceph:v20, name=elastic_babbage, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:16 np0005603435 podman[89125]: 2026-01-31 04:19:16.084790366 +0000 UTC m=+0.332768297 container attach 987cbe2827a2aa80105d36839bbab23dd4b298735d4016ef2d78134cd1cae879 (image=quay.io/ceph/ceph:v20, name=elastic_babbage, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 30 23:19:16 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5d0e14e489335d2fb40c1f8672039a326068c6232e801f7b6df75ea9759cc3b5-merged.mount: Deactivated successfully.
Jan 30 23:19:16 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3482997597; not ready for session (expect reconnect)
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:16 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 30 23:19:16 np0005603435 podman[89109]: 2026-01-31 04:19:16.200087604 +0000 UTC m=+0.565218197 container remove 419a1f8112d1d166e09edcc892e6ada1a82ca92c45e7dbaff48556eb3af279ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bohr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Jan 30 23:19:16 np0005603435 systemd[1]: libpod-conmon-419a1f8112d1d166e09edcc892e6ada1a82ca92c45e7dbaff48556eb3af279ed.scope: Deactivated successfully.
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/3482997597,v1:192.168.122.100:6807/3482997597] boot
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:16 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v43: 2 pgs: 2 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Jan 30 23:19:16 np0005603435 ceph-osd[86873]: osd.1 17 state: booting -> active
Jan 30 23:19:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 17 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 pi=[12,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:19:16 np0005603435 podman[89187]: 2026-01-31 04:19:16.325664054 +0000 UTC m=+0.052375901 container create e0da4edacf17c4bfcbc1958d30c98330116ad3de44f08ef8060a7cb7ffdb9f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sammet, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:16 np0005603435 podman[89187]: 2026-01-31 04:19:16.296806576 +0000 UTC m=+0.023518463 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:16 np0005603435 systemd[1]: Started libpod-conmon-e0da4edacf17c4bfcbc1958d30c98330116ad3de44f08ef8060a7cb7ffdb9f5c.scope.
Jan 30 23:19:16 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:16 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb3cea314cbe56d8e74faa47a7f84ac2a4856c5dc5877689cd2d813de51532f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:16 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb3cea314cbe56d8e74faa47a7f84ac2a4856c5dc5877689cd2d813de51532f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:16 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb3cea314cbe56d8e74faa47a7f84ac2a4856c5dc5877689cd2d813de51532f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:16 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdb3cea314cbe56d8e74faa47a7f84ac2a4856c5dc5877689cd2d813de51532f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:16 np0005603435 podman[89187]: 2026-01-31 04:19:16.450652669 +0000 UTC m=+0.177364526 container init e0da4edacf17c4bfcbc1958d30c98330116ad3de44f08ef8060a7cb7ffdb9f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sammet, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 30 23:19:16 np0005603435 podman[89187]: 2026-01-31 04:19:16.45834923 +0000 UTC m=+0.185061077 container start e0da4edacf17c4bfcbc1958d30c98330116ad3de44f08ef8060a7cb7ffdb9f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 30 23:19:16 np0005603435 podman[89187]: 2026-01-31 04:19:16.466963643 +0000 UTC m=+0.193675510 container attach e0da4edacf17c4bfcbc1958d30c98330116ad3de44f08ef8060a7cb7ffdb9f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2191346511' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: OSD bench result of 758.313524 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: osd.1 [v2:192.168.122.100:6806/3482997597,v1:192.168.122.100:6807/3482997597] boot
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/2191346511' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 30 23:19:16 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2641532065; not ready for session (expect reconnect)
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:16 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:17 np0005603435 sad_sammet[89203]: [
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:    {
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:        "available": false,
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:        "being_replaced": false,
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:        "ceph_device_lvm": false,
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:        "lsm_data": {},
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:        "lvs": [],
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:        "path": "/dev/sr0",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:        "rejected_reasons": [
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "Insufficient space (<5GB)",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "Has a FileSystem"
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:        ],
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:        "sys_api": {
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "actuators": null,
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "device_nodes": [
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:                "sr0"
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            ],
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "devname": "sr0",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "human_readable_size": "482.00 KB",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "id_bus": "ata",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "model": "QEMU DVD-ROM",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "nr_requests": "2",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "parent": "/dev/sr0",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "partitions": {},
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "path": "/dev/sr0",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "removable": "1",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "rev": "2.5+",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "ro": "0",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "rotational": "1",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "sas_address": "",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "sas_device_handle": "",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "scheduler_mode": "mq-deadline",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "sectors": 0,
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "sectorsize": "2048",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "size": 493568.0,
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "support_discard": "2048",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "type": "disk",
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:            "vendor": "QEMU"
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:        }
Jan 30 23:19:17 np0005603435 sad_sammet[89203]:    }
Jan 30 23:19:17 np0005603435 sad_sammet[89203]: ]
Jan 30 23:19:17 np0005603435 systemd[1]: libpod-e0da4edacf17c4bfcbc1958d30c98330116ad3de44f08ef8060a7cb7ffdb9f5c.scope: Deactivated successfully.
Jan 30 23:19:17 np0005603435 podman[89867]: 2026-01-31 04:19:17.069524066 +0000 UTC m=+0.033065358 container died e0da4edacf17c4bfcbc1958d30c98330116ad3de44f08ef8060a7cb7ffdb9f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 30 23:19:17 np0005603435 systemd[1]: var-lib-containers-storage-overlay-cdb3cea314cbe56d8e74faa47a7f84ac2a4856c5dc5877689cd2d813de51532f-merged.mount: Deactivated successfully.
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 30 23:19:17 np0005603435 podman[89867]: 2026-01-31 04:19:17.242860506 +0000 UTC m=+0.206401758 container remove e0da4edacf17c4bfcbc1958d30c98330116ad3de44f08ef8060a7cb7ffdb9f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sammet, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:17 np0005603435 systemd[1]: libpod-conmon-e0da4edacf17c4bfcbc1958d30c98330116ad3de44f08ef8060a7cb7ffdb9f5c.scope: Deactivated successfully.
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2191346511' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Jan 30 23:19:17 np0005603435 elastic_babbage[89143]: pool 'volumes' created
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:19:17 np0005603435 systemd[1]: libpod-987cbe2827a2aa80105d36839bbab23dd4b298735d4016ef2d78134cd1cae879.scope: Deactivated successfully.
Jan 30 23:19:17 np0005603435 podman[89125]: 2026-01-31 04:19:17.31157108 +0000 UTC m=+1.559549041 container died 987cbe2827a2aa80105d36839bbab23dd4b298735d4016ef2d78134cd1cae879 (image=quay.io/ceph/ceph:v20, name=elastic_babbage, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:17 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:19:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 18 pg[1.0( empty local-lis/les=17/18 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 pi=[12,17)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:19:17 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b48800ddb5fcd306f412d47bdb5027333b93ba95d124221d2a71086265e69a7d-merged.mount: Deactivated successfully.
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 30 23:19:17 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43684k
Jan 30 23:19:17 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43684k
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 30 23:19:17 np0005603435 ceph-mgr[75599]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44732552: error parsing value: Value '44732552' is below minimum 939524096
Jan 30 23:19:17 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44732552: error parsing value: Value '44732552' is below minimum 939524096
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:19:17 np0005603435 ceph-mgr[75599]: [devicehealth INFO root] creating main.db for devicehealth
Jan 30 23:19:17 np0005603435 podman[89125]: 2026-01-31 04:19:17.532474818 +0000 UTC m=+1.780452749 container remove 987cbe2827a2aa80105d36839bbab23dd4b298735d4016ef2d78134cd1cae879 (image=quay.io/ceph/ceph:v20, name=elastic_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:17 np0005603435 systemd[1]: libpod-conmon-987cbe2827a2aa80105d36839bbab23dd4b298735d4016ef2d78134cd1cae879.scope: Deactivated successfully.
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:19:17 np0005603435 ceph-mgr[75599]: [devicehealth INFO root] Check health
Jan 30 23:19:17 np0005603435 ceph-mgr[75599]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 30 23:19:17 np0005603435 python3[89985]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/2191346511' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 30 23:19:17 np0005603435 podman[89991]: 2026-01-31 04:19:17.908647304 +0000 UTC m=+0.069128595 container create fe01a46391576466733645327dd0fb3dcd1e19800b707cb507f95b09a2984ed2 (image=quay.io/ceph/ceph:v20, name=crazy_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:17 np0005603435 systemd[1]: Started libpod-conmon-fe01a46391576466733645327dd0fb3dcd1e19800b707cb507f95b09a2984ed2.scope.
Jan 30 23:19:17 np0005603435 podman[89991]: 2026-01-31 04:19:17.870074538 +0000 UTC m=+0.030555869 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:17 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2641532065; not ready for session (expect reconnect)
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:17 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:17 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:17 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a62ba28a5e564d2e35118f32a0d0afdc7e45524c7aca93d9b4d2bfcdb072cb5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:17 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a62ba28a5e564d2e35118f32a0d0afdc7e45524c7aca93d9b4d2bfcdb072cb5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:18 np0005603435 podman[90017]: 2026-01-31 04:19:18.007432314 +0000 UTC m=+0.066363999 container create a5bb071f4ba402940603737606387ac4dfadfd94261a604a219f6c6b6b609e36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:19:18 np0005603435 podman[89991]: 2026-01-31 04:19:18.023974443 +0000 UTC m=+0.184455734 container init fe01a46391576466733645327dd0fb3dcd1e19800b707cb507f95b09a2984ed2 (image=quay.io/ceph/ceph:v20, name=crazy_dhawan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 30 23:19:18 np0005603435 podman[89991]: 2026-01-31 04:19:18.030543197 +0000 UTC m=+0.191024488 container start fe01a46391576466733645327dd0fb3dcd1e19800b707cb507f95b09a2984ed2 (image=quay.io/ceph/ceph:v20, name=crazy_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 30 23:19:18 np0005603435 podman[90017]: 2026-01-31 04:19:17.972840842 +0000 UTC m=+0.031772577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:18 np0005603435 podman[89991]: 2026-01-31 04:19:18.071590591 +0000 UTC m=+0.232071872 container attach fe01a46391576466733645327dd0fb3dcd1e19800b707cb507f95b09a2984ed2 (image=quay.io/ceph/ceph:v20, name=crazy_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 30 23:19:18 np0005603435 systemd[1]: Started libpod-conmon-a5bb071f4ba402940603737606387ac4dfadfd94261a604a219f6c6b6b609e36.scope.
Jan 30 23:19:18 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:18 np0005603435 podman[90017]: 2026-01-31 04:19:18.143442349 +0000 UTC m=+0.202374054 container init a5bb071f4ba402940603737606387ac4dfadfd94261a604a219f6c6b6b609e36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:18 np0005603435 podman[90017]: 2026-01-31 04:19:18.148818075 +0000 UTC m=+0.207749770 container start a5bb071f4ba402940603737606387ac4dfadfd94261a604a219f6c6b6b609e36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 30 23:19:18 np0005603435 upbeat_allen[90040]: 167 167
Jan 30 23:19:18 np0005603435 systemd[1]: libpod-a5bb071f4ba402940603737606387ac4dfadfd94261a604a219f6c6b6b609e36.scope: Deactivated successfully.
Jan 30 23:19:18 np0005603435 podman[90017]: 2026-01-31 04:19:18.166267625 +0000 UTC m=+0.225199350 container attach a5bb071f4ba402940603737606387ac4dfadfd94261a604a219f6c6b6b609e36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:18 np0005603435 podman[90017]: 2026-01-31 04:19:18.166714156 +0000 UTC m=+0.225645861 container died a5bb071f4ba402940603737606387ac4dfadfd94261a604a219f6c6b6b609e36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 30 23:19:18 np0005603435 systemd[1]: var-lib-containers-storage-overlay-22b6e9aa132bdfa1e42e03f11b051fbc03b777ab3da363a7efe84943f6834f98-merged.mount: Deactivated successfully.
Jan 30 23:19:18 np0005603435 podman[90017]: 2026-01-31 04:19:18.261283797 +0000 UTC m=+0.320215492 container remove a5bb071f4ba402940603737606387ac4dfadfd94261a604a219f6c6b6b609e36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 30 23:19:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v45: 3 pgs: 1 creating+peering, 2 unknown; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 30 23:19:18 np0005603435 systemd[1]: libpod-conmon-a5bb071f4ba402940603737606387ac4dfadfd94261a604a219f6c6b6b609e36.scope: Deactivated successfully.
Jan 30 23:19:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 30 23:19:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Jan 30 23:19:18 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Jan 30 23:19:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:18 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:18 np0005603435 ceph-osd[87920]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 23.844 iops: 6104.147 elapsed_sec: 0.491
Jan 30 23:19:18 np0005603435 ceph-osd[87920]: log_channel(cluster) log [WRN] : OSD bench result of 6104.147077 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 30 23:19:18 np0005603435 ceph-osd[87920]: osd.2 0 waiting for initial osdmap
Jan 30 23:19:18 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:19:18 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2[87916]: 2026-01-31T04:19:18.332+0000 7f57e2c5b640 -1 osd.2 0 waiting for initial osdmap
Jan 30 23:19:18 np0005603435 ceph-osd[87920]: osd.2 19 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 30 23:19:18 np0005603435 ceph-osd[87920]: osd.2 19 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 30 23:19:18 np0005603435 ceph-osd[87920]: osd.2 19 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 30 23:19:18 np0005603435 ceph-osd[87920]: osd.2 19 check_osdmap_features require_osd_release unknown -> tentacle
Jan 30 23:19:18 np0005603435 podman[90085]: 2026-01-31 04:19:18.373267557 +0000 UTC m=+0.036056438 container create 824f2f4b4338573598728e8bfb86a49f419fab9320d27183cf02f1d080e13abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_einstein, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 30 23:19:18 np0005603435 ceph-osd[87920]: osd.2 19 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 30 23:19:18 np0005603435 ceph-osd[87920]: osd.2 19 set_numa_affinity not setting numa affinity
Jan 30 23:19:18 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-osd-2[87916]: 2026-01-31T04:19:18.377+0000 7f57dda60640 -1 osd.2 19 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 30 23:19:18 np0005603435 ceph-osd[87920]: osd.2 19 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Jan 30 23:19:18 np0005603435 systemd[1]: Started libpod-conmon-824f2f4b4338573598728e8bfb86a49f419fab9320d27183cf02f1d080e13abf.scope.
Jan 30 23:19:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 30 23:19:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/893550259' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 30 23:19:18 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8371c6b33a21592b2369aca7360747f492799463f6a06745af8205feb97a29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8371c6b33a21592b2369aca7360747f492799463f6a06745af8205feb97a29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8371c6b33a21592b2369aca7360747f492799463f6a06745af8205feb97a29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8371c6b33a21592b2369aca7360747f492799463f6a06745af8205feb97a29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8371c6b33a21592b2369aca7360747f492799463f6a06745af8205feb97a29/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:18 np0005603435 podman[90085]: 2026-01-31 04:19:18.448594466 +0000 UTC m=+0.111383357 container init 824f2f4b4338573598728e8bfb86a49f419fab9320d27183cf02f1d080e13abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 30 23:19:18 np0005603435 podman[90085]: 2026-01-31 04:19:18.357082617 +0000 UTC m=+0.019871518 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:18 np0005603435 podman[90085]: 2026-01-31 04:19:18.454185148 +0000 UTC m=+0.116974039 container start 824f2f4b4338573598728e8bfb86a49f419fab9320d27183cf02f1d080e13abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 30 23:19:18 np0005603435 podman[90085]: 2026-01-31 04:19:18.460909216 +0000 UTC m=+0.123698097 container attach 824f2f4b4338573598728e8bfb86a49f419fab9320d27183cf02f1d080e13abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_einstein, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:18 np0005603435 frosty_einstein[90102]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:19:18 np0005603435 frosty_einstein[90102]: --> All data devices are unavailable
Jan 30 23:19:18 np0005603435 systemd[1]: libpod-824f2f4b4338573598728e8bfb86a49f419fab9320d27183cf02f1d080e13abf.scope: Deactivated successfully.
Jan 30 23:19:18 np0005603435 podman[90085]: 2026-01-31 04:19:18.871920019 +0000 UTC m=+0.534708950 container died 824f2f4b4338573598728e8bfb86a49f419fab9320d27183cf02f1d080e13abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:18 np0005603435 systemd[1]: var-lib-containers-storage-overlay-8e8371c6b33a21592b2369aca7360747f492799463f6a06745af8205feb97a29-merged.mount: Deactivated successfully.
Jan 30 23:19:18 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.wyngmr(active, since 72s)
Jan 30 23:19:18 np0005603435 podman[90085]: 2026-01-31 04:19:18.92982083 +0000 UTC m=+0.592609731 container remove 824f2f4b4338573598728e8bfb86a49f419fab9320d27183cf02f1d080e13abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_einstein, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Jan 30 23:19:18 np0005603435 ceph-mon[75307]: Adjusting osd_memory_target on compute-0 to 43684k
Jan 30 23:19:18 np0005603435 ceph-mon[75307]: Unable to set osd_memory_target on compute-0 to 44732552: error parsing value: Value '44732552' is below minimum 939524096
Jan 30 23:19:18 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/893550259' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 30 23:19:18 np0005603435 systemd[1]: libpod-conmon-824f2f4b4338573598728e8bfb86a49f419fab9320d27183cf02f1d080e13abf.scope: Deactivated successfully.
Jan 30 23:19:18 np0005603435 ceph-mgr[75599]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2641532065; not ready for session (expect reconnect)
Jan 30 23:19:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:18 np0005603435 ceph-mgr[75599]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 30 23:19:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:19:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 30 23:19:19 np0005603435 ceph-osd[87920]: osd.2 19 tick checking mon for new map
Jan 30 23:19:19 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/893550259' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 30 23:19:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Jan 30 23:19:19 np0005603435 crazy_dhawan[90032]: pool 'backups' created
Jan 30 23:19:19 np0005603435 ceph-mon[75307]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2641532065,v1:192.168.122.100:6811/2641532065] boot
Jan 30 23:19:19 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Jan 30 23:19:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 30 23:19:19 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 30 23:19:19 np0005603435 podman[90197]: 2026-01-31 04:19:19.334472944 +0000 UTC m=+0.035109476 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:19 np0005603435 systemd[1]: libpod-fe01a46391576466733645327dd0fb3dcd1e19800b707cb507f95b09a2984ed2.scope: Deactivated successfully.
Jan 30 23:19:19 np0005603435 podman[90197]: 2026-01-31 04:19:19.467764145 +0000 UTC m=+0.168400657 container create 296a6333505b13a0e16146fe679d18cfc39404dd86656d5f6d1406b058be98e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:19 np0005603435 ceph-osd[87920]: osd.2 20 state: booting -> active
Jan 30 23:19:19 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 pi=[16,20)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:19:19 np0005603435 systemd[1]: Started libpod-conmon-296a6333505b13a0e16146fe679d18cfc39404dd86656d5f6d1406b058be98e8.scope.
Jan 30 23:19:19 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:19 np0005603435 podman[90197]: 2026-01-31 04:19:19.765952659 +0000 UTC m=+0.466589211 container init 296a6333505b13a0e16146fe679d18cfc39404dd86656d5f6d1406b058be98e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:19 np0005603435 podman[89991]: 2026-01-31 04:19:19.77111096 +0000 UTC m=+1.931592281 container died fe01a46391576466733645327dd0fb3dcd1e19800b707cb507f95b09a2984ed2 (image=quay.io/ceph/ceph:v20, name=crazy_dhawan, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:19 np0005603435 podman[90197]: 2026-01-31 04:19:19.772498972 +0000 UTC m=+0.473135474 container start 296a6333505b13a0e16146fe679d18cfc39404dd86656d5f6d1406b058be98e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_feistel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 30 23:19:19 np0005603435 intelligent_feistel[90223]: 167 167
Jan 30 23:19:19 np0005603435 systemd[1]: libpod-296a6333505b13a0e16146fe679d18cfc39404dd86656d5f6d1406b058be98e8.scope: Deactivated successfully.
Jan 30 23:19:19 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:19:19 np0005603435 podman[90197]: 2026-01-31 04:19:19.800439059 +0000 UTC m=+0.501075661 container attach 296a6333505b13a0e16146fe679d18cfc39404dd86656d5f6d1406b058be98e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 30 23:19:19 np0005603435 podman[90197]: 2026-01-31 04:19:19.80135822 +0000 UTC m=+0.501994822 container died 296a6333505b13a0e16146fe679d18cfc39404dd86656d5f6d1406b058be98e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_feistel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:19 np0005603435 systemd[1]: var-lib-containers-storage-overlay-1a62ba28a5e564d2e35118f32a0d0afdc7e45524c7aca93d9b4d2bfcdb072cb5-merged.mount: Deactivated successfully.
Jan 30 23:19:19 np0005603435 podman[90212]: 2026-01-31 04:19:19.864915563 +0000 UTC m=+0.409559861 container remove fe01a46391576466733645327dd0fb3dcd1e19800b707cb507f95b09a2984ed2 (image=quay.io/ceph/ceph:v20, name=crazy_dhawan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 30 23:19:19 np0005603435 systemd[1]: libpod-conmon-fe01a46391576466733645327dd0fb3dcd1e19800b707cb507f95b09a2984ed2.scope: Deactivated successfully.
Jan 30 23:19:19 np0005603435 systemd[1]: var-lib-containers-storage-overlay-32304da9548d05a319cf452a693e891a570b5817abb465f6a49c064ea94df9dc-merged.mount: Deactivated successfully.
Jan 30 23:19:19 np0005603435 podman[90197]: 2026-01-31 04:19:19.897447707 +0000 UTC m=+0.598084209 container remove 296a6333505b13a0e16146fe679d18cfc39404dd86656d5f6d1406b058be98e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_feistel, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 30 23:19:19 np0005603435 systemd[1]: libpod-conmon-296a6333505b13a0e16146fe679d18cfc39404dd86656d5f6d1406b058be98e8.scope: Deactivated successfully.
Jan 30 23:19:19 np0005603435 ceph-mon[75307]: OSD bench result of 6104.147077 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 30 23:19:19 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/893550259' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 30 23:19:19 np0005603435 ceph-mon[75307]: osd.2 [v2:192.168.122.100:6810/2641532065,v1:192.168.122.100:6811/2641532065] boot
Jan 30 23:19:20 np0005603435 podman[90281]: 2026-01-31 04:19:20.072640282 +0000 UTC m=+0.060622885 container create 5a72aa8728ee77657d33cd9126f09f5c68f42b87732c5a46a914c995d6b37989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:20 np0005603435 systemd[1]: Started libpod-conmon-5a72aa8728ee77657d33cd9126f09f5c68f42b87732c5a46a914c995d6b37989.scope.
Jan 30 23:19:20 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69a9f16f782fc35f8a6f1f7f7f4fce7626f8bea3a0f3fd183e141a3bedb3448/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69a9f16f782fc35f8a6f1f7f7f4fce7626f8bea3a0f3fd183e141a3bedb3448/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69a9f16f782fc35f8a6f1f7f7f4fce7626f8bea3a0f3fd183e141a3bedb3448/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c69a9f16f782fc35f8a6f1f7f7f4fce7626f8bea3a0f3fd183e141a3bedb3448/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:20 np0005603435 podman[90281]: 2026-01-31 04:19:20.046514989 +0000 UTC m=+0.034497682 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:20 np0005603435 python3[90283]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:20 np0005603435 podman[90281]: 2026-01-31 04:19:20.178525689 +0000 UTC m=+0.166508302 container init 5a72aa8728ee77657d33cd9126f09f5c68f42b87732c5a46a914c995d6b37989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 30 23:19:20 np0005603435 podman[90281]: 2026-01-31 04:19:20.189397545 +0000 UTC m=+0.177380148 container start 5a72aa8728ee77657d33cd9126f09f5c68f42b87732c5a46a914c995d6b37989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_jennings, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 30 23:19:20 np0005603435 podman[90281]: 2026-01-31 04:19:20.197063505 +0000 UTC m=+0.185046108 container attach 5a72aa8728ee77657d33cd9126f09f5c68f42b87732c5a46a914c995d6b37989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 30 23:19:20 np0005603435 podman[90302]: 2026-01-31 04:19:20.23561456 +0000 UTC m=+0.058608017 container create d004b25c231dcbd6280f6ca403d383875744df218d7dd82ecd3ec9f0f7504fb3 (image=quay.io/ceph/ceph:v20, name=optimistic_mayer, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 30 23:19:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v48: 4 pgs: 2 creating+peering, 2 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:20 np0005603435 systemd[1]: Started libpod-conmon-d004b25c231dcbd6280f6ca403d383875744df218d7dd82ecd3ec9f0f7504fb3.scope.
Jan 30 23:19:20 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc890b8d210709cfe767cdb2b9b040c2b0990df32193d5d8ea67fe6b28fb915/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc890b8d210709cfe767cdb2b9b040c2b0990df32193d5d8ea67fe6b28fb915/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:20 np0005603435 podman[90302]: 2026-01-31 04:19:20.211186676 +0000 UTC m=+0.034180203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:20 np0005603435 podman[90302]: 2026-01-31 04:19:20.322690655 +0000 UTC m=+0.145684182 container init d004b25c231dcbd6280f6ca403d383875744df218d7dd82ecd3ec9f0f7504fb3 (image=quay.io/ceph/ceph:v20, name=optimistic_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:20 np0005603435 podman[90302]: 2026-01-31 04:19:20.330857707 +0000 UTC m=+0.153851154 container start d004b25c231dcbd6280f6ca403d383875744df218d7dd82ecd3ec9f0f7504fb3 (image=quay.io/ceph/ceph:v20, name=optimistic_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:20 np0005603435 podman[90302]: 2026-01-31 04:19:20.336705155 +0000 UTC m=+0.159698642 container attach d004b25c231dcbd6280f6ca403d383875744df218d7dd82ecd3ec9f0f7504fb3 (image=quay.io/ceph/ceph:v20, name=optimistic_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:19:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 30 23:19:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Jan 30 23:19:20 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Jan 30 23:19:20 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:19:20 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 21 pg[2.0( empty local-lis/les=20/21 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 pi=[16,20)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:19:20 np0005603435 clever_jennings[90299]: {
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:    "0": [
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:        {
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "devices": [
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "/dev/loop3"
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            ],
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "lv_name": "ceph_lv0",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "lv_size": "21470642176",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "name": "ceph_lv0",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "tags": {
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.cluster_name": "ceph",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.crush_device_class": "",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.encrypted": "0",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.objectstore": "bluestore",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.osd_id": "0",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.type": "block",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.vdo": "0",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.with_tpm": "0"
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            },
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "type": "block",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "vg_name": "ceph_vg0"
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:        }
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:    ],
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:    "1": [
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:        {
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "devices": [
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "/dev/loop4"
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            ],
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "lv_name": "ceph_lv1",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "lv_size": "21470642176",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "name": "ceph_lv1",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "tags": {
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.cluster_name": "ceph",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.crush_device_class": "",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.encrypted": "0",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.objectstore": "bluestore",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.osd_id": "1",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.type": "block",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.vdo": "0",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.with_tpm": "0"
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            },
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "type": "block",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "vg_name": "ceph_vg1"
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:        }
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:    ],
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:    "2": [
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:        {
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "devices": [
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "/dev/loop5"
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            ],
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "lv_name": "ceph_lv2",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "lv_size": "21470642176",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "name": "ceph_lv2",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "tags": {
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.cluster_name": "ceph",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.crush_device_class": "",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.encrypted": "0",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.objectstore": "bluestore",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.osd_id": "2",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.type": "block",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.vdo": "0",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:                "ceph.with_tpm": "0"
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            },
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "type": "block",
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:            "vg_name": "ceph_vg2"
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:        }
Jan 30 23:19:20 np0005603435 clever_jennings[90299]:    ]
Jan 30 23:19:20 np0005603435 clever_jennings[90299]: }
Jan 30 23:19:20 np0005603435 systemd[1]: libpod-5a72aa8728ee77657d33cd9126f09f5c68f42b87732c5a46a914c995d6b37989.scope: Deactivated successfully.
Jan 30 23:19:20 np0005603435 podman[90346]: 2026-01-31 04:19:20.544583937 +0000 UTC m=+0.025220193 container died 5a72aa8728ee77657d33cd9126f09f5c68f42b87732c5a46a914c995d6b37989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_jennings, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:20 np0005603435 systemd[1]: var-lib-containers-storage-overlay-c69a9f16f782fc35f8a6f1f7f7f4fce7626f8bea3a0f3fd183e141a3bedb3448-merged.mount: Deactivated successfully.
Jan 30 23:19:20 np0005603435 podman[90346]: 2026-01-31 04:19:20.690917093 +0000 UTC m=+0.171553329 container remove 5a72aa8728ee77657d33cd9126f09f5c68f42b87732c5a46a914c995d6b37989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_jennings, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:20 np0005603435 systemd[1]: libpod-conmon-5a72aa8728ee77657d33cd9126f09f5c68f42b87732c5a46a914c995d6b37989.scope: Deactivated successfully.
Jan 30 23:19:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 30 23:19:20 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3836117066' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 30 23:19:21 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/3836117066' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 30 23:19:21 np0005603435 podman[90426]: 2026-01-31 04:19:21.15059664 +0000 UTC m=+0.075706589 container create ad130522c42405eb3fd90df4eb6310b5a08983a60048ffdd556b427fdb587fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:21 np0005603435 podman[90426]: 2026-01-31 04:19:21.100268028 +0000 UTC m=+0.025377957 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:21 np0005603435 systemd[1]: Started libpod-conmon-ad130522c42405eb3fd90df4eb6310b5a08983a60048ffdd556b427fdb587fe3.scope.
Jan 30 23:19:21 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:21 np0005603435 podman[90426]: 2026-01-31 04:19:21.398087632 +0000 UTC m=+0.323197631 container init ad130522c42405eb3fd90df4eb6310b5a08983a60048ffdd556b427fdb587fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 30 23:19:21 np0005603435 podman[90426]: 2026-01-31 04:19:21.404434988 +0000 UTC m=+0.329544937 container start ad130522c42405eb3fd90df4eb6310b5a08983a60048ffdd556b427fdb587fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:19:21 np0005603435 thirsty_ardinghelli[90443]: 167 167
Jan 30 23:19:21 np0005603435 systemd[1]: libpod-ad130522c42405eb3fd90df4eb6310b5a08983a60048ffdd556b427fdb587fe3.scope: Deactivated successfully.
Jan 30 23:19:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 30 23:19:21 np0005603435 podman[90426]: 2026-01-31 04:19:21.423371634 +0000 UTC m=+0.348481813 container attach ad130522c42405eb3fd90df4eb6310b5a08983a60048ffdd556b427fdb587fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_ardinghelli, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Jan 30 23:19:21 np0005603435 podman[90426]: 2026-01-31 04:19:21.424919807 +0000 UTC m=+0.350029766 container died ad130522c42405eb3fd90df4eb6310b5a08983a60048ffdd556b427fdb587fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_ardinghelli, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3836117066' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 30 23:19:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Jan 30 23:19:21 np0005603435 optimistic_mayer[90319]: pool 'images' created
Jan 30 23:19:21 np0005603435 systemd[1]: libpod-d004b25c231dcbd6280f6ca403d383875744df218d7dd82ecd3ec9f0f7504fb3.scope: Deactivated successfully.
Jan 30 23:19:21 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Jan 30 23:19:21 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 22 pg[5.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:19:21 np0005603435 podman[90302]: 2026-01-31 04:19:21.548541048 +0000 UTC m=+1.371534565 container died d004b25c231dcbd6280f6ca403d383875744df218d7dd82ecd3ec9f0f7504fb3 (image=quay.io/ceph/ceph:v20, name=optimistic_mayer, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:21 np0005603435 systemd[1]: var-lib-containers-storage-overlay-580e95e1c83afab3524ba99ec4dd2f46c4878c620c600c6a6bb6c29eaf4f4f9e-merged.mount: Deactivated successfully.
Jan 30 23:19:21 np0005603435 podman[90426]: 2026-01-31 04:19:21.847208273 +0000 UTC m=+0.772318222 container remove ad130522c42405eb3fd90df4eb6310b5a08983a60048ffdd556b427fdb587fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_ardinghelli, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 30 23:19:22 np0005603435 systemd[1]: var-lib-containers-storage-overlay-abc890b8d210709cfe767cdb2b9b040c2b0990df32193d5d8ea67fe6b28fb915-merged.mount: Deactivated successfully.
Jan 30 23:19:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v51: 5 pgs: 3 active+clean, 1 creating+peering, 1 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:22 np0005603435 podman[90460]: 2026-01-31 04:19:22.273434201 +0000 UTC m=+0.764223367 container remove d004b25c231dcbd6280f6ca403d383875744df218d7dd82ecd3ec9f0f7504fb3 (image=quay.io/ceph/ceph:v20, name=optimistic_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 30 23:19:22 np0005603435 systemd[1]: libpod-conmon-d004b25c231dcbd6280f6ca403d383875744df218d7dd82ecd3ec9f0f7504fb3.scope: Deactivated successfully.
Jan 30 23:19:22 np0005603435 systemd[1]: libpod-conmon-ad130522c42405eb3fd90df4eb6310b5a08983a60048ffdd556b427fdb587fe3.scope: Deactivated successfully.
Jan 30 23:19:22 np0005603435 podman[90483]: 2026-01-31 04:19:22.330552006 +0000 UTC m=+0.368350899 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:22 np0005603435 podman[90483]: 2026-01-31 04:19:22.445830799 +0000 UTC m=+0.483629702 container create 12b8bebbb0c449adb8b8c5674a7dedd503dbb4d63d2ee6ccd918a22d5d3bf598 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:22 np0005603435 python3[90522]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 30 23:19:22 np0005603435 systemd[1]: Started libpod-conmon-12b8bebbb0c449adb8b8c5674a7dedd503dbb4d63d2ee6ccd918a22d5d3bf598.scope.
Jan 30 23:19:22 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a84ae06be0b4ec1c2c2ba1e972fc27b6da71139ba0443c4cca8e0af6245220e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a84ae06be0b4ec1c2c2ba1e972fc27b6da71139ba0443c4cca8e0af6245220e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a84ae06be0b4ec1c2c2ba1e972fc27b6da71139ba0443c4cca8e0af6245220e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a84ae06be0b4ec1c2c2ba1e972fc27b6da71139ba0443c4cca8e0af6245220e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:22 np0005603435 podman[90523]: 2026-01-31 04:19:22.646814599 +0000 UTC m=+0.047069541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:22 np0005603435 podman[90523]: 2026-01-31 04:19:22.808756211 +0000 UTC m=+0.209011143 container create 7b8601e8e504f2a04cde9204e8120712b83bf4a4333cc3b5e3104036679a696b (image=quay.io/ceph/ceph:v20, name=nice_hypatia, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 30 23:19:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Jan 30 23:19:22 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/3836117066' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 30 23:19:22 np0005603435 systemd[1]: Started libpod-conmon-7b8601e8e504f2a04cde9204e8120712b83bf4a4333cc3b5e3104036679a696b.scope.
Jan 30 23:19:22 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ccbdd5f900913b2656b85eee92069ced1dda6160a3a89a0c96b2c83e108f77/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ccbdd5f900913b2656b85eee92069ced1dda6160a3a89a0c96b2c83e108f77/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:22 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Jan 30 23:19:22 np0005603435 podman[90483]: 2026-01-31 04:19:22.922123343 +0000 UTC m=+0.959922246 container init 12b8bebbb0c449adb8b8c5674a7dedd503dbb4d63d2ee6ccd918a22d5d3bf598 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hertz, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:22 np0005603435 podman[90483]: 2026-01-31 04:19:22.927429906 +0000 UTC m=+0.965228799 container start 12b8bebbb0c449adb8b8c5674a7dedd503dbb4d63d2ee6ccd918a22d5d3bf598 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hertz, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:22 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:19:22 np0005603435 podman[90523]: 2026-01-31 04:19:22.993868041 +0000 UTC m=+0.394123053 container init 7b8601e8e504f2a04cde9204e8120712b83bf4a4333cc3b5e3104036679a696b (image=quay.io/ceph/ceph:v20, name=nice_hypatia, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 30 23:19:22 np0005603435 podman[90523]: 2026-01-31 04:19:22.998212994 +0000 UTC m=+0.398467916 container start 7b8601e8e504f2a04cde9204e8120712b83bf4a4333cc3b5e3104036679a696b (image=quay.io/ceph/ceph:v20, name=nice_hypatia, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 30 23:19:23 np0005603435 podman[90523]: 2026-01-31 04:19:23.10015738 +0000 UTC m=+0.500412342 container attach 7b8601e8e504f2a04cde9204e8120712b83bf4a4333cc3b5e3104036679a696b (image=quay.io/ceph/ceph:v20, name=nice_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:23 np0005603435 podman[90483]: 2026-01-31 04:19:23.259867155 +0000 UTC m=+1.297666018 container attach 12b8bebbb0c449adb8b8c5674a7dedd503dbb4d63d2ee6ccd918a22d5d3bf598 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hertz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 30 23:19:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1473013565' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 30 23:19:23 np0005603435 lvm[90644]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:19:23 np0005603435 lvm[90644]: VG ceph_vg1 finished
Jan 30 23:19:23 np0005603435 lvm[90641]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:19:23 np0005603435 lvm[90641]: VG ceph_vg0 finished
Jan 30 23:19:23 np0005603435 lvm[90646]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:19:23 np0005603435 lvm[90646]: VG ceph_vg2 finished
Jan 30 23:19:23 np0005603435 wonderful_hertz[90536]: {}
Jan 30 23:19:23 np0005603435 systemd[1]: libpod-12b8bebbb0c449adb8b8c5674a7dedd503dbb4d63d2ee6ccd918a22d5d3bf598.scope: Deactivated successfully.
Jan 30 23:19:23 np0005603435 podman[90483]: 2026-01-31 04:19:23.813134689 +0000 UTC m=+1.850933582 container died 12b8bebbb0c449adb8b8c5674a7dedd503dbb4d63d2ee6ccd918a22d5d3bf598 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hertz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:23 np0005603435 systemd[1]: libpod-12b8bebbb0c449adb8b8c5674a7dedd503dbb4d63d2ee6ccd918a22d5d3bf598.scope: Consumed 1.173s CPU time.
Jan 30 23:19:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 30 23:19:24 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/1473013565' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 30 23:19:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v53: 5 pgs: 3 active+clean, 1 creating+peering, 1 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1473013565' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 30 23:19:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Jan 30 23:19:24 np0005603435 nice_hypatia[90542]: pool 'cephfs.cephfs.meta' created
Jan 30 23:19:24 np0005603435 systemd[1]: var-lib-containers-storage-overlay-6a84ae06be0b4ec1c2c2ba1e972fc27b6da71139ba0443c4cca8e0af6245220e-merged.mount: Deactivated successfully.
Jan 30 23:19:24 np0005603435 systemd[1]: libpod-7b8601e8e504f2a04cde9204e8120712b83bf4a4333cc3b5e3104036679a696b.scope: Deactivated successfully.
Jan 30 23:19:24 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Jan 30 23:19:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 24 pg[6.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [0] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:19:24 np0005603435 podman[90483]: 2026-01-31 04:19:24.718260637 +0000 UTC m=+2.756059530 container remove 12b8bebbb0c449adb8b8c5674a7dedd503dbb4d63d2ee6ccd918a22d5d3bf598 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:24 np0005603435 podman[90523]: 2026-01-31 04:19:24.779294546 +0000 UTC m=+2.179549498 container died 7b8601e8e504f2a04cde9204e8120712b83bf4a4333cc3b5e3104036679a696b (image=quay.io/ceph/ceph:v20, name=nice_hypatia, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 30 23:19:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:19:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:19:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:24 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b1ccbdd5f900913b2656b85eee92069ced1dda6160a3a89a0c96b2c83e108f77-merged.mount: Deactivated successfully.
Jan 30 23:19:25 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/1473013565' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 30 23:19:25 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:25 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:25 np0005603435 podman[90662]: 2026-01-31 04:19:25.149392722 +0000 UTC m=+0.682009946 container remove 7b8601e8e504f2a04cde9204e8120712b83bf4a4333cc3b5e3104036679a696b (image=quay.io/ceph/ceph:v20, name=nice_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:19:25 np0005603435 systemd[1]: libpod-conmon-7b8601e8e504f2a04cde9204e8120712b83bf4a4333cc3b5e3104036679a696b.scope: Deactivated successfully.
Jan 30 23:19:25 np0005603435 systemd[1]: libpod-conmon-12b8bebbb0c449adb8b8c5674a7dedd503dbb4d63d2ee6ccd918a22d5d3bf598.scope: Deactivated successfully.
Jan 30 23:19:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 30 23:19:25 np0005603435 python3[90776]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Jan 30 23:19:25 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Jan 30 23:19:25 np0005603435 podman[90794]: 2026-01-31 04:19:25.512345615 +0000 UTC m=+0.041338227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:25 np0005603435 podman[90794]: 2026-01-31 04:19:25.635366323 +0000 UTC m=+0.164358905 container create 39fea7224e6b0bac268647cc95c5192a3fa4baec24adcdf42da1ca6abdffc06d (image=quay.io/ceph/ceph:v20, name=sleepy_archimedes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:19:25 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 25 pg[6.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [0] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:19:25 np0005603435 systemd[1]: Started libpod-conmon-39fea7224e6b0bac268647cc95c5192a3fa4baec24adcdf42da1ca6abdffc06d.scope.
Jan 30 23:19:25 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93e7d01c6bd2033fc74e2d61f922c8d266527b59294afcbe29793b90912d2b6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93e7d01c6bd2033fc74e2d61f922c8d266527b59294afcbe29793b90912d2b6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:25 np0005603435 podman[90794]: 2026-01-31 04:19:25.823787133 +0000 UTC m=+0.352779715 container init 39fea7224e6b0bac268647cc95c5192a3fa4baec24adcdf42da1ca6abdffc06d (image=quay.io/ceph/ceph:v20, name=sleepy_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 30 23:19:25 np0005603435 podman[90794]: 2026-01-31 04:19:25.835986855 +0000 UTC m=+0.364979437 container start 39fea7224e6b0bac268647cc95c5192a3fa4baec24adcdf42da1ca6abdffc06d (image=quay.io/ceph/ceph:v20, name=sleepy_archimedes, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 30 23:19:25 np0005603435 podman[90794]: 2026-01-31 04:19:25.913469256 +0000 UTC m=+0.442461848 container attach 39fea7224e6b0bac268647cc95c5192a3fa4baec24adcdf42da1ca6abdffc06d (image=quay.io/ceph/ceph:v20, name=sleepy_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:26 np0005603435 podman[90835]: 2026-01-31 04:19:26.138193745 +0000 UTC m=+0.382910042 container exec 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 30 23:19:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/441466777' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 30 23:19:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v56: 6 pgs: 5 active+clean, 1 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:26 np0005603435 podman[90879]: 2026-01-31 04:19:26.308483467 +0000 UTC m=+0.059159300 container exec_died 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 30 23:19:26 np0005603435 podman[90835]: 2026-01-31 04:19:26.387745676 +0000 UTC m=+0.632462013 container exec_died 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 30 23:19:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 30 23:19:26 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/441466777' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 30 23:19:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/441466777' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 30 23:19:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 30 23:19:26 np0005603435 sleepy_archimedes[90833]: pool 'cephfs.cephfs.data' created
Jan 30 23:19:26 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 30 23:19:26 np0005603435 systemd[1]: libpod-39fea7224e6b0bac268647cc95c5192a3fa4baec24adcdf42da1ca6abdffc06d.scope: Deactivated successfully.
Jan 30 23:19:26 np0005603435 podman[90794]: 2026-01-31 04:19:26.82685449 +0000 UTC m=+1.355847082 container died 39fea7224e6b0bac268647cc95c5192a3fa4baec24adcdf42da1ca6abdffc06d (image=quay.io/ceph/ceph:v20, name=sleepy_archimedes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:27 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d93e7d01c6bd2033fc74e2d61f922c8d266527b59294afcbe29793b90912d2b6-merged.mount: Deactivated successfully.
Jan 30 23:19:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 26 pg[7.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [1] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:19:27 np0005603435 podman[90794]: 2026-01-31 04:19:27.239804177 +0000 UTC m=+1.768796789 container remove 39fea7224e6b0bac268647cc95c5192a3fa4baec24adcdf42da1ca6abdffc06d (image=quay.io/ceph/ceph:v20, name=sleepy_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 30 23:19:27 np0005603435 systemd[1]: libpod-conmon-39fea7224e6b0bac268647cc95c5192a3fa4baec24adcdf42da1ca6abdffc06d.scope: Deactivated successfully.
Jan 30 23:19:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:19:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:19:27 np0005603435 python3[91039]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:27 np0005603435 podman[91051]: 2026-01-31 04:19:27.783490606 +0000 UTC m=+0.115209532 container create fb9240b2063c943f14d7b8169f61ccb19dae5baa7b6ca05c88f45ed31837dfad (image=quay.io/ceph/ceph:v20, name=amazing_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:27 np0005603435 podman[91051]: 2026-01-31 04:19:27.693949936 +0000 UTC m=+0.025668862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 30 23:19:27 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/441466777' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 30 23:19:27 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:27 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:27 np0005603435 systemd[1]: Started libpod-conmon-fb9240b2063c943f14d7b8169f61ccb19dae5baa7b6ca05c88f45ed31837dfad.scope.
Jan 30 23:19:27 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:27 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0450803f6bbf0011704a1be522da9e73fd4b5400b70003c3aa73a61e7a3010ab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:27 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0450803f6bbf0011704a1be522da9e73fd4b5400b70003c3aa73a61e7a3010ab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 30 23:19:27 np0005603435 podman[91051]: 2026-01-31 04:19:27.993731663 +0000 UTC m=+0.325450619 container init fb9240b2063c943f14d7b8169f61ccb19dae5baa7b6ca05c88f45ed31837dfad (image=quay.io/ceph/ceph:v20, name=amazing_brahmagupta, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 30 23:19:27 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 30 23:19:28 np0005603435 podman[91051]: 2026-01-31 04:19:28.002322607 +0000 UTC m=+0.334041493 container start fb9240b2063c943f14d7b8169f61ccb19dae5baa7b6ca05c88f45ed31837dfad (image=quay.io/ceph/ceph:v20, name=amazing_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:28 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 27 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [1] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:19:28 np0005603435 podman[91051]: 2026-01-31 04:19:28.164559226 +0000 UTC m=+0.496278152 container attach fb9240b2063c943f14d7b8169f61ccb19dae5baa7b6ca05c88f45ed31837dfad (image=quay.io/ceph/ceph:v20, name=amazing_brahmagupta, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 30 23:19:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 30 23:19:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1854624934' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 30 23:19:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:19:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:19:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:19:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:19:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:19:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:19:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:19:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:19:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:19:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:19:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:19:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 30 23:19:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1854624934' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 30 23:19:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 30 23:19:29 np0005603435 amazing_brahmagupta[91116]: enabled application 'rbd' on pool 'vms'
Jan 30 23:19:29 np0005603435 podman[91235]: 2026-01-31 04:19:29.103504331 +0000 UTC m=+0.070650496 container create 6156ab776b30040dd3c78b9340255aafc500ce58550d4cf016f2f36aead374b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_chaum, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:29 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/1854624934' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 30 23:19:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:19:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:19:29 np0005603435 systemd[1]: libpod-fb9240b2063c943f14d7b8169f61ccb19dae5baa7b6ca05c88f45ed31837dfad.scope: Deactivated successfully.
Jan 30 23:19:29 np0005603435 podman[91235]: 2026-01-31 04:19:29.067386926 +0000 UTC m=+0.034533111 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:29 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 30 23:19:29 np0005603435 podman[91051]: 2026-01-31 04:19:29.251640327 +0000 UTC m=+1.583359233 container died fb9240b2063c943f14d7b8169f61ccb19dae5baa7b6ca05c88f45ed31837dfad (image=quay.io/ceph/ceph:v20, name=amazing_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 30 23:19:29 np0005603435 systemd[1]: Started libpod-conmon-6156ab776b30040dd3c78b9340255aafc500ce58550d4cf016f2f36aead374b6.scope.
Jan 30 23:19:29 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:29 np0005603435 podman[91235]: 2026-01-31 04:19:29.382641126 +0000 UTC m=+0.349787381 container init 6156ab776b30040dd3c78b9340255aafc500ce58550d4cf016f2f36aead374b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_chaum, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 30 23:19:29 np0005603435 podman[91235]: 2026-01-31 04:19:29.391844933 +0000 UTC m=+0.358991128 container start 6156ab776b30040dd3c78b9340255aafc500ce58550d4cf016f2f36aead374b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_chaum, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:29 np0005603435 happy_chaum[91264]: 167 167
Jan 30 23:19:29 np0005603435 systemd[1]: libpod-6156ab776b30040dd3c78b9340255aafc500ce58550d4cf016f2f36aead374b6.scope: Deactivated successfully.
Jan 30 23:19:29 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0450803f6bbf0011704a1be522da9e73fd4b5400b70003c3aa73a61e7a3010ab-merged.mount: Deactivated successfully.
Jan 30 23:19:29 np0005603435 podman[91250]: 2026-01-31 04:19:29.568051302 +0000 UTC m=+0.428130482 container remove fb9240b2063c943f14d7b8169f61ccb19dae5baa7b6ca05c88f45ed31837dfad (image=quay.io/ceph/ceph:v20, name=amazing_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:29 np0005603435 systemd[1]: libpod-conmon-fb9240b2063c943f14d7b8169f61ccb19dae5baa7b6ca05c88f45ed31837dfad.scope: Deactivated successfully.
Jan 30 23:19:29 np0005603435 podman[91235]: 2026-01-31 04:19:29.662574749 +0000 UTC m=+0.629721004 container attach 6156ab776b30040dd3c78b9340255aafc500ce58550d4cf016f2f36aead374b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 30 23:19:29 np0005603435 podman[91235]: 2026-01-31 04:19:29.663291964 +0000 UTC m=+0.630438169 container died 6156ab776b30040dd3c78b9340255aafc500ce58550d4cf016f2f36aead374b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_chaum, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:29 np0005603435 systemd[1]: var-lib-containers-storage-overlay-e0bde447814052d978168c41486fe535667e3accf0ef724a4e00ba83548819a5-merged.mount: Deactivated successfully.
Jan 30 23:19:29 np0005603435 python3[91308]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:29 np0005603435 podman[91271]: 2026-01-31 04:19:29.911571218 +0000 UTC m=+0.497886837 container remove 6156ab776b30040dd3c78b9340255aafc500ce58550d4cf016f2f36aead374b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_chaum, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:29 np0005603435 systemd[1]: libpod-conmon-6156ab776b30040dd3c78b9340255aafc500ce58550d4cf016f2f36aead374b6.scope: Deactivated successfully.
Jan 30 23:19:30 np0005603435 podman[91311]: 2026-01-31 04:19:29.971044224 +0000 UTC m=+0.038022807 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:30 np0005603435 podman[91311]: 2026-01-31 04:19:30.081027222 +0000 UTC m=+0.148005785 container create 38465b23f93464db3ee0cb51e528e5f078b7d75a33320dbc56f2120b7043b4be (image=quay.io/ceph/ceph:v20, name=busy_sammet, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:19:30 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/1854624934' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 30 23:19:30 np0005603435 systemd[1]: Started libpod-conmon-38465b23f93464db3ee0cb51e528e5f078b7d75a33320dbc56f2120b7043b4be.scope.
Jan 30 23:19:30 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52a79c5de4c3adac9c3607bdda4d114039e59d995739b88755cb6e6f8e4f60c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52a79c5de4c3adac9c3607bdda4d114039e59d995739b88755cb6e6f8e4f60c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:30 np0005603435 podman[91331]: 2026-01-31 04:19:30.249192298 +0000 UTC m=+0.217122657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:30 np0005603435 podman[91331]: 2026-01-31 04:19:30.467455078 +0000 UTC m=+0.435385427 container create 52c54de9d9084e5d5229ab6a9b94f2d1aa55117de778a3facedf14bdbec01fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 30 23:19:30 np0005603435 podman[91311]: 2026-01-31 04:19:30.487655201 +0000 UTC m=+0.554633794 container init 38465b23f93464db3ee0cb51e528e5f078b7d75a33320dbc56f2120b7043b4be (image=quay.io/ceph/ceph:v20, name=busy_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 30 23:19:30 np0005603435 podman[91311]: 2026-01-31 04:19:30.497359099 +0000 UTC m=+0.564337672 container start 38465b23f93464db3ee0cb51e528e5f078b7d75a33320dbc56f2120b7043b4be (image=quay.io/ceph/ceph:v20, name=busy_sammet, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 30 23:19:30 np0005603435 systemd[1]: Started libpod-conmon-52c54de9d9084e5d5229ab6a9b94f2d1aa55117de778a3facedf14bdbec01fbf.scope.
Jan 30 23:19:30 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8422c4e006898ad3387e8891560d482495eaf5395c10fde2e1a2941a490f1a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8422c4e006898ad3387e8891560d482495eaf5395c10fde2e1a2941a490f1a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8422c4e006898ad3387e8891560d482495eaf5395c10fde2e1a2941a490f1a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8422c4e006898ad3387e8891560d482495eaf5395c10fde2e1a2941a490f1a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8422c4e006898ad3387e8891560d482495eaf5395c10fde2e1a2941a490f1a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:30 np0005603435 podman[91311]: 2026-01-31 04:19:30.635459691 +0000 UTC m=+0.702438254 container attach 38465b23f93464db3ee0cb51e528e5f078b7d75a33320dbc56f2120b7043b4be (image=quay.io/ceph/ceph:v20, name=busy_sammet, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:30 np0005603435 podman[91331]: 2026-01-31 04:19:30.769108257 +0000 UTC m=+0.737038646 container init 52c54de9d9084e5d5229ab6a9b94f2d1aa55117de778a3facedf14bdbec01fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_robinson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:30 np0005603435 podman[91331]: 2026-01-31 04:19:30.779557371 +0000 UTC m=+0.747487740 container start 52c54de9d9084e5d5229ab6a9b94f2d1aa55117de778a3facedf14bdbec01fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 30 23:19:30 np0005603435 podman[91331]: 2026-01-31 04:19:30.811454675 +0000 UTC m=+0.779385044 container attach 52c54de9d9084e5d5229ab6a9b94f2d1aa55117de778a3facedf14bdbec01fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_robinson, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 30 23:19:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2679445139' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 30 23:19:31 np0005603435 cool_robinson[91353]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:19:31 np0005603435 cool_robinson[91353]: --> All data devices are unavailable
Jan 30 23:19:31 np0005603435 systemd[1]: libpod-52c54de9d9084e5d5229ab6a9b94f2d1aa55117de778a3facedf14bdbec01fbf.scope: Deactivated successfully.
Jan 30 23:19:31 np0005603435 podman[91331]: 2026-01-31 04:19:31.250862417 +0000 UTC m=+1.218792826 container died 52c54de9d9084e5d5229ab6a9b94f2d1aa55117de778a3facedf14bdbec01fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_robinson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 30 23:19:31 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/2679445139' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 30 23:19:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2679445139' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 30 23:19:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 30 23:19:31 np0005603435 busy_sammet[91345]: enabled application 'rbd' on pool 'volumes'
Jan 30 23:19:31 np0005603435 systemd[1]: libpod-38465b23f93464db3ee0cb51e528e5f078b7d75a33320dbc56f2120b7043b4be.scope: Deactivated successfully.
Jan 30 23:19:31 np0005603435 podman[91311]: 2026-01-31 04:19:31.679451347 +0000 UTC m=+1.746429920 container died 38465b23f93464db3ee0cb51e528e5f078b7d75a33320dbc56f2120b7043b4be (image=quay.io/ceph/ceph:v20, name=busy_sammet, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:31 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b8422c4e006898ad3387e8891560d482495eaf5395c10fde2e1a2941a490f1a6-merged.mount: Deactivated successfully.
Jan 30 23:19:31 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 30 23:19:31 np0005603435 systemd[1]: var-lib-containers-storage-overlay-e52a79c5de4c3adac9c3607bdda4d114039e59d995739b88755cb6e6f8e4f60c-merged.mount: Deactivated successfully.
Jan 30 23:19:32 np0005603435 podman[91311]: 2026-01-31 04:19:32.17624738 +0000 UTC m=+2.243225913 container remove 38465b23f93464db3ee0cb51e528e5f078b7d75a33320dbc56f2120b7043b4be (image=quay.io/ceph/ceph:v20, name=busy_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:19:32 np0005603435 systemd[1]: libpod-conmon-38465b23f93464db3ee0cb51e528e5f078b7d75a33320dbc56f2120b7043b4be.scope: Deactivated successfully.
Jan 30 23:19:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:32 np0005603435 podman[91331]: 2026-01-31 04:19:32.470492639 +0000 UTC m=+2.438422988 container remove 52c54de9d9084e5d5229ab6a9b94f2d1aa55117de778a3facedf14bdbec01fbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:32 np0005603435 python3[91445]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:32 np0005603435 systemd[1]: libpod-conmon-52c54de9d9084e5d5229ab6a9b94f2d1aa55117de778a3facedf14bdbec01fbf.scope: Deactivated successfully.
Jan 30 23:19:32 np0005603435 podman[91452]: 2026-01-31 04:19:32.646509804 +0000 UTC m=+0.108128610 container create d80fa605fb1c9e488b2c54ad3fc57696bf06951b7be6c68ed688a1706b0219a4 (image=quay.io/ceph/ceph:v20, name=great_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 30 23:19:32 np0005603435 podman[91452]: 2026-01-31 04:19:32.562604534 +0000 UTC m=+0.024223320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:32 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/2679445139' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 30 23:19:32 np0005603435 systemd[1]: Started libpod-conmon-d80fa605fb1c9e488b2c54ad3fc57696bf06951b7be6c68ed688a1706b0219a4.scope.
Jan 30 23:19:32 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:32 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e7c70b107d0d4a7470c927a7cd535d6c4a4db7651b029456a41ec63a8fffb2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:32 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e7c70b107d0d4a7470c927a7cd535d6c4a4db7651b029456a41ec63a8fffb2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:32 np0005603435 podman[91522]: 2026-01-31 04:19:32.968318784 +0000 UTC m=+0.111122483 container create 7f833672706a7abcc346e5e61247b45d14e8000c94033081c7de0b889db7250e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_almeida, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:32 np0005603435 podman[91522]: 2026-01-31 04:19:32.881568314 +0000 UTC m=+0.024372113 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:33 np0005603435 systemd[1]: Started libpod-conmon-7f833672706a7abcc346e5e61247b45d14e8000c94033081c7de0b889db7250e.scope.
Jan 30 23:19:33 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:33 np0005603435 podman[91452]: 2026-01-31 04:19:33.205455399 +0000 UTC m=+0.667074235 container init d80fa605fb1c9e488b2c54ad3fc57696bf06951b7be6c68ed688a1706b0219a4 (image=quay.io/ceph/ceph:v20, name=great_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:33 np0005603435 podman[91452]: 2026-01-31 04:19:33.211453328 +0000 UTC m=+0.673072124 container start d80fa605fb1c9e488b2c54ad3fc57696bf06951b7be6c68ed688a1706b0219a4 (image=quay.io/ceph/ceph:v20, name=great_jang, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:33 np0005603435 podman[91522]: 2026-01-31 04:19:33.254264396 +0000 UTC m=+0.397068145 container init 7f833672706a7abcc346e5e61247b45d14e8000c94033081c7de0b889db7250e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 30 23:19:33 np0005603435 podman[91522]: 2026-01-31 04:19:33.259219352 +0000 UTC m=+0.402023091 container start 7f833672706a7abcc346e5e61247b45d14e8000c94033081c7de0b889db7250e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_almeida, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:33 np0005603435 clever_almeida[91541]: 167 167
Jan 30 23:19:33 np0005603435 systemd[1]: libpod-7f833672706a7abcc346e5e61247b45d14e8000c94033081c7de0b889db7250e.scope: Deactivated successfully.
Jan 30 23:19:33 np0005603435 podman[91522]: 2026-01-31 04:19:33.441870859 +0000 UTC m=+0.584674598 container attach 7f833672706a7abcc346e5e61247b45d14e8000c94033081c7de0b889db7250e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:33 np0005603435 podman[91452]: 2026-01-31 04:19:33.477888831 +0000 UTC m=+0.939507637 container attach d80fa605fb1c9e488b2c54ad3fc57696bf06951b7be6c68ed688a1706b0219a4 (image=quay.io/ceph/ceph:v20, name=great_jang, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 30 23:19:33 np0005603435 podman[91522]: 2026-01-31 04:19:33.496440299 +0000 UTC m=+0.639244038 container died 7f833672706a7abcc346e5e61247b45d14e8000c94033081c7de0b889db7250e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_almeida, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 30 23:19:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 30 23:19:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/659818268' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 30 23:19:33 np0005603435 systemd[1]: var-lib-containers-storage-overlay-1e17013568a84a129be9adf8d3c56dd38791e831a87d19a04671e88a2365eb9a-merged.mount: Deactivated successfully.
Jan 30 23:19:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 30 23:19:33 np0005603435 podman[91522]: 2026-01-31 04:19:33.972323464 +0000 UTC m=+1.115127163 container remove 7f833672706a7abcc346e5e61247b45d14e8000c94033081c7de0b889db7250e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_almeida, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/659818268' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 30 23:19:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 30 23:19:33 np0005603435 great_jang[91530]: enabled application 'rbd' on pool 'backups'
Jan 30 23:19:33 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 30 23:19:33 np0005603435 systemd[1]: libpod-d80fa605fb1c9e488b2c54ad3fc57696bf06951b7be6c68ed688a1706b0219a4.scope: Deactivated successfully.
Jan 30 23:19:33 np0005603435 podman[91452]: 2026-01-31 04:19:33.999130819 +0000 UTC m=+1.460749575 container died d80fa605fb1c9e488b2c54ad3fc57696bf06951b7be6c68ed688a1706b0219a4 (image=quay.io/ceph/ceph:v20, name=great_jang, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:19:34 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/659818268' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 30 23:19:34 np0005603435 systemd[1]: var-lib-containers-storage-overlay-69e7c70b107d0d4a7470c927a7cd535d6c4a4db7651b029456a41ec63a8fffb2-merged.mount: Deactivated successfully.
Jan 30 23:19:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:34 np0005603435 podman[91452]: 2026-01-31 04:19:34.316853782 +0000 UTC m=+1.778472538 container remove d80fa605fb1c9e488b2c54ad3fc57696bf06951b7be6c68ed688a1706b0219a4 (image=quay.io/ceph/ceph:v20, name=great_jang, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:34 np0005603435 systemd[1]: libpod-conmon-d80fa605fb1c9e488b2c54ad3fc57696bf06951b7be6c68ed688a1706b0219a4.scope: Deactivated successfully.
Jan 30 23:19:34 np0005603435 podman[91596]: 2026-01-31 04:19:34.391022972 +0000 UTC m=+0.348621837 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:34 np0005603435 podman[91596]: 2026-01-31 04:19:34.541519109 +0000 UTC m=+0.499117974 container create b2d94982db5b04cd5dd867dbe089f8a75410738aded664ed68fb3483e5568eb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:34 np0005603435 python3[91636]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:34 np0005603435 systemd[1]: Started libpod-conmon-b2d94982db5b04cd5dd867dbe089f8a75410738aded664ed68fb3483e5568eb5.scope.
Jan 30 23:19:34 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:34 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45310693b3d21bf9ba5324cabacc7bcc3e22dacb27738770f4d693724c49765d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:34 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45310693b3d21bf9ba5324cabacc7bcc3e22dacb27738770f4d693724c49765d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:34 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45310693b3d21bf9ba5324cabacc7bcc3e22dacb27738770f4d693724c49765d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:34 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45310693b3d21bf9ba5324cabacc7bcc3e22dacb27738770f4d693724c49765d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:34 np0005603435 podman[91637]: 2026-01-31 04:19:34.728451508 +0000 UTC m=+0.134715340 container create 877958c6f7f7e95078e2f0d6f8f587b2ce9bb6199b4efda42acb231cb165fe8d (image=quay.io/ceph/ceph:v20, name=condescending_feynman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 30 23:19:34 np0005603435 podman[91637]: 2026-01-31 04:19:34.650957726 +0000 UTC m=+0.057221548 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:34 np0005603435 systemd[1]: Started libpod-conmon-877958c6f7f7e95078e2f0d6f8f587b2ce9bb6199b4efda42acb231cb165fe8d.scope.
Jan 30 23:19:34 np0005603435 podman[91596]: 2026-01-31 04:19:34.872299592 +0000 UTC m=+0.829898547 container init b2d94982db5b04cd5dd867dbe089f8a75410738aded664ed68fb3483e5568eb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 30 23:19:34 np0005603435 podman[91596]: 2026-01-31 04:19:34.88340054 +0000 UTC m=+0.840999415 container start b2d94982db5b04cd5dd867dbe089f8a75410738aded664ed68fb3483e5568eb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:34 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:34 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c71cdf6e2c7178bc6ae3eb63de4638efb920ef689d0d940f86cd58dd57b110d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:34 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c71cdf6e2c7178bc6ae3eb63de4638efb920ef689d0d940f86cd58dd57b110d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:34 np0005603435 podman[91596]: 2026-01-31 04:19:34.996821381 +0000 UTC m=+0.954420286 container attach b2d94982db5b04cd5dd867dbe089f8a75410738aded664ed68fb3483e5568eb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_khayyam, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:35 np0005603435 podman[91637]: 2026-01-31 04:19:35.130009758 +0000 UTC m=+0.536273650 container init 877958c6f7f7e95078e2f0d6f8f587b2ce9bb6199b4efda42acb231cb165fe8d (image=quay.io/ceph/ceph:v20, name=condescending_feynman, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:35 np0005603435 podman[91637]: 2026-01-31 04:19:35.138495659 +0000 UTC m=+0.544759451 container start 877958c6f7f7e95078e2f0d6f8f587b2ce9bb6199b4efda42acb231cb165fe8d (image=quay.io/ceph/ceph:v20, name=condescending_feynman, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]: {
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:    "0": [
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:        {
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "devices": [
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "/dev/loop3"
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            ],
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "lv_name": "ceph_lv0",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "lv_size": "21470642176",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "name": "ceph_lv0",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "tags": {
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.cluster_name": "ceph",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.crush_device_class": "",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.encrypted": "0",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.objectstore": "bluestore",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.osd_id": "0",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.type": "block",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.vdo": "0",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.with_tpm": "0"
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            },
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "type": "block",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "vg_name": "ceph_vg0"
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:        }
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:    ],
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:    "1": [
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:        {
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "devices": [
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "/dev/loop4"
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            ],
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "lv_name": "ceph_lv1",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "lv_size": "21470642176",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "name": "ceph_lv1",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "tags": {
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.cluster_name": "ceph",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.crush_device_class": "",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.encrypted": "0",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.objectstore": "bluestore",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.osd_id": "1",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.type": "block",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.vdo": "0",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.with_tpm": "0"
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            },
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "type": "block",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "vg_name": "ceph_vg1"
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:        }
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:    ],
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:    "2": [
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:        {
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "devices": [
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "/dev/loop5"
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            ],
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "lv_name": "ceph_lv2",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "lv_size": "21470642176",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "name": "ceph_lv2",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "tags": {
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.cluster_name": "ceph",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.crush_device_class": "",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.encrypted": "0",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.objectstore": "bluestore",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.osd_id": "2",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.type": "block",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.vdo": "0",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:                "ceph.with_tpm": "0"
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            },
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "type": "block",
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:            "vg_name": "ceph_vg2"
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:        }
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]:    ]
Jan 30 23:19:35 np0005603435 goofy_khayyam[91653]: }
Jan 30 23:19:35 np0005603435 systemd[1]: libpod-b2d94982db5b04cd5dd867dbe089f8a75410738aded664ed68fb3483e5568eb5.scope: Deactivated successfully.
Jan 30 23:19:35 np0005603435 podman[91637]: 2026-01-31 04:19:35.225626288 +0000 UTC m=+0.631890180 container attach 877958c6f7f7e95078e2f0d6f8f587b2ce9bb6199b4efda42acb231cb165fe8d (image=quay.io/ceph/ceph:v20, name=condescending_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 30 23:19:35 np0005603435 podman[91596]: 2026-01-31 04:19:35.24390936 +0000 UTC m=+1.201508255 container died b2d94982db5b04cd5dd867dbe089f8a75410738aded664ed68fb3483e5568eb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:19:35 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/659818268' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 30 23:19:35 np0005603435 systemd[1]: var-lib-containers-storage-overlay-45310693b3d21bf9ba5324cabacc7bcc3e22dacb27738770f4d693724c49765d-merged.mount: Deactivated successfully.
Jan 30 23:19:35 np0005603435 podman[91668]: 2026-01-31 04:19:35.571042935 +0000 UTC m=+0.367720616 container remove b2d94982db5b04cd5dd867dbe089f8a75410738aded664ed68fb3483e5568eb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_khayyam, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 30 23:19:35 np0005603435 systemd[1]: libpod-conmon-b2d94982db5b04cd5dd867dbe089f8a75410738aded664ed68fb3483e5568eb5.scope: Deactivated successfully.
Jan 30 23:19:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 30 23:19:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1501373139' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 30 23:19:35 np0005603435 systemd[1]: libpod-conmon-7f833672706a7abcc346e5e61247b45d14e8000c94033081c7de0b889db7250e.scope: Deactivated successfully.
Jan 30 23:19:36 np0005603435 podman[91766]: 2026-01-31 04:19:35.997114101 +0000 UTC m=+0.022348700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:36 np0005603435 podman[91766]: 2026-01-31 04:19:36.194090805 +0000 UTC m=+0.219325394 container create 1775c6368f64f5d641ea76263dcc4cac88c03f21190f5ddca75d8dee1cb9a46b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_chandrasekhar, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:19:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 30 23:19:36 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/1501373139' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 30 23:19:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:36 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1501373139' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 30 23:19:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 30 23:19:36 np0005603435 condescending_feynman[91658]: enabled application 'rbd' on pool 'images'
Jan 30 23:19:36 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 30 23:19:36 np0005603435 systemd[1]: Started libpod-conmon-1775c6368f64f5d641ea76263dcc4cac88c03f21190f5ddca75d8dee1cb9a46b.scope.
Jan 30 23:19:36 np0005603435 systemd[1]: libpod-877958c6f7f7e95078e2f0d6f8f587b2ce9bb6199b4efda42acb231cb165fe8d.scope: Deactivated successfully.
Jan 30 23:19:36 np0005603435 podman[91637]: 2026-01-31 04:19:36.300254382 +0000 UTC m=+1.706518164 container died 877958c6f7f7e95078e2f0d6f8f587b2ce9bb6199b4efda42acb231cb165fe8d (image=quay.io/ceph/ceph:v20, name=condescending_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:19:36 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:36 np0005603435 systemd[1]: var-lib-containers-storage-overlay-8c71cdf6e2c7178bc6ae3eb63de4638efb920ef689d0d940f86cd58dd57b110d-merged.mount: Deactivated successfully.
Jan 30 23:19:36 np0005603435 podman[91637]: 2026-01-31 04:19:36.356527658 +0000 UTC m=+1.762791490 container remove 877958c6f7f7e95078e2f0d6f8f587b2ce9bb6199b4efda42acb231cb165fe8d (image=quay.io/ceph/ceph:v20, name=condescending_feynman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:36 np0005603435 systemd[1]: libpod-conmon-877958c6f7f7e95078e2f0d6f8f587b2ce9bb6199b4efda42acb231cb165fe8d.scope: Deactivated successfully.
Jan 30 23:19:36 np0005603435 podman[91766]: 2026-01-31 04:19:36.375484135 +0000 UTC m=+0.400718794 container init 1775c6368f64f5d641ea76263dcc4cac88c03f21190f5ddca75d8dee1cb9a46b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 30 23:19:36 np0005603435 podman[91766]: 2026-01-31 04:19:36.3813017 +0000 UTC m=+0.406536309 container start 1775c6368f64f5d641ea76263dcc4cac88c03f21190f5ddca75d8dee1cb9a46b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_chandrasekhar, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 30 23:19:36 np0005603435 musing_chandrasekhar[91784]: 167 167
Jan 30 23:19:36 np0005603435 podman[91766]: 2026-01-31 04:19:36.384459477 +0000 UTC m=+0.409694096 container attach 1775c6368f64f5d641ea76263dcc4cac88c03f21190f5ddca75d8dee1cb9a46b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_chandrasekhar, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:36 np0005603435 systemd[1]: libpod-1775c6368f64f5d641ea76263dcc4cac88c03f21190f5ddca75d8dee1cb9a46b.scope: Deactivated successfully.
Jan 30 23:19:36 np0005603435 conmon[91784]: conmon 1775c6368f64f5d641ea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1775c6368f64f5d641ea76263dcc4cac88c03f21190f5ddca75d8dee1cb9a46b.scope/container/memory.events
Jan 30 23:19:36 np0005603435 podman[91766]: 2026-01-31 04:19:36.3855003 +0000 UTC m=+0.410734899 container died 1775c6368f64f5d641ea76263dcc4cac88c03f21190f5ddca75d8dee1cb9a46b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_chandrasekhar, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:36 np0005603435 systemd[1]: var-lib-containers-storage-overlay-fe4df078794d6961ef2c1a09cbb55428209a0da851079417af1bac76ac876bb4-merged.mount: Deactivated successfully.
Jan 30 23:19:36 np0005603435 podman[91766]: 2026-01-31 04:19:36.413448649 +0000 UTC m=+0.438683248 container remove 1775c6368f64f5d641ea76263dcc4cac88c03f21190f5ddca75d8dee1cb9a46b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_chandrasekhar, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:36 np0005603435 systemd[1]: libpod-conmon-1775c6368f64f5d641ea76263dcc4cac88c03f21190f5ddca75d8dee1cb9a46b.scope: Deactivated successfully.
Jan 30 23:19:36 np0005603435 podman[91846]: 2026-01-31 04:19:36.523191132 +0000 UTC m=+0.021051902 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:36 np0005603435 python3[91843]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:36 np0005603435 podman[91846]: 2026-01-31 04:19:36.66301157 +0000 UTC m=+0.160872290 container create 8085dd928b07a7399a4cf7244a3e2da916848ad95e09a93ff8081d121940886f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:36 np0005603435 systemd[1]: Started libpod-conmon-8085dd928b07a7399a4cf7244a3e2da916848ad95e09a93ff8081d121940886f.scope.
Jan 30 23:19:36 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd19c908144e72c7e226036da10415c3de4fa12a92db4745aadfa2b37d935052/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd19c908144e72c7e226036da10415c3de4fa12a92db4745aadfa2b37d935052/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd19c908144e72c7e226036da10415c3de4fa12a92db4745aadfa2b37d935052/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd19c908144e72c7e226036da10415c3de4fa12a92db4745aadfa2b37d935052/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:36 np0005603435 podman[91860]: 2026-01-31 04:19:36.78892015 +0000 UTC m=+0.130495269 container create c188ef1902a1dd71d0a1eab85806d15265e9afde621c656e9ae587d558fad8ed (image=quay.io/ceph/ceph:v20, name=modest_margulis, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 30 23:19:36 np0005603435 podman[91860]: 2026-01-31 04:19:36.726832379 +0000 UTC m=+0.068407528 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:36 np0005603435 podman[91846]: 2026-01-31 04:19:36.831634886 +0000 UTC m=+0.329495606 container init 8085dd928b07a7399a4cf7244a3e2da916848ad95e09a93ff8081d121940886f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_ramanujan, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 30 23:19:36 np0005603435 podman[91846]: 2026-01-31 04:19:36.840510226 +0000 UTC m=+0.338370946 container start 8085dd928b07a7399a4cf7244a3e2da916848ad95e09a93ff8081d121940886f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_ramanujan, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Jan 30 23:19:36 np0005603435 podman[91846]: 2026-01-31 04:19:36.867290171 +0000 UTC m=+0.365150941 container attach 8085dd928b07a7399a4cf7244a3e2da916848ad95e09a93ff8081d121940886f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_ramanujan, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:19:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:19:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:19:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:19:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:19:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:19:36 np0005603435 systemd[1]: Started libpod-conmon-c188ef1902a1dd71d0a1eab85806d15265e9afde621c656e9ae587d558fad8ed.scope.
Jan 30 23:19:36 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623ce4735efc919191977743b3b7a7efc02bd299d8bc34d84a2f29becc6a8173/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623ce4735efc919191977743b3b7a7efc02bd299d8bc34d84a2f29becc6a8173/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:37 np0005603435 podman[91860]: 2026-01-31 04:19:37.033094256 +0000 UTC m=+0.374669375 container init c188ef1902a1dd71d0a1eab85806d15265e9afde621c656e9ae587d558fad8ed (image=quay.io/ceph/ceph:v20, name=modest_margulis, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:37 np0005603435 podman[91860]: 2026-01-31 04:19:37.039454693 +0000 UTC m=+0.381029812 container start c188ef1902a1dd71d0a1eab85806d15265e9afde621c656e9ae587d558fad8ed (image=quay.io/ceph/ceph:v20, name=modest_margulis, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 30 23:19:37 np0005603435 podman[91860]: 2026-01-31 04:19:37.067812401 +0000 UTC m=+0.409387490 container attach c188ef1902a1dd71d0a1eab85806d15265e9afde621c656e9ae587d558fad8ed (image=quay.io/ceph/ceph:v20, name=modest_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 30 23:19:37 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/1501373139' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 30 23:19:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 30 23:19:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2756738876' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 30 23:19:37 np0005603435 lvm[91979]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:19:37 np0005603435 lvm[91979]: VG ceph_vg1 finished
Jan 30 23:19:37 np0005603435 lvm[91978]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:19:37 np0005603435 lvm[91978]: VG ceph_vg0 finished
Jan 30 23:19:37 np0005603435 lvm[91981]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:19:37 np0005603435 lvm[91981]: VG ceph_vg2 finished
Jan 30 23:19:37 np0005603435 festive_ramanujan[91874]: {}
Jan 30 23:19:37 np0005603435 systemd[1]: libpod-8085dd928b07a7399a4cf7244a3e2da916848ad95e09a93ff8081d121940886f.scope: Deactivated successfully.
Jan 30 23:19:37 np0005603435 podman[91846]: 2026-01-31 04:19:37.931879049 +0000 UTC m=+1.429739759 container died 8085dd928b07a7399a4cf7244a3e2da916848ad95e09a93ff8081d121940886f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_ramanujan, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:37 np0005603435 systemd[1]: libpod-8085dd928b07a7399a4cf7244a3e2da916848ad95e09a93ff8081d121940886f.scope: Consumed 1.476s CPU time.
Jan 30 23:19:38 np0005603435 systemd[1]: var-lib-containers-storage-overlay-dd19c908144e72c7e226036da10415c3de4fa12a92db4745aadfa2b37d935052-merged.mount: Deactivated successfully.
Jan 30 23:19:38 np0005603435 podman[91846]: 2026-01-31 04:19:38.266797741 +0000 UTC m=+1.764658461 container remove 8085dd928b07a7399a4cf7244a3e2da916848ad95e09a93ff8081d121940886f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_ramanujan, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 30 23:19:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:38 np0005603435 systemd[1]: libpod-conmon-8085dd928b07a7399a4cf7244a3e2da916848ad95e09a93ff8081d121940886f.scope: Deactivated successfully.
Jan 30 23:19:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 30 23:19:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:19:38 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/2756738876' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 30 23:19:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2756738876' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 30 23:19:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 30 23:19:38 np0005603435 modest_margulis[91881]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 30 23:19:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:38 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 30 23:19:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:19:38 np0005603435 systemd[1]: libpod-c188ef1902a1dd71d0a1eab85806d15265e9afde621c656e9ae587d558fad8ed.scope: Deactivated successfully.
Jan 30 23:19:38 np0005603435 podman[91860]: 2026-01-31 04:19:38.550253 +0000 UTC m=+1.891828079 container died c188ef1902a1dd71d0a1eab85806d15265e9afde621c656e9ae587d558fad8ed (image=quay.io/ceph/ceph:v20, name=modest_margulis, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:38 np0005603435 systemd[1]: var-lib-containers-storage-overlay-623ce4735efc919191977743b3b7a7efc02bd299d8bc34d84a2f29becc6a8173-merged.mount: Deactivated successfully.
Jan 30 23:19:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:19:39 np0005603435 podman[91860]: 2026-01-31 04:19:39.106128098 +0000 UTC m=+2.447703217 container remove c188ef1902a1dd71d0a1eab85806d15265e9afde621c656e9ae587d558fad8ed (image=quay.io/ceph/ceph:v20, name=modest_margulis, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:39 np0005603435 systemd[1]: libpod-conmon-c188ef1902a1dd71d0a1eab85806d15265e9afde621c656e9ae587d558fad8ed.scope: Deactivated successfully.
Jan 30 23:19:39 np0005603435 python3[92063]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:39 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/2756738876' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 30 23:19:39 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:39 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:39 np0005603435 podman[92064]: 2026-01-31 04:19:39.479934504 +0000 UTC m=+0.092661108 container create 6d06e452a5c88e82ca69c752bddc1cec5b90e65da3fee02d0fd56ce8fdd38640 (image=quay.io/ceph/ceph:v20, name=objective_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 30 23:19:39 np0005603435 podman[92064]: 2026-01-31 04:19:39.417880233 +0000 UTC m=+0.030606817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:39 np0005603435 systemd[1]: Started libpod-conmon-6d06e452a5c88e82ca69c752bddc1cec5b90e65da3fee02d0fd56ce8fdd38640.scope.
Jan 30 23:19:39 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:39 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794e3d8a8decd10e70b668b4c34be275466a8f3580abbb5a5388acedc5d66778/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:39 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794e3d8a8decd10e70b668b4c34be275466a8f3580abbb5a5388acedc5d66778/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:39 np0005603435 podman[92064]: 2026-01-31 04:19:39.622650654 +0000 UTC m=+0.235377268 container init 6d06e452a5c88e82ca69c752bddc1cec5b90e65da3fee02d0fd56ce8fdd38640 (image=quay.io/ceph/ceph:v20, name=objective_khorana, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 30 23:19:39 np0005603435 podman[92064]: 2026-01-31 04:19:39.632106877 +0000 UTC m=+0.244833491 container start 6d06e452a5c88e82ca69c752bddc1cec5b90e65da3fee02d0fd56ce8fdd38640 (image=quay.io/ceph/ceph:v20, name=objective_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:19:39 np0005603435 podman[92064]: 2026-01-31 04:19:39.727397931 +0000 UTC m=+0.340124515 container attach 6d06e452a5c88e82ca69c752bddc1cec5b90e65da3fee02d0fd56ce8fdd38640 (image=quay.io/ceph/ceph:v20, name=objective_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 30 23:19:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1009508441' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 30 23:19:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 30 23:19:40 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/1009508441' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 30 23:19:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1009508441' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 30 23:19:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 30 23:19:40 np0005603435 objective_khorana[92079]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 30 23:19:40 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 30 23:19:40 np0005603435 systemd[1]: libpod-6d06e452a5c88e82ca69c752bddc1cec5b90e65da3fee02d0fd56ce8fdd38640.scope: Deactivated successfully.
Jan 30 23:19:40 np0005603435 podman[92064]: 2026-01-31 04:19:40.492880795 +0000 UTC m=+1.105607449 container died 6d06e452a5c88e82ca69c752bddc1cec5b90e65da3fee02d0fd56ce8fdd38640 (image=quay.io/ceph/ceph:v20, name=objective_khorana, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 30 23:19:40 np0005603435 systemd[1]: var-lib-containers-storage-overlay-794e3d8a8decd10e70b668b4c34be275466a8f3580abbb5a5388acedc5d66778-merged.mount: Deactivated successfully.
Jan 30 23:19:40 np0005603435 podman[92064]: 2026-01-31 04:19:40.542849926 +0000 UTC m=+1.155576500 container remove 6d06e452a5c88e82ca69c752bddc1cec5b90e65da3fee02d0fd56ce8fdd38640 (image=quay.io/ceph/ceph:v20, name=objective_khorana, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:40 np0005603435 systemd[1]: libpod-conmon-6d06e452a5c88e82ca69c752bddc1cec5b90e65da3fee02d0fd56ce8fdd38640.scope: Deactivated successfully.
Jan 30 23:19:41 np0005603435 python3[92190]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 23:19:41 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/1009508441' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 30 23:19:41 np0005603435 python3[92261]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769833181.113723-36827-166136428860780/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:19:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:42 np0005603435 python3[92363]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 23:19:42 np0005603435 python3[92438]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769833182.1254966-36841-84352156658501/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=2119c52adc018826de18dc472e8759a2341438da backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:19:43 np0005603435 python3[92488]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:43 np0005603435 podman[92489]: 2026-01-31 04:19:43.246751947 +0000 UTC m=+0.047403488 container create ec97442178b8a9dd3b95bb08ea8ced7bb63864e3c9d2ecc515f825801104027b (image=quay.io/ceph/ceph:v20, name=mystifying_kapitsa, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:43 np0005603435 systemd[1]: Started libpod-conmon-ec97442178b8a9dd3b95bb08ea8ced7bb63864e3c9d2ecc515f825801104027b.scope.
Jan 30 23:19:43 np0005603435 podman[92489]: 2026-01-31 04:19:43.225568072 +0000 UTC m=+0.026219693 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:43 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61a5826727f22497b9c5fbe249f3c4eb78204eb9f2dfc85eb0f5f3f69312e232/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61a5826727f22497b9c5fbe249f3c4eb78204eb9f2dfc85eb0f5f3f69312e232/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61a5826727f22497b9c5fbe249f3c4eb78204eb9f2dfc85eb0f5f3f69312e232/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:43 np0005603435 podman[92489]: 2026-01-31 04:19:43.34438115 +0000 UTC m=+0.145032791 container init ec97442178b8a9dd3b95bb08ea8ced7bb63864e3c9d2ecc515f825801104027b (image=quay.io/ceph/ceph:v20, name=mystifying_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 30 23:19:43 np0005603435 podman[92489]: 2026-01-31 04:19:43.350983912 +0000 UTC m=+0.151635703 container start ec97442178b8a9dd3b95bb08ea8ced7bb63864e3c9d2ecc515f825801104027b (image=quay.io/ceph/ceph:v20, name=mystifying_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 30 23:19:43 np0005603435 podman[92489]: 2026-01-31 04:19:43.355344815 +0000 UTC m=+0.155996436 container attach ec97442178b8a9dd3b95bb08ea8ced7bb63864e3c9d2ecc515f825801104027b (image=quay.io/ceph/ceph:v20, name=mystifying_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 30 23:19:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1521723628' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 30 23:19:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1521723628' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 30 23:19:43 np0005603435 mystifying_kapitsa[92504]: 
Jan 30 23:19:43 np0005603435 mystifying_kapitsa[92504]: [global]
Jan 30 23:19:43 np0005603435 mystifying_kapitsa[92504]: #011fsid = 95d2f419-0dd0-56f2-a094-353f8c7597ed
Jan 30 23:19:43 np0005603435 mystifying_kapitsa[92504]: #011mon_host = 192.168.122.100
Jan 30 23:19:43 np0005603435 mystifying_kapitsa[92504]: #011rgw_keystone_api_version = 3
Jan 30 23:19:43 np0005603435 systemd[1]: libpod-ec97442178b8a9dd3b95bb08ea8ced7bb63864e3c9d2ecc515f825801104027b.scope: Deactivated successfully.
Jan 30 23:19:43 np0005603435 podman[92489]: 2026-01-31 04:19:43.854174062 +0000 UTC m=+0.654825633 container died ec97442178b8a9dd3b95bb08ea8ced7bb63864e3c9d2ecc515f825801104027b (image=quay.io/ceph/ceph:v20, name=mystifying_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Jan 30 23:19:43 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/1521723628' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 30 23:19:43 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/1521723628' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 30 23:19:43 np0005603435 systemd[1]: var-lib-containers-storage-overlay-61a5826727f22497b9c5fbe249f3c4eb78204eb9f2dfc85eb0f5f3f69312e232-merged.mount: Deactivated successfully.
Jan 30 23:19:43 np0005603435 podman[92489]: 2026-01-31 04:19:43.895273503 +0000 UTC m=+0.695925034 container remove ec97442178b8a9dd3b95bb08ea8ced7bb63864e3c9d2ecc515f825801104027b (image=quay.io/ceph/ceph:v20, name=mystifying_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 30 23:19:43 np0005603435 systemd[1]: libpod-conmon-ec97442178b8a9dd3b95bb08ea8ced7bb63864e3c9d2ecc515f825801104027b.scope: Deactivated successfully.
Jan 30 23:19:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:19:44 np0005603435 python3[92614]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:44 np0005603435 podman[92631]: 2026-01-31 04:19:44.317353504 +0000 UTC m=+0.071660678 container create 49d307b754a864bf29f24b9951ed4ed6286714134bafe7f41ba4f6779e0c916a (image=quay.io/ceph/ceph:v20, name=bold_yalow, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:44 np0005603435 systemd[1]: Started libpod-conmon-49d307b754a864bf29f24b9951ed4ed6286714134bafe7f41ba4f6779e0c916a.scope.
Jan 30 23:19:44 np0005603435 podman[92631]: 2026-01-31 04:19:44.288575427 +0000 UTC m=+0.042882651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:44 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:44 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c3e65d3fa8ff72700c336435acefc95f1c26874180cacec023b452e6b7572c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:44 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c3e65d3fa8ff72700c336435acefc95f1c26874180cacec023b452e6b7572c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:44 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c3e65d3fa8ff72700c336435acefc95f1c26874180cacec023b452e6b7572c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:44 np0005603435 podman[92631]: 2026-01-31 04:19:44.426219988 +0000 UTC m=+0.180527212 container init 49d307b754a864bf29f24b9951ed4ed6286714134bafe7f41ba4f6779e0c916a (image=quay.io/ceph/ceph:v20, name=bold_yalow, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 30 23:19:44 np0005603435 podman[92631]: 2026-01-31 04:19:44.434383813 +0000 UTC m=+0.188690977 container start 49d307b754a864bf29f24b9951ed4ed6286714134bafe7f41ba4f6779e0c916a (image=quay.io/ceph/ceph:v20, name=bold_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:19:44 np0005603435 podman[92631]: 2026-01-31 04:19:44.438765997 +0000 UTC m=+0.193073221 container attach 49d307b754a864bf29f24b9951ed4ed6286714134bafe7f41ba4f6779e0c916a (image=quay.io/ceph/ceph:v20, name=bold_yalow, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:44 np0005603435 podman[92678]: 2026-01-31 04:19:44.482323531 +0000 UTC m=+0.089805366 container exec 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 30 23:19:44 np0005603435 podman[92678]: 2026-01-31 04:19:44.629989658 +0000 UTC m=+0.237471423 container exec_died 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2017818150' entity='client.admin' 
Jan 30 23:19:45 np0005603435 bold_yalow[92675]: set ssl_option
Jan 30 23:19:45 np0005603435 systemd[1]: libpod-49d307b754a864bf29f24b9951ed4ed6286714134bafe7f41ba4f6779e0c916a.scope: Deactivated successfully.
Jan 30 23:19:45 np0005603435 podman[92631]: 2026-01-31 04:19:45.081299736 +0000 UTC m=+0.835606910 container died 49d307b754a864bf29f24b9951ed4ed6286714134bafe7f41ba4f6779e0c916a (image=quay.io/ceph/ceph:v20, name=bold_yalow, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 30 23:19:45 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d5c3e65d3fa8ff72700c336435acefc95f1c26874180cacec023b452e6b7572c-merged.mount: Deactivated successfully.
Jan 30 23:19:45 np0005603435 podman[92631]: 2026-01-31 04:19:45.131101253 +0000 UTC m=+0.885408427 container remove 49d307b754a864bf29f24b9951ed4ed6286714134bafe7f41ba4f6779e0c916a (image=quay.io/ceph/ceph:v20, name=bold_yalow, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:45 np0005603435 systemd[1]: libpod-conmon-49d307b754a864bf29f24b9951ed4ed6286714134bafe7f41ba4f6779e0c916a.scope: Deactivated successfully.
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:45 np0005603435 python3[92878]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:19:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:19:45 np0005603435 podman[92886]: 2026-01-31 04:19:45.55299534 +0000 UTC m=+0.064630097 container create e6ab2ded6679c3be69f9b5120d44de43e3ca4c5497b9349ccf5e47347efe862b (image=quay.io/ceph/ceph:v20, name=hardcore_ishizaka, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 30 23:19:45 np0005603435 systemd[1]: Started libpod-conmon-e6ab2ded6679c3be69f9b5120d44de43e3ca4c5497b9349ccf5e47347efe862b.scope.
Jan 30 23:19:45 np0005603435 podman[92886]: 2026-01-31 04:19:45.527401352 +0000 UTC m=+0.039036179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:45 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:45 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e48a921f076f4d17aa190bf442a57ad33e5803229ce826ed3ba074cd061f6f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:45 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e48a921f076f4d17aa190bf442a57ad33e5803229ce826ed3ba074cd061f6f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:45 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e48a921f076f4d17aa190bf442a57ad33e5803229ce826ed3ba074cd061f6f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:45 np0005603435 podman[92886]: 2026-01-31 04:19:45.679424821 +0000 UTC m=+0.191059598 container init e6ab2ded6679c3be69f9b5120d44de43e3ca4c5497b9349ccf5e47347efe862b (image=quay.io/ceph/ceph:v20, name=hardcore_ishizaka, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:45 np0005603435 podman[92886]: 2026-01-31 04:19:45.690493839 +0000 UTC m=+0.202128606 container start e6ab2ded6679c3be69f9b5120d44de43e3ca4c5497b9349ccf5e47347efe862b (image=quay.io/ceph/ceph:v20, name=hardcore_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 30 23:19:45 np0005603435 podman[92886]: 2026-01-31 04:19:45.695917755 +0000 UTC m=+0.207552522 container attach e6ab2ded6679c3be69f9b5120d44de43e3ca4c5497b9349ccf5e47347efe862b (image=quay.io/ceph/ceph:v20, name=hardcore_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 30 23:19:45 np0005603435 podman[92986]: 2026-01-31 04:19:45.981857636 +0000 UTC m=+0.049332509 container create 4a26669196e4de6822bda01c443ede3e052d7ca93cd0a1bfa7fad8cba332d872 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 30 23:19:46 np0005603435 systemd[1]: Started libpod-conmon-4a26669196e4de6822bda01c443ede3e052d7ca93cd0a1bfa7fad8cba332d872.scope.
Jan 30 23:19:46 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:46 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/2017818150' entity='client.admin' 
Jan 30 23:19:46 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:46 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:46 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:19:46 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:46 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:19:46 np0005603435 podman[92986]: 2026-01-31 04:19:45.957381221 +0000 UTC m=+0.024856114 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:46 np0005603435 podman[92986]: 2026-01-31 04:19:46.058745094 +0000 UTC m=+0.126219997 container init 4a26669196e4de6822bda01c443ede3e052d7ca93cd0a1bfa7fad8cba332d872 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_payne, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Jan 30 23:19:46 np0005603435 podman[92986]: 2026-01-31 04:19:46.066554762 +0000 UTC m=+0.134029665 container start 4a26669196e4de6822bda01c443ede3e052d7ca93cd0a1bfa7fad8cba332d872 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:46 np0005603435 epic_payne[93003]: 167 167
Jan 30 23:19:46 np0005603435 systemd[1]: libpod-4a26669196e4de6822bda01c443ede3e052d7ca93cd0a1bfa7fad8cba332d872.scope: Deactivated successfully.
Jan 30 23:19:46 np0005603435 conmon[93003]: conmon 4a26669196e4de6822bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4a26669196e4de6822bda01c443ede3e052d7ca93cd0a1bfa7fad8cba332d872.scope/container/memory.events
Jan 30 23:19:46 np0005603435 podman[92986]: 2026-01-31 04:19:46.072850367 +0000 UTC m=+0.140325290 container attach 4a26669196e4de6822bda01c443ede3e052d7ca93cd0a1bfa7fad8cba332d872 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:46 np0005603435 podman[92986]: 2026-01-31 04:19:46.075446372 +0000 UTC m=+0.142921325 container died 4a26669196e4de6822bda01c443ede3e052d7ca93cd0a1bfa7fad8cba332d872 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_payne, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:46 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 30 23:19:46 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Jan 30 23:19:46 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 30 23:19:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 30 23:19:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:46 np0005603435 systemd[1]: var-lib-containers-storage-overlay-c2e5ade1ec4fc06126bae2d48f6a5952355faf62be76848bc4ec980e8ac56093-merged.mount: Deactivated successfully.
Jan 30 23:19:46 np0005603435 hardcore_ishizaka[92939]: Scheduled rgw.rgw update...
Jan 30 23:19:46 np0005603435 podman[92986]: 2026-01-31 04:19:46.165870032 +0000 UTC m=+0.233344895 container remove 4a26669196e4de6822bda01c443ede3e052d7ca93cd0a1bfa7fad8cba332d872 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_payne, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 30 23:19:46 np0005603435 systemd[1]: libpod-e6ab2ded6679c3be69f9b5120d44de43e3ca4c5497b9349ccf5e47347efe862b.scope: Deactivated successfully.
Jan 30 23:19:46 np0005603435 podman[92886]: 2026-01-31 04:19:46.17138571 +0000 UTC m=+0.683020447 container died e6ab2ded6679c3be69f9b5120d44de43e3ca4c5497b9349ccf5e47347efe862b (image=quay.io/ceph/ceph:v20, name=hardcore_ishizaka, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:46 np0005603435 systemd[1]: libpod-conmon-4a26669196e4de6822bda01c443ede3e052d7ca93cd0a1bfa7fad8cba332d872.scope: Deactivated successfully.
Jan 30 23:19:46 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d5e48a921f076f4d17aa190bf442a57ad33e5803229ce826ed3ba074cd061f6f-merged.mount: Deactivated successfully.
Jan 30 23:19:46 np0005603435 podman[92886]: 2026-01-31 04:19:46.214609117 +0000 UTC m=+0.726243854 container remove e6ab2ded6679c3be69f9b5120d44de43e3ca4c5497b9349ccf5e47347efe862b (image=quay.io/ceph/ceph:v20, name=hardcore_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:46 np0005603435 systemd[1]: libpod-conmon-e6ab2ded6679c3be69f9b5120d44de43e3ca4c5497b9349ccf5e47347efe862b.scope: Deactivated successfully.
Jan 30 23:19:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:46 np0005603435 podman[93042]: 2026-01-31 04:19:46.344924381 +0000 UTC m=+0.049784038 container create 7ddbcf7a31c627f73c6718e19a2f087bc11563fe36687e3b325f5d1c034169c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Jan 30 23:19:46 np0005603435 systemd[1]: Started libpod-conmon-7ddbcf7a31c627f73c6718e19a2f087bc11563fe36687e3b325f5d1c034169c2.scope.
Jan 30 23:19:46 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:46 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edb111e67acd468d790491af6815671114be016b93f367b82bbccda6ee183e17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:46 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edb111e67acd468d790491af6815671114be016b93f367b82bbccda6ee183e17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:46 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edb111e67acd468d790491af6815671114be016b93f367b82bbccda6ee183e17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:46 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edb111e67acd468d790491af6815671114be016b93f367b82bbccda6ee183e17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:46 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edb111e67acd468d790491af6815671114be016b93f367b82bbccda6ee183e17/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:46 np0005603435 podman[93042]: 2026-01-31 04:19:46.321143311 +0000 UTC m=+0.026002978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:46 np0005603435 podman[93042]: 2026-01-31 04:19:46.433436439 +0000 UTC m=+0.138296066 container init 7ddbcf7a31c627f73c6718e19a2f087bc11563fe36687e3b325f5d1c034169c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_booth, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:46 np0005603435 podman[93042]: 2026-01-31 04:19:46.444925425 +0000 UTC m=+0.149785052 container start 7ddbcf7a31c627f73c6718e19a2f087bc11563fe36687e3b325f5d1c034169c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_booth, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 30 23:19:46 np0005603435 podman[93042]: 2026-01-31 04:19:46.528379645 +0000 UTC m=+0.233239272 container attach 7ddbcf7a31c627f73c6718e19a2f087bc11563fe36687e3b325f5d1c034169c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_booth, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 30 23:19:46 np0005603435 blissful_booth[93058]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:19:46 np0005603435 blissful_booth[93058]: --> All data devices are unavailable
Jan 30 23:19:46 np0005603435 systemd[1]: libpod-7ddbcf7a31c627f73c6718e19a2f087bc11563fe36687e3b325f5d1c034169c2.scope: Deactivated successfully.
Jan 30 23:19:46 np0005603435 podman[93042]: 2026-01-31 04:19:46.987199684 +0000 UTC m=+0.692059331 container died 7ddbcf7a31c627f73c6718e19a2f087bc11563fe36687e3b325f5d1c034169c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_booth, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:47 np0005603435 systemd[1]: var-lib-containers-storage-overlay-edb111e67acd468d790491af6815671114be016b93f367b82bbccda6ee183e17-merged.mount: Deactivated successfully.
Jan 30 23:19:47 np0005603435 podman[93042]: 2026-01-31 04:19:47.04532239 +0000 UTC m=+0.750182017 container remove 7ddbcf7a31c627f73c6718e19a2f087bc11563fe36687e3b325f5d1c034169c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 30 23:19:47 np0005603435 systemd[1]: libpod-conmon-7ddbcf7a31c627f73c6718e19a2f087bc11563fe36687e3b325f5d1c034169c2.scope: Deactivated successfully.
Jan 30 23:19:47 np0005603435 python3[93153]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 23:19:47 np0005603435 ceph-mon[75307]: Saving service rgw.rgw spec with placement compute-0
Jan 30 23:19:47 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:47 np0005603435 python3[93287]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769833186.842799-36882-21327939794560/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:19:47 np0005603435 podman[93301]: 2026-01-31 04:19:47.499641332 +0000 UTC m=+0.045572228 container create 0c0eb1092200aff913d590057e05d4f2dd83b9493e6d126b59ef24555bdd2d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_wilson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:47 np0005603435 systemd[1]: Started libpod-conmon-0c0eb1092200aff913d590057e05d4f2dd83b9493e6d126b59ef24555bdd2d9b.scope.
Jan 30 23:19:47 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:47 np0005603435 podman[93301]: 2026-01-31 04:19:47.557215416 +0000 UTC m=+0.103146322 container init 0c0eb1092200aff913d590057e05d4f2dd83b9493e6d126b59ef24555bdd2d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 30 23:19:47 np0005603435 podman[93301]: 2026-01-31 04:19:47.565729989 +0000 UTC m=+0.111660925 container start 0c0eb1092200aff913d590057e05d4f2dd83b9493e6d126b59ef24555bdd2d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 30 23:19:47 np0005603435 dreamy_wilson[93342]: 167 167
Jan 30 23:19:47 np0005603435 systemd[1]: libpod-0c0eb1092200aff913d590057e05d4f2dd83b9493e6d126b59ef24555bdd2d9b.scope: Deactivated successfully.
Jan 30 23:19:47 np0005603435 podman[93301]: 2026-01-31 04:19:47.570356808 +0000 UTC m=+0.116287724 container attach 0c0eb1092200aff913d590057e05d4f2dd83b9493e6d126b59ef24555bdd2d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_wilson, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:47 np0005603435 podman[93301]: 2026-01-31 04:19:47.571619185 +0000 UTC m=+0.117550121 container died 0c0eb1092200aff913d590057e05d4f2dd83b9493e6d126b59ef24555bdd2d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 30 23:19:47 np0005603435 podman[93301]: 2026-01-31 04:19:47.478190972 +0000 UTC m=+0.024121888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:47 np0005603435 systemd[1]: var-lib-containers-storage-overlay-6f20b3cd496423d6b2b8c2b29d76285aa9806664a6281c80695e8b5037599eb5-merged.mount: Deactivated successfully.
Jan 30 23:19:47 np0005603435 podman[93301]: 2026-01-31 04:19:47.612514742 +0000 UTC m=+0.158445628 container remove 0c0eb1092200aff913d590057e05d4f2dd83b9493e6d126b59ef24555bdd2d9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_wilson, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 30 23:19:47 np0005603435 systemd[1]: libpod-conmon-0c0eb1092200aff913d590057e05d4f2dd83b9493e6d126b59ef24555bdd2d9b.scope: Deactivated successfully.
Jan 30 23:19:47 np0005603435 podman[93383]: 2026-01-31 04:19:47.778725086 +0000 UTC m=+0.051027785 container create b380492d18dc8de6fb45080493a0b7b9d8a7f93d0e651e60747b35ef55dec9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_greider, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:47 np0005603435 systemd[1]: Started libpod-conmon-b380492d18dc8de6fb45080493a0b7b9d8a7f93d0e651e60747b35ef55dec9a4.scope.
Jan 30 23:19:47 np0005603435 podman[93383]: 2026-01-31 04:19:47.757389548 +0000 UTC m=+0.029692237 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:47 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3f46daa6fa61305ef343f44b65f088d72f5ed52ddcd319434eaf1538e2573e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3f46daa6fa61305ef343f44b65f088d72f5ed52ddcd319434eaf1538e2573e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3f46daa6fa61305ef343f44b65f088d72f5ed52ddcd319434eaf1538e2573e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3f46daa6fa61305ef343f44b65f088d72f5ed52ddcd319434eaf1538e2573e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:47 np0005603435 podman[93383]: 2026-01-31 04:19:47.882398259 +0000 UTC m=+0.154700998 container init b380492d18dc8de6fb45080493a0b7b9d8a7f93d0e651e60747b35ef55dec9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:19:47 np0005603435 podman[93383]: 2026-01-31 04:19:47.894524409 +0000 UTC m=+0.166827068 container start b380492d18dc8de6fb45080493a0b7b9d8a7f93d0e651e60747b35ef55dec9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_greider, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:19:47 np0005603435 podman[93383]: 2026-01-31 04:19:47.898375522 +0000 UTC m=+0.170678271 container attach b380492d18dc8de6fb45080493a0b7b9d8a7f93d0e651e60747b35ef55dec9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 30 23:19:47 np0005603435 python3[93405]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:48 np0005603435 podman[93413]: 2026-01-31 04:19:48.012041059 +0000 UTC m=+0.066515177 container create 1c0a5cd6a349660b09064d4d5039b46c992606c608990a80bf2b9be2e994c728 (image=quay.io/ceph/ceph:v20, name=agitated_taussig, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 30 23:19:48 np0005603435 systemd[1]: Started libpod-conmon-1c0a5cd6a349660b09064d4d5039b46c992606c608990a80bf2b9be2e994c728.scope.
Jan 30 23:19:48 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323db1c1169ff5e09212e1c2f267975e20d321688c74ce5075301363f764fe90/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323db1c1169ff5e09212e1c2f267975e20d321688c74ce5075301363f764fe90/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323db1c1169ff5e09212e1c2f267975e20d321688c74ce5075301363f764fe90/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:48 np0005603435 podman[93413]: 2026-01-31 04:19:47.988292249 +0000 UTC m=+0.042766467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:48 np0005603435 podman[93413]: 2026-01-31 04:19:48.095767354 +0000 UTC m=+0.150241512 container init 1c0a5cd6a349660b09064d4d5039b46c992606c608990a80bf2b9be2e994c728 (image=quay.io/ceph/ceph:v20, name=agitated_taussig, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 30 23:19:48 np0005603435 podman[93413]: 2026-01-31 04:19:48.105342089 +0000 UTC m=+0.159816247 container start 1c0a5cd6a349660b09064d4d5039b46c992606c608990a80bf2b9be2e994c728 (image=quay.io/ceph/ceph:v20, name=agitated_taussig, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:48 np0005603435 podman[93413]: 2026-01-31 04:19:48.109636371 +0000 UTC m=+0.164110499 container attach 1c0a5cd6a349660b09064d4d5039b46c992606c608990a80bf2b9be2e994c728 (image=quay.io/ceph/ceph:v20, name=agitated_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:48 np0005603435 frosty_greider[93408]: {
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:    "0": [
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:        {
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "devices": [
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "/dev/loop3"
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            ],
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "lv_name": "ceph_lv0",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "lv_size": "21470642176",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "name": "ceph_lv0",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "tags": {
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.cluster_name": "ceph",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.crush_device_class": "",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.encrypted": "0",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.objectstore": "bluestore",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.osd_id": "0",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.type": "block",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.vdo": "0",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.with_tpm": "0"
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            },
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "type": "block",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "vg_name": "ceph_vg0"
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:        }
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:    ],
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:    "1": [
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:        {
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "devices": [
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "/dev/loop4"
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            ],
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "lv_name": "ceph_lv1",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "lv_size": "21470642176",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "name": "ceph_lv1",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "tags": {
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.cluster_name": "ceph",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.crush_device_class": "",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.encrypted": "0",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.objectstore": "bluestore",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.osd_id": "1",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.type": "block",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.vdo": "0",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.with_tpm": "0"
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            },
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "type": "block",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "vg_name": "ceph_vg1"
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:        }
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:    ],
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:    "2": [
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:        {
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "devices": [
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "/dev/loop5"
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            ],
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "lv_name": "ceph_lv2",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "lv_size": "21470642176",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "name": "ceph_lv2",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "tags": {
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.cluster_name": "ceph",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.crush_device_class": "",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.encrypted": "0",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.objectstore": "bluestore",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.osd_id": "2",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.type": "block",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.vdo": "0",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:                "ceph.with_tpm": "0"
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            },
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "type": "block",
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:            "vg_name": "ceph_vg2"
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:        }
Jan 30 23:19:48 np0005603435 frosty_greider[93408]:    ]
Jan 30 23:19:48 np0005603435 frosty_greider[93408]: }
Jan 30 23:19:48 np0005603435 systemd[1]: libpod-b380492d18dc8de6fb45080493a0b7b9d8a7f93d0e651e60747b35ef55dec9a4.scope: Deactivated successfully.
Jan 30 23:19:48 np0005603435 podman[93383]: 2026-01-31 04:19:48.249783517 +0000 UTC m=+0.522086226 container died b380492d18dc8de6fb45080493a0b7b9d8a7f93d0e651e60747b35ef55dec9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 30 23:19:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:48 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5c3f46daa6fa61305ef343f44b65f088d72f5ed52ddcd319434eaf1538e2573e-merged.mount: Deactivated successfully.
Jan 30 23:19:48 np0005603435 podman[93383]: 2026-01-31 04:19:48.322132158 +0000 UTC m=+0.594434847 container remove b380492d18dc8de6fb45080493a0b7b9d8a7f93d0e651e60747b35ef55dec9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_greider, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:48 np0005603435 systemd[1]: libpod-conmon-b380492d18dc8de6fb45080493a0b7b9d8a7f93d0e651e60747b35ef55dec9a4.scope: Deactivated successfully.
Jan 30 23:19:48 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 30 23:19:48 np0005603435 ceph-mgr[75599]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 30 23:19:48 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0[75303]: 2026-01-31T04:19:48.635+0000 7f47fdcaa640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).mds e2 new map
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).mds e2 print_map#012e2#012btime 2026-01-31T04:19:48:636324+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T04:19:48.635987+0000#012modified#0112026-01-31T04:19:48.635987+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 30 23:19:48 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 30 23:19:48 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 30 23:19:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:48 np0005603435 ceph-mgr[75599]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 30 23:19:48 np0005603435 systemd[1]: libpod-1c0a5cd6a349660b09064d4d5039b46c992606c608990a80bf2b9be2e994c728.scope: Deactivated successfully.
Jan 30 23:19:48 np0005603435 podman[93413]: 2026-01-31 04:19:48.680692777 +0000 UTC m=+0.735166925 container died 1c0a5cd6a349660b09064d4d5039b46c992606c608990a80bf2b9be2e994c728 (image=quay.io/ceph/ceph:v20, name=agitated_taussig, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:48 np0005603435 systemd[1]: var-lib-containers-storage-overlay-323db1c1169ff5e09212e1c2f267975e20d321688c74ce5075301363f764fe90-merged.mount: Deactivated successfully.
Jan 30 23:19:48 np0005603435 podman[93413]: 2026-01-31 04:19:48.734135863 +0000 UTC m=+0.788609991 container remove 1c0a5cd6a349660b09064d4d5039b46c992606c608990a80bf2b9be2e994c728 (image=quay.io/ceph/ceph:v20, name=agitated_taussig, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 30 23:19:48 np0005603435 systemd[1]: libpod-conmon-1c0a5cd6a349660b09064d4d5039b46c992606c608990a80bf2b9be2e994c728.scope: Deactivated successfully.
Jan 30 23:19:48 np0005603435 podman[93543]: 2026-01-31 04:19:48.887652015 +0000 UTC m=+0.061792846 container create 8811eb78fbd87bba85c71729a481313734aca7940e40b7917fe7cf154f96e9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:48 np0005603435 systemd[1]: Started libpod-conmon-8811eb78fbd87bba85c71729a481313734aca7940e40b7917fe7cf154f96e9e1.scope.
Jan 30 23:19:48 np0005603435 podman[93543]: 2026-01-31 04:19:48.86177108 +0000 UTC m=+0.035911991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:48 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:48 np0005603435 podman[93543]: 2026-01-31 04:19:48.985631426 +0000 UTC m=+0.159772267 container init 8811eb78fbd87bba85c71729a481313734aca7940e40b7917fe7cf154f96e9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mayer, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 30 23:19:48 np0005603435 podman[93543]: 2026-01-31 04:19:48.993447603 +0000 UTC m=+0.167588424 container start 8811eb78fbd87bba85c71729a481313734aca7940e40b7917fe7cf154f96e9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mayer, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 30 23:19:48 np0005603435 podman[93543]: 2026-01-31 04:19:48.996991229 +0000 UTC m=+0.171132050 container attach 8811eb78fbd87bba85c71729a481313734aca7940e40b7917fe7cf154f96e9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mayer, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 30 23:19:48 np0005603435 adoring_mayer[93585]: 167 167
Jan 30 23:19:48 np0005603435 systemd[1]: libpod-8811eb78fbd87bba85c71729a481313734aca7940e40b7917fe7cf154f96e9e1.scope: Deactivated successfully.
Jan 30 23:19:49 np0005603435 podman[93543]: 2026-01-31 04:19:49.000122236 +0000 UTC m=+0.174263067 container died 8811eb78fbd87bba85c71729a481313734aca7940e40b7917fe7cf154f96e9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:49 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b94ad289804b15c939ae603b4a37db44bf9c11463eff3e9804c0655023628bce-merged.mount: Deactivated successfully.
Jan 30 23:19:49 np0005603435 podman[93543]: 2026-01-31 04:19:49.039637474 +0000 UTC m=+0.213778295 container remove 8811eb78fbd87bba85c71729a481313734aca7940e40b7917fe7cf154f96e9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mayer, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:19:49 np0005603435 systemd[1]: libpod-conmon-8811eb78fbd87bba85c71729a481313734aca7940e40b7917fe7cf154f96e9e1.scope: Deactivated successfully.
Jan 30 23:19:49 np0005603435 python3[93582]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:49 np0005603435 podman[93604]: 2026-01-31 04:19:49.134074599 +0000 UTC m=+0.050437863 container create 20dfb4b826c56e678ffa1f63523345818744ea993dc1a7702b6239796bb88963 (image=quay.io/ceph/ceph:v20, name=cool_tu, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 30 23:19:49 np0005603435 podman[93619]: 2026-01-31 04:19:49.174334892 +0000 UTC m=+0.061946579 container create 828e29940a371315a59eba67a89f69c10ae59cf96c0142705600c90d7356abfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldwasser, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:49 np0005603435 systemd[1]: Started libpod-conmon-20dfb4b826c56e678ffa1f63523345818744ea993dc1a7702b6239796bb88963.scope.
Jan 30 23:19:49 np0005603435 podman[93604]: 2026-01-31 04:19:49.113946507 +0000 UTC m=+0.030309801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:49 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:49 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e81b4ca74d784b5682f800d731988be85b8fefbd91dc6f9394fde31e91fa307/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:49 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e81b4ca74d784b5682f800d731988be85b8fefbd91dc6f9394fde31e91fa307/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:49 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e81b4ca74d784b5682f800d731988be85b8fefbd91dc6f9394fde31e91fa307/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:49 np0005603435 systemd[1]: Started libpod-conmon-828e29940a371315a59eba67a89f69c10ae59cf96c0142705600c90d7356abfa.scope.
Jan 30 23:19:49 np0005603435 podman[93604]: 2026-01-31 04:19:49.226582452 +0000 UTC m=+0.142945736 container init 20dfb4b826c56e678ffa1f63523345818744ea993dc1a7702b6239796bb88963 (image=quay.io/ceph/ceph:v20, name=cool_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 30 23:19:49 np0005603435 podman[93604]: 2026-01-31 04:19:49.231207821 +0000 UTC m=+0.147571115 container start 20dfb4b826c56e678ffa1f63523345818744ea993dc1a7702b6239796bb88963 (image=quay.io/ceph/ceph:v20, name=cool_tu, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:49 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:49 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/358ff0b53f5bcc713742e1a9f8c83d4214330cc53b58c95066e8e4344aefe589/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:49 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/358ff0b53f5bcc713742e1a9f8c83d4214330cc53b58c95066e8e4344aefe589/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:49 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/358ff0b53f5bcc713742e1a9f8c83d4214330cc53b58c95066e8e4344aefe589/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:49 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/358ff0b53f5bcc713742e1a9f8c83d4214330cc53b58c95066e8e4344aefe589/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:49 np0005603435 podman[93604]: 2026-01-31 04:19:49.235439122 +0000 UTC m=+0.151802386 container attach 20dfb4b826c56e678ffa1f63523345818744ea993dc1a7702b6239796bb88963 (image=quay.io/ceph/ceph:v20, name=cool_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 30 23:19:49 np0005603435 podman[93619]: 2026-01-31 04:19:49.153345472 +0000 UTC m=+0.040957159 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:49 np0005603435 podman[93619]: 2026-01-31 04:19:49.25026115 +0000 UTC m=+0.137872917 container init 828e29940a371315a59eba67a89f69c10ae59cf96c0142705600c90d7356abfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:19:49 np0005603435 podman[93619]: 2026-01-31 04:19:49.257664869 +0000 UTC m=+0.145276546 container start 828e29940a371315a59eba67a89f69c10ae59cf96c0142705600c90d7356abfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:49 np0005603435 podman[93619]: 2026-01-31 04:19:49.263013144 +0000 UTC m=+0.150624831 container attach 828e29940a371315a59eba67a89f69c10ae59cf96c0142705600c90d7356abfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:49 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 30 23:19:49 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 30 23:19:49 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 30 23:19:49 np0005603435 ceph-mon[75307]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 30 23:19:49 np0005603435 ceph-mon[75307]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 30 23:19:49 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 30 23:19:49 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:49 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 30 23:19:49 np0005603435 ceph-mgr[75599]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 30 23:19:49 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 30 23:19:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 30 23:19:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:49 np0005603435 cool_tu[93638]: Scheduled mds.cephfs update...
Jan 30 23:19:49 np0005603435 systemd[1]: libpod-20dfb4b826c56e678ffa1f63523345818744ea993dc1a7702b6239796bb88963.scope: Deactivated successfully.
Jan 30 23:19:49 np0005603435 podman[93604]: 2026-01-31 04:19:49.66393714 +0000 UTC m=+0.580300404 container died 20dfb4b826c56e678ffa1f63523345818744ea993dc1a7702b6239796bb88963 (image=quay.io/ceph/ceph:v20, name=cool_tu, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 30 23:19:49 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0e81b4ca74d784b5682f800d731988be85b8fefbd91dc6f9394fde31e91fa307-merged.mount: Deactivated successfully.
Jan 30 23:19:49 np0005603435 podman[93604]: 2026-01-31 04:19:49.70452999 +0000 UTC m=+0.620893254 container remove 20dfb4b826c56e678ffa1f63523345818744ea993dc1a7702b6239796bb88963 (image=quay.io/ceph/ceph:v20, name=cool_tu, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:49 np0005603435 systemd[1]: libpod-conmon-20dfb4b826c56e678ffa1f63523345818744ea993dc1a7702b6239796bb88963.scope: Deactivated successfully.
Jan 30 23:19:49 np0005603435 lvm[93756]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:19:49 np0005603435 lvm[93757]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:19:49 np0005603435 lvm[93756]: VG ceph_vg0 finished
Jan 30 23:19:49 np0005603435 lvm[93757]: VG ceph_vg1 finished
Jan 30 23:19:49 np0005603435 lvm[93759]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:19:49 np0005603435 lvm[93759]: VG ceph_vg2 finished
Jan 30 23:19:50 np0005603435 xenodochial_goldwasser[93644]: {}
Jan 30 23:19:50 np0005603435 systemd[1]: libpod-828e29940a371315a59eba67a89f69c10ae59cf96c0142705600c90d7356abfa.scope: Deactivated successfully.
Jan 30 23:19:50 np0005603435 systemd[1]: libpod-828e29940a371315a59eba67a89f69c10ae59cf96c0142705600c90d7356abfa.scope: Consumed 1.195s CPU time.
Jan 30 23:19:50 np0005603435 podman[93619]: 2026-01-31 04:19:50.060266359 +0000 UTC m=+0.947878026 container died 828e29940a371315a59eba67a89f69c10ae59cf96c0142705600c90d7356abfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 30 23:19:50 np0005603435 systemd[1]: var-lib-containers-storage-overlay-358ff0b53f5bcc713742e1a9f8c83d4214330cc53b58c95066e8e4344aefe589-merged.mount: Deactivated successfully.
Jan 30 23:19:50 np0005603435 podman[93619]: 2026-01-31 04:19:50.102906643 +0000 UTC m=+0.990518350 container remove 828e29940a371315a59eba67a89f69c10ae59cf96c0142705600c90d7356abfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldwasser, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 30 23:19:50 np0005603435 systemd[1]: libpod-conmon-828e29940a371315a59eba67a89f69c10ae59cf96c0142705600c90d7356abfa.scope: Deactivated successfully.
Jan 30 23:19:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:19:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:19:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:50 np0005603435 ceph-mon[75307]: Saving service mds.cephfs spec with placement compute-0
Jan 30 23:19:50 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:50 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:50 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:50 np0005603435 python3[93924]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 30 23:19:50 np0005603435 podman[94011]: 2026-01-31 04:19:50.815279538 +0000 UTC m=+0.056754638 container exec 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 30 23:19:50 np0005603435 podman[94011]: 2026-01-31 04:19:50.915644071 +0000 UTC m=+0.157119171 container exec_died 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:50 np0005603435 python3[94058]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769833190.36724-36930-20759819653850/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=554c90c74907bf5b649f3d413acf0f1f5c4c4df0 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: Saving service mds.cephfs spec with placement compute-0
Jan 30 23:19:51 np0005603435 python3[94215]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:19:51 np0005603435 podman[94241]: 2026-01-31 04:19:51.424539133 +0000 UTC m=+0.038342443 container create 3dbb83dcb8ffbb4b6492c761d6ead587ab6f4e7935d3e03268a6d023a195f7db (image=quay.io/ceph/ceph:v20, name=elegant_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:19:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:19:51 np0005603435 systemd[1]: Started libpod-conmon-3dbb83dcb8ffbb4b6492c761d6ead587ab6f4e7935d3e03268a6d023a195f7db.scope.
Jan 30 23:19:51 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:51 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d172a3f9174f43f5e43f0238c11452927ce99d84d4a69a238ce8603aad2017bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:51 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d172a3f9174f43f5e43f0238c11452927ce99d84d4a69a238ce8603aad2017bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:51 np0005603435 podman[94241]: 2026-01-31 04:19:51.40853363 +0000 UTC m=+0.022336960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:51 np0005603435 podman[94241]: 2026-01-31 04:19:51.515151626 +0000 UTC m=+0.128954956 container init 3dbb83dcb8ffbb4b6492c761d6ead587ab6f4e7935d3e03268a6d023a195f7db (image=quay.io/ceph/ceph:v20, name=elegant_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 30 23:19:51 np0005603435 podman[94241]: 2026-01-31 04:19:51.524921735 +0000 UTC m=+0.138725085 container start 3dbb83dcb8ffbb4b6492c761d6ead587ab6f4e7935d3e03268a6d023a195f7db (image=quay.io/ceph/ceph:v20, name=elegant_hypatia, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:51 np0005603435 podman[94241]: 2026-01-31 04:19:51.528422651 +0000 UTC m=+0.142225961 container attach 3dbb83dcb8ffbb4b6492c761d6ead587ab6f4e7935d3e03268a6d023a195f7db (image=quay.io/ceph/ceph:v20, name=elegant_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 30 23:19:51 np0005603435 podman[94343]: 2026-01-31 04:19:51.767138339 +0000 UTC m=+0.039796974 container create 45ff0d658cd953d55572c21a0280e75e071fefc7eae3a12131a52695b7c36869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatterjee, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:19:51 np0005603435 systemd[1]: Started libpod-conmon-45ff0d658cd953d55572c21a0280e75e071fefc7eae3a12131a52695b7c36869.scope.
Jan 30 23:19:51 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:51 np0005603435 podman[94343]: 2026-01-31 04:19:51.747875386 +0000 UTC m=+0.020534041 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:51 np0005603435 podman[94343]: 2026-01-31 04:19:51.846781207 +0000 UTC m=+0.119439852 container init 45ff0d658cd953d55572c21a0280e75e071fefc7eae3a12131a52695b7c36869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:19:51 np0005603435 podman[94343]: 2026-01-31 04:19:51.852404458 +0000 UTC m=+0.125063133 container start 45ff0d658cd953d55572c21a0280e75e071fefc7eae3a12131a52695b7c36869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:51 np0005603435 fervent_chatterjee[94359]: 167 167
Jan 30 23:19:51 np0005603435 systemd[1]: libpod-45ff0d658cd953d55572c21a0280e75e071fefc7eae3a12131a52695b7c36869.scope: Deactivated successfully.
Jan 30 23:19:51 np0005603435 podman[94343]: 2026-01-31 04:19:51.857022777 +0000 UTC m=+0.129681412 container attach 45ff0d658cd953d55572c21a0280e75e071fefc7eae3a12131a52695b7c36869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatterjee, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 30 23:19:51 np0005603435 podman[94343]: 2026-01-31 04:19:51.857326193 +0000 UTC m=+0.129984828 container died 45ff0d658cd953d55572c21a0280e75e071fefc7eae3a12131a52695b7c36869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 30 23:19:51 np0005603435 systemd[1]: var-lib-containers-storage-overlay-1583d73b0e190efdf5542029417c094b91dc9fe442190d1eddab225699456a2a-merged.mount: Deactivated successfully.
Jan 30 23:19:51 np0005603435 podman[94343]: 2026-01-31 04:19:51.893208563 +0000 UTC m=+0.165867198 container remove 45ff0d658cd953d55572c21a0280e75e071fefc7eae3a12131a52695b7c36869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatterjee, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:51 np0005603435 systemd[1]: libpod-conmon-45ff0d658cd953d55572c21a0280e75e071fefc7eae3a12131a52695b7c36869.scope: Deactivated successfully.
Jan 30 23:19:52 np0005603435 podman[94383]: 2026-01-31 04:19:52.062499343 +0000 UTC m=+0.097899770 container create d3922023abc254345ae7b1fa8af8bb60bd33785a028365c9dee3d7776ad47b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mendel, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 30 23:19:52 np0005603435 podman[94383]: 2026-01-31 04:19:51.9868192 +0000 UTC m=+0.022219727 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 30 23:19:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/274133158' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 30 23:19:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/274133158' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 30 23:19:52 np0005603435 systemd[1]: Started libpod-conmon-d3922023abc254345ae7b1fa8af8bb60bd33785a028365c9dee3d7776ad47b7b.scope.
Jan 30 23:19:52 np0005603435 systemd[1]: libpod-3dbb83dcb8ffbb4b6492c761d6ead587ab6f4e7935d3e03268a6d023a195f7db.scope: Deactivated successfully.
Jan 30 23:19:52 np0005603435 podman[94241]: 2026-01-31 04:19:52.1355592 +0000 UTC m=+0.749362520 container died 3dbb83dcb8ffbb4b6492c761d6ead587ab6f4e7935d3e03268a6d023a195f7db (image=quay.io/ceph/ceph:v20, name=elegant_hypatia, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:19:52 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5629fe904ca7b1bd8663d7780c781ceed17648b30ecfee6db45d8c00a1835edc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5629fe904ca7b1bd8663d7780c781ceed17648b30ecfee6db45d8c00a1835edc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5629fe904ca7b1bd8663d7780c781ceed17648b30ecfee6db45d8c00a1835edc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5629fe904ca7b1bd8663d7780c781ceed17648b30ecfee6db45d8c00a1835edc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5629fe904ca7b1bd8663d7780c781ceed17648b30ecfee6db45d8c00a1835edc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:52 np0005603435 podman[94383]: 2026-01-31 04:19:52.256968823 +0000 UTC m=+0.292369280 container init d3922023abc254345ae7b1fa8af8bb60bd33785a028365c9dee3d7776ad47b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mendel, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 30 23:19:52 np0005603435 podman[94383]: 2026-01-31 04:19:52.266710302 +0000 UTC m=+0.302110749 container start d3922023abc254345ae7b1fa8af8bb60bd33785a028365c9dee3d7776ad47b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mendel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 30 23:19:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:52 np0005603435 podman[94383]: 2026-01-31 04:19:52.28293998 +0000 UTC m=+0.318340437 container attach d3922023abc254345ae7b1fa8af8bb60bd33785a028365c9dee3d7776ad47b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mendel, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 30 23:19:52 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d172a3f9174f43f5e43f0238c11452927ce99d84d4a69a238ce8603aad2017bc-merged.mount: Deactivated successfully.
Jan 30 23:19:52 np0005603435 podman[94241]: 2026-01-31 04:19:52.381752939 +0000 UTC m=+0.995556239 container remove 3dbb83dcb8ffbb4b6492c761d6ead587ab6f4e7935d3e03268a6d023a195f7db (image=quay.io/ceph/ceph:v20, name=elegant_hypatia, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:19:52 np0005603435 systemd[1]: libpod-conmon-3dbb83dcb8ffbb4b6492c761d6ead587ab6f4e7935d3e03268a6d023a195f7db.scope: Deactivated successfully.
Jan 30 23:19:52 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:52 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:52 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:19:52 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:52 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:19:52 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/274133158' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 30 23:19:52 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/274133158' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 30 23:19:52 np0005603435 quizzical_mendel[94401]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:19:52 np0005603435 quizzical_mendel[94401]: --> All data devices are unavailable
Jan 30 23:19:52 np0005603435 systemd[1]: libpod-d3922023abc254345ae7b1fa8af8bb60bd33785a028365c9dee3d7776ad47b7b.scope: Deactivated successfully.
Jan 30 23:19:52 np0005603435 podman[94383]: 2026-01-31 04:19:52.760698905 +0000 UTC m=+0.796099342 container died d3922023abc254345ae7b1fa8af8bb60bd33785a028365c9dee3d7776ad47b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 30 23:19:52 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5629fe904ca7b1bd8663d7780c781ceed17648b30ecfee6db45d8c00a1835edc-merged.mount: Deactivated successfully.
Jan 30 23:19:52 np0005603435 podman[94383]: 2026-01-31 04:19:52.802474701 +0000 UTC m=+0.837875128 container remove d3922023abc254345ae7b1fa8af8bb60bd33785a028365c9dee3d7776ad47b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mendel, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:52 np0005603435 systemd[1]: libpod-conmon-d3922023abc254345ae7b1fa8af8bb60bd33785a028365c9dee3d7776ad47b7b.scope: Deactivated successfully.
Jan 30 23:19:53 np0005603435 python3[94498]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:53 np0005603435 podman[94525]: 2026-01-31 04:19:53.086156133 +0000 UTC m=+0.045526848 container create 63a2d424913754e7be21b56cce61f778e3ff4ad66c86c63bb389d3d076492816 (image=quay.io/ceph/ceph:v20, name=busy_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:19:53 np0005603435 systemd[1]: Started libpod-conmon-63a2d424913754e7be21b56cce61f778e3ff4ad66c86c63bb389d3d076492816.scope.
Jan 30 23:19:53 np0005603435 podman[94525]: 2026-01-31 04:19:53.066142333 +0000 UTC m=+0.025513078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:53 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f860cb46d86912d71eb0907ba190ea3bd898a1f7b9394b8809fae12d7aef97/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f860cb46d86912d71eb0907ba190ea3bd898a1f7b9394b8809fae12d7aef97/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:53 np0005603435 podman[94525]: 2026-01-31 04:19:53.178492152 +0000 UTC m=+0.137862887 container init 63a2d424913754e7be21b56cce61f778e3ff4ad66c86c63bb389d3d076492816 (image=quay.io/ceph/ceph:v20, name=busy_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:53 np0005603435 podman[94525]: 2026-01-31 04:19:53.183648963 +0000 UTC m=+0.143019688 container start 63a2d424913754e7be21b56cce61f778e3ff4ad66c86c63bb389d3d076492816 (image=quay.io/ceph/ceph:v20, name=busy_poincare, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:53 np0005603435 podman[94525]: 2026-01-31 04:19:53.188120219 +0000 UTC m=+0.147490954 container attach 63a2d424913754e7be21b56cce61f778e3ff4ad66c86c63bb389d3d076492816 (image=quay.io/ceph/ceph:v20, name=busy_poincare, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 30 23:19:53 np0005603435 podman[94557]: 2026-01-31 04:19:53.210648372 +0000 UTC m=+0.040231054 container create d2a9d2bd4508cf80f096928338a092ea2b5ebb7bbdb5872d6a4639c715a1df73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 30 23:19:53 np0005603435 systemd[1]: Started libpod-conmon-d2a9d2bd4508cf80f096928338a092ea2b5ebb7bbdb5872d6a4639c715a1df73.scope.
Jan 30 23:19:53 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:53 np0005603435 podman[94557]: 2026-01-31 04:19:53.276131636 +0000 UTC m=+0.105714338 container init d2a9d2bd4508cf80f096928338a092ea2b5ebb7bbdb5872d6a4639c715a1df73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 30 23:19:53 np0005603435 podman[94557]: 2026-01-31 04:19:53.28233959 +0000 UTC m=+0.111922272 container start d2a9d2bd4508cf80f096928338a092ea2b5ebb7bbdb5872d6a4639c715a1df73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 30 23:19:53 np0005603435 podman[94557]: 2026-01-31 04:19:53.285720552 +0000 UTC m=+0.115303234 container attach d2a9d2bd4508cf80f096928338a092ea2b5ebb7bbdb5872d6a4639c715a1df73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 30 23:19:53 np0005603435 optimistic_chatterjee[94573]: 167 167
Jan 30 23:19:53 np0005603435 systemd[1]: libpod-d2a9d2bd4508cf80f096928338a092ea2b5ebb7bbdb5872d6a4639c715a1df73.scope: Deactivated successfully.
Jan 30 23:19:53 np0005603435 podman[94557]: 2026-01-31 04:19:53.287809507 +0000 UTC m=+0.117392189 container died d2a9d2bd4508cf80f096928338a092ea2b5ebb7bbdb5872d6a4639c715a1df73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 30 23:19:53 np0005603435 podman[94557]: 2026-01-31 04:19:53.197358947 +0000 UTC m=+0.026941669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:53 np0005603435 systemd[1]: var-lib-containers-storage-overlay-04cb4d42a2ab66c5770899f26716b314c0461e78276f7a822a6749da02ab14ea-merged.mount: Deactivated successfully.
Jan 30 23:19:53 np0005603435 podman[94557]: 2026-01-31 04:19:53.322793237 +0000 UTC m=+0.152375929 container remove d2a9d2bd4508cf80f096928338a092ea2b5ebb7bbdb5872d6a4639c715a1df73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 30 23:19:53 np0005603435 systemd[1]: libpod-conmon-d2a9d2bd4508cf80f096928338a092ea2b5ebb7bbdb5872d6a4639c715a1df73.scope: Deactivated successfully.
Jan 30 23:19:53 np0005603435 podman[94617]: 2026-01-31 04:19:53.453759655 +0000 UTC m=+0.049034462 container create baf3e762e29fef4fc9c7cf734df89eb08fa51918ecf80f444f55e5cc7195099d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_khayyam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:53 np0005603435 systemd[1]: Started libpod-conmon-baf3e762e29fef4fc9c7cf734df89eb08fa51918ecf80f444f55e5cc7195099d.scope.
Jan 30 23:19:53 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:53 np0005603435 podman[94617]: 2026-01-31 04:19:53.430322923 +0000 UTC m=+0.025597810 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ad773045d377abc646ae49b37b07c40b0cc9e6aa82d0bc20d09d593e63dcd00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ad773045d377abc646ae49b37b07c40b0cc9e6aa82d0bc20d09d593e63dcd00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ad773045d377abc646ae49b37b07c40b0cc9e6aa82d0bc20d09d593e63dcd00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ad773045d377abc646ae49b37b07c40b0cc9e6aa82d0bc20d09d593e63dcd00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:53 np0005603435 podman[94617]: 2026-01-31 04:19:53.541089978 +0000 UTC m=+0.136364815 container init baf3e762e29fef4fc9c7cf734df89eb08fa51918ecf80f444f55e5cc7195099d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_khayyam, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 30 23:19:53 np0005603435 podman[94617]: 2026-01-31 04:19:53.548204301 +0000 UTC m=+0.143479138 container start baf3e762e29fef4fc9c7cf734df89eb08fa51918ecf80f444f55e5cc7195099d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_khayyam, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:53 np0005603435 podman[94617]: 2026-01-31 04:19:53.552771139 +0000 UTC m=+0.148045976 container attach baf3e762e29fef4fc9c7cf734df89eb08fa51918ecf80f444f55e5cc7195099d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 30 23:19:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 30 23:19:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2939560287' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 30 23:19:53 np0005603435 busy_poincare[94542]: 
Jan 30 23:19:53 np0005603435 busy_poincare[94542]: {"fsid":"95d2f419-0dd0-56f2-a094-353f8c7597ed","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":125,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":34,"num_osds":3,"num_up_osds":3,"osd_up_since":1769833159,"num_in_osds":3,"osd_in_since":1769833124,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83959808,"bytes_avail":64327966720,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-01-31T04:19:48:636324+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T04:19:08.261491+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 30 23:19:53 np0005603435 systemd[1]: libpod-63a2d424913754e7be21b56cce61f778e3ff4ad66c86c63bb389d3d076492816.scope: Deactivated successfully.
Jan 30 23:19:53 np0005603435 podman[94525]: 2026-01-31 04:19:53.675415298 +0000 UTC m=+0.634786033 container died 63a2d424913754e7be21b56cce61f778e3ff4ad66c86c63bb389d3d076492816 (image=quay.io/ceph/ceph:v20, name=busy_poincare, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:53 np0005603435 podman[94525]: 2026-01-31 04:19:53.715192831 +0000 UTC m=+0.674563576 container remove 63a2d424913754e7be21b56cce61f778e3ff4ad66c86c63bb389d3d076492816 (image=quay.io/ceph/ceph:v20, name=busy_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 30 23:19:53 np0005603435 systemd[1]: libpod-conmon-63a2d424913754e7be21b56cce61f778e3ff4ad66c86c63bb389d3d076492816.scope: Deactivated successfully.
Jan 30 23:19:53 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d7f860cb46d86912d71eb0907ba190ea3bd898a1f7b9394b8809fae12d7aef97-merged.mount: Deactivated successfully.
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]: {
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:    "0": [
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:        {
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "devices": [
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "/dev/loop3"
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            ],
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "lv_name": "ceph_lv0",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "lv_size": "21470642176",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "name": "ceph_lv0",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "tags": {
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.cluster_name": "ceph",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.crush_device_class": "",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.encrypted": "0",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.objectstore": "bluestore",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.osd_id": "0",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.type": "block",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.vdo": "0",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.with_tpm": "0"
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            },
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "type": "block",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "vg_name": "ceph_vg0"
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:        }
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:    ],
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:    "1": [
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:        {
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "devices": [
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "/dev/loop4"
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            ],
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "lv_name": "ceph_lv1",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "lv_size": "21470642176",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "name": "ceph_lv1",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "tags": {
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.cluster_name": "ceph",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.crush_device_class": "",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.encrypted": "0",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.objectstore": "bluestore",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.osd_id": "1",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.type": "block",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.vdo": "0",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.with_tpm": "0"
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            },
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "type": "block",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "vg_name": "ceph_vg1"
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:        }
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:    ],
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:    "2": [
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:        {
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "devices": [
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "/dev/loop5"
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            ],
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "lv_name": "ceph_lv2",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "lv_size": "21470642176",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "name": "ceph_lv2",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "tags": {
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.cluster_name": "ceph",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.crush_device_class": "",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.encrypted": "0",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.objectstore": "bluestore",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.osd_id": "2",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.type": "block",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.vdo": "0",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:                "ceph.with_tpm": "0"
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            },
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "type": "block",
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:            "vg_name": "ceph_vg2"
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:        }
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]:    ]
Jan 30 23:19:53 np0005603435 sweet_khayyam[94633]: }
Jan 30 23:19:53 np0005603435 systemd[1]: libpod-baf3e762e29fef4fc9c7cf734df89eb08fa51918ecf80f444f55e5cc7195099d.scope: Deactivated successfully.
Jan 30 23:19:53 np0005603435 podman[94617]: 2026-01-31 04:19:53.839689581 +0000 UTC m=+0.434964398 container died baf3e762e29fef4fc9c7cf734df89eb08fa51918ecf80f444f55e5cc7195099d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:53 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5ad773045d377abc646ae49b37b07c40b0cc9e6aa82d0bc20d09d593e63dcd00-merged.mount: Deactivated successfully.
Jan 30 23:19:53 np0005603435 podman[94617]: 2026-01-31 04:19:53.885117365 +0000 UTC m=+0.480392172 container remove baf3e762e29fef4fc9c7cf734df89eb08fa51918ecf80f444f55e5cc7195099d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_khayyam, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:53 np0005603435 systemd[1]: libpod-conmon-baf3e762e29fef4fc9c7cf734df89eb08fa51918ecf80f444f55e5cc7195099d.scope: Deactivated successfully.
Jan 30 23:19:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:19:54 np0005603435 python3[94695]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:54 np0005603435 podman[94746]: 2026-01-31 04:19:54.148200997 +0000 UTC m=+0.057210648 container create 4eb3028bb8d7f8438f021cc64f2e706fd8bee45189f6f23ccca3e9b0838c6ce7 (image=quay.io/ceph/ceph:v20, name=determined_turing, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 30 23:19:54 np0005603435 systemd[1]: Started libpod-conmon-4eb3028bb8d7f8438f021cc64f2e706fd8bee45189f6f23ccca3e9b0838c6ce7.scope.
Jan 30 23:19:54 np0005603435 podman[94746]: 2026-01-31 04:19:54.122205429 +0000 UTC m=+0.031215100 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:54 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:54 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7f629c8dca99f843a78cf83f5d708e46eecd3e095689d993bb4f2c2a826e25/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:54 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c7f629c8dca99f843a78cf83f5d708e46eecd3e095689d993bb4f2c2a826e25/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:54 np0005603435 podman[94746]: 2026-01-31 04:19:54.2486432 +0000 UTC m=+0.157652881 container init 4eb3028bb8d7f8438f021cc64f2e706fd8bee45189f6f23ccca3e9b0838c6ce7 (image=quay.io/ceph/ceph:v20, name=determined_turing, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:54 np0005603435 podman[94746]: 2026-01-31 04:19:54.257072181 +0000 UTC m=+0.166081832 container start 4eb3028bb8d7f8438f021cc64f2e706fd8bee45189f6f23ccca3e9b0838c6ce7 (image=quay.io/ceph/ceph:v20, name=determined_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 30 23:19:54 np0005603435 podman[94746]: 2026-01-31 04:19:54.261452135 +0000 UTC m=+0.170461856 container attach 4eb3028bb8d7f8438f021cc64f2e706fd8bee45189f6f23ccca3e9b0838c6ce7 (image=quay.io/ceph/ceph:v20, name=determined_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 30 23:19:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:54 np0005603435 podman[94778]: 2026-01-31 04:19:54.382701115 +0000 UTC m=+0.063374550 container create b0566fcbfb967322b5a65e1a50bf2107c47c49290d3e7a174a9cb06b12e0a150 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 30 23:19:54 np0005603435 systemd[1]: Started libpod-conmon-b0566fcbfb967322b5a65e1a50bf2107c47c49290d3e7a174a9cb06b12e0a150.scope.
Jan 30 23:19:54 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:54 np0005603435 podman[94778]: 2026-01-31 04:19:54.438214596 +0000 UTC m=+0.118888091 container init b0566fcbfb967322b5a65e1a50bf2107c47c49290d3e7a174a9cb06b12e0a150 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 30 23:19:54 np0005603435 podman[94778]: 2026-01-31 04:19:54.441772962 +0000 UTC m=+0.122446367 container start b0566fcbfb967322b5a65e1a50bf2107c47c49290d3e7a174a9cb06b12e0a150 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:54 np0005603435 objective_maxwell[94813]: 167 167
Jan 30 23:19:54 np0005603435 systemd[1]: libpod-b0566fcbfb967322b5a65e1a50bf2107c47c49290d3e7a174a9cb06b12e0a150.scope: Deactivated successfully.
Jan 30 23:19:54 np0005603435 podman[94778]: 2026-01-31 04:19:54.353191062 +0000 UTC m=+0.033864557 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:54 np0005603435 podman[94778]: 2026-01-31 04:19:54.446206457 +0000 UTC m=+0.126879882 container attach b0566fcbfb967322b5a65e1a50bf2107c47c49290d3e7a174a9cb06b12e0a150 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_maxwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:54 np0005603435 podman[94778]: 2026-01-31 04:19:54.446900352 +0000 UTC m=+0.127573827 container died b0566fcbfb967322b5a65e1a50bf2107c47c49290d3e7a174a9cb06b12e0a150 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:54 np0005603435 systemd[1]: var-lib-containers-storage-overlay-406099563fe37cb2d93083aba71845527551d6d10d908e3567c267600164128a-merged.mount: Deactivated successfully.
Jan 30 23:19:54 np0005603435 podman[94778]: 2026-01-31 04:19:54.495657537 +0000 UTC m=+0.176330962 container remove b0566fcbfb967322b5a65e1a50bf2107c47c49290d3e7a174a9cb06b12e0a150 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_maxwell, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:19:54 np0005603435 systemd[1]: libpod-conmon-b0566fcbfb967322b5a65e1a50bf2107c47c49290d3e7a174a9cb06b12e0a150.scope: Deactivated successfully.
Jan 30 23:19:54 np0005603435 podman[94839]: 2026-01-31 04:19:54.621285431 +0000 UTC m=+0.047706154 container create 75a6436fce8632132cafc1387267bb091e7b4a3a16b5f551580b6f48432e10db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 30 23:19:54 np0005603435 systemd[1]: Started libpod-conmon-75a6436fce8632132cafc1387267bb091e7b4a3a16b5f551580b6f48432e10db.scope.
Jan 30 23:19:54 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:54 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f843e2a5257a51b2cc544ab2e862369d6e3accfc377d96b2e189f1f510bfa29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:54 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f843e2a5257a51b2cc544ab2e862369d6e3accfc377d96b2e189f1f510bfa29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:54 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f843e2a5257a51b2cc544ab2e862369d6e3accfc377d96b2e189f1f510bfa29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:54 np0005603435 podman[94839]: 2026-01-31 04:19:54.60584939 +0000 UTC m=+0.032270143 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:54 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f843e2a5257a51b2cc544ab2e862369d6e3accfc377d96b2e189f1f510bfa29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:54 np0005603435 podman[94839]: 2026-01-31 04:19:54.70890585 +0000 UTC m=+0.135326603 container init 75a6436fce8632132cafc1387267bb091e7b4a3a16b5f551580b6f48432e10db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lewin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:54 np0005603435 podman[94839]: 2026-01-31 04:19:54.717858232 +0000 UTC m=+0.144279005 container start 75a6436fce8632132cafc1387267bb091e7b4a3a16b5f551580b6f48432e10db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lewin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:54 np0005603435 podman[94839]: 2026-01-31 04:19:54.722389519 +0000 UTC m=+0.148810322 container attach 75a6436fce8632132cafc1387267bb091e7b4a3a16b5f551580b6f48432e10db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lewin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:19:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3423046077' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:19:54 np0005603435 determined_turing[94762]: 
Jan 30 23:19:54 np0005603435 determined_turing[94762]: {"epoch":1,"fsid":"95d2f419-0dd0-56f2-a094-353f8c7597ed","modified":"2026-01-31T04:17:43.460314Z","created":"2026-01-31T04:17:43.460314Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Jan 30 23:19:54 np0005603435 determined_turing[94762]: dumped monmap epoch 1
Jan 30 23:19:54 np0005603435 systemd[1]: libpod-4eb3028bb8d7f8438f021cc64f2e706fd8bee45189f6f23ccca3e9b0838c6ce7.scope: Deactivated successfully.
Jan 30 23:19:54 np0005603435 podman[94746]: 2026-01-31 04:19:54.78958916 +0000 UTC m=+0.698598771 container died 4eb3028bb8d7f8438f021cc64f2e706fd8bee45189f6f23ccca3e9b0838c6ce7 (image=quay.io/ceph/ceph:v20, name=determined_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 30 23:19:54 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5c7f629c8dca99f843a78cf83f5d708e46eecd3e095689d993bb4f2c2a826e25-merged.mount: Deactivated successfully.
Jan 30 23:19:54 np0005603435 podman[94746]: 2026-01-31 04:19:54.824035559 +0000 UTC m=+0.733045180 container remove 4eb3028bb8d7f8438f021cc64f2e706fd8bee45189f6f23ccca3e9b0838c6ce7 (image=quay.io/ceph/ceph:v20, name=determined_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:54 np0005603435 systemd[1]: libpod-conmon-4eb3028bb8d7f8438f021cc64f2e706fd8bee45189f6f23ccca3e9b0838c6ce7.scope: Deactivated successfully.
Jan 30 23:19:55 np0005603435 python3[94948]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:55 np0005603435 lvm[94973]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:19:55 np0005603435 lvm[94973]: VG ceph_vg0 finished
Jan 30 23:19:55 np0005603435 lvm[94982]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:19:55 np0005603435 lvm[94982]: VG ceph_vg1 finished
Jan 30 23:19:55 np0005603435 lvm[94990]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:19:55 np0005603435 lvm[94990]: VG ceph_vg2 finished
Jan 30 23:19:55 np0005603435 podman[94971]: 2026-01-31 04:19:55.361236388 +0000 UTC m=+0.043731738 container create 6d55fc31562b0f501e45f5f74e2a3426a72acee77725872614d0ca2a7259cf33 (image=quay.io/ceph/ceph:v20, name=upbeat_matsumoto, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 30 23:19:55 np0005603435 systemd[1]: Started libpod-conmon-6d55fc31562b0f501e45f5f74e2a3426a72acee77725872614d0ca2a7259cf33.scope.
Jan 30 23:19:55 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80696e91ff887a8f82cfdb5264b4bceceb5f4f8027859c0ddf7e32dafa26cf30/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80696e91ff887a8f82cfdb5264b4bceceb5f4f8027859c0ddf7e32dafa26cf30/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:55 np0005603435 podman[94971]: 2026-01-31 04:19:55.434809706 +0000 UTC m=+0.117305146 container init 6d55fc31562b0f501e45f5f74e2a3426a72acee77725872614d0ca2a7259cf33 (image=quay.io/ceph/ceph:v20, name=upbeat_matsumoto, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 30 23:19:55 np0005603435 podman[94971]: 2026-01-31 04:19:55.340440162 +0000 UTC m=+0.022935542 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:55 np0005603435 podman[94971]: 2026-01-31 04:19:55.440959868 +0000 UTC m=+0.123455218 container start 6d55fc31562b0f501e45f5f74e2a3426a72acee77725872614d0ca2a7259cf33 (image=quay.io/ceph/ceph:v20, name=upbeat_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 30 23:19:55 np0005603435 podman[94971]: 2026-01-31 04:19:55.44616783 +0000 UTC m=+0.128663230 container attach 6d55fc31562b0f501e45f5f74e2a3426a72acee77725872614d0ca2a7259cf33 (image=quay.io/ceph/ceph:v20, name=upbeat_matsumoto, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 30 23:19:55 np0005603435 nostalgic_lewin[94856]: {}
Jan 30 23:19:55 np0005603435 systemd[1]: libpod-75a6436fce8632132cafc1387267bb091e7b4a3a16b5f551580b6f48432e10db.scope: Deactivated successfully.
Jan 30 23:19:55 np0005603435 systemd[1]: libpod-75a6436fce8632132cafc1387267bb091e7b4a3a16b5f551580b6f48432e10db.scope: Consumed 1.075s CPU time.
Jan 30 23:19:55 np0005603435 podman[94839]: 2026-01-31 04:19:55.475128881 +0000 UTC m=+0.901549614 container died 75a6436fce8632132cafc1387267bb091e7b4a3a16b5f551580b6f48432e10db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 30 23:19:55 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5f843e2a5257a51b2cc544ab2e862369d6e3accfc377d96b2e189f1f510bfa29-merged.mount: Deactivated successfully.
Jan 30 23:19:55 np0005603435 podman[94839]: 2026-01-31 04:19:55.52127633 +0000 UTC m=+0.947697063 container remove 75a6436fce8632132cafc1387267bb091e7b4a3a16b5f551580b6f48432e10db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lewin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 30 23:19:55 np0005603435 systemd[1]: libpod-conmon-75a6436fce8632132cafc1387267bb091e7b4a3a16b5f551580b6f48432e10db.scope: Deactivated successfully.
Jan 30 23:19:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:19:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:19:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:55 np0005603435 ceph-mgr[75599]: [progress INFO root] update: starting ev 2304570f-9e07-4aa8-9239-9ecab293ddba (Updating rgw.rgw deployment (+1 -> 1))
Jan 30 23:19:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zvcgqa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 30 23:19:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zvcgqa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 30 23:19:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zvcgqa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 30 23:19:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 30 23:19:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:19:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:19:55 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.zvcgqa on compute-0
Jan 30 23:19:55 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.zvcgqa on compute-0
Jan 30 23:19:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 30 23:19:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2052387212' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 30 23:19:55 np0005603435 upbeat_matsumoto[94995]: [client.openstack]
Jan 30 23:19:55 np0005603435 upbeat_matsumoto[94995]: #011key = AQBEgn1pAAAAABAAHVCE9hsqv+sN/h7zKRq/ww==
Jan 30 23:19:55 np0005603435 upbeat_matsumoto[94995]: #011caps mgr = "allow *"
Jan 30 23:19:55 np0005603435 upbeat_matsumoto[94995]: #011caps mon = "profile rbd"
Jan 30 23:19:55 np0005603435 upbeat_matsumoto[94995]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 30 23:19:55 np0005603435 systemd[1]: libpod-6d55fc31562b0f501e45f5f74e2a3426a72acee77725872614d0ca2a7259cf33.scope: Deactivated successfully.
Jan 30 23:19:55 np0005603435 podman[94971]: 2026-01-31 04:19:55.955934891 +0000 UTC m=+0.638430241 container died 6d55fc31562b0f501e45f5f74e2a3426a72acee77725872614d0ca2a7259cf33 (image=quay.io/ceph/ceph:v20, name=upbeat_matsumoto, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:55 np0005603435 systemd[1]: var-lib-containers-storage-overlay-80696e91ff887a8f82cfdb5264b4bceceb5f4f8027859c0ddf7e32dafa26cf30-merged.mount: Deactivated successfully.
Jan 30 23:19:55 np0005603435 podman[94971]: 2026-01-31 04:19:55.993036166 +0000 UTC m=+0.675531506 container remove 6d55fc31562b0f501e45f5f74e2a3426a72acee77725872614d0ca2a7259cf33 (image=quay.io/ceph/ceph:v20, name=upbeat_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 30 23:19:56 np0005603435 systemd[1]: libpod-conmon-6d55fc31562b0f501e45f5f74e2a3426a72acee77725872614d0ca2a7259cf33.scope: Deactivated successfully.
Jan 30 23:19:56 np0005603435 podman[95137]: 2026-01-31 04:19:56.115113914 +0000 UTC m=+0.052576098 container create a49671ba99a9bf6e3a72d21b520fd5bf5fa307b029278c352e72de38f09da46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:56 np0005603435 systemd[1]: Started libpod-conmon-a49671ba99a9bf6e3a72d21b520fd5bf5fa307b029278c352e72de38f09da46e.scope.
Jan 30 23:19:56 np0005603435 podman[95137]: 2026-01-31 04:19:56.087905821 +0000 UTC m=+0.025368065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:56 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:56 np0005603435 podman[95137]: 2026-01-31 04:19:56.208020196 +0000 UTC m=+0.145482400 container init a49671ba99a9bf6e3a72d21b520fd5bf5fa307b029278c352e72de38f09da46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_blackwell, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 30 23:19:56 np0005603435 podman[95137]: 2026-01-31 04:19:56.216180901 +0000 UTC m=+0.153643095 container start a49671ba99a9bf6e3a72d21b520fd5bf5fa307b029278c352e72de38f09da46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_blackwell, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 30 23:19:56 np0005603435 podman[95137]: 2026-01-31 04:19:56.220739559 +0000 UTC m=+0.158201813 container attach a49671ba99a9bf6e3a72d21b520fd5bf5fa307b029278c352e72de38f09da46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_blackwell, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 30 23:19:56 np0005603435 competent_blackwell[95153]: 167 167
Jan 30 23:19:56 np0005603435 systemd[1]: libpod-a49671ba99a9bf6e3a72d21b520fd5bf5fa307b029278c352e72de38f09da46e.scope: Deactivated successfully.
Jan 30 23:19:56 np0005603435 podman[95137]: 2026-01-31 04:19:56.222549258 +0000 UTC m=+0.160011442 container died a49671ba99a9bf6e3a72d21b520fd5bf5fa307b029278c352e72de38f09da46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:19:56 np0005603435 systemd[1]: var-lib-containers-storage-overlay-7d0177dd345745fee361341ddbf6f69785a0fb781bde96209a28cb3df736a60b-merged.mount: Deactivated successfully.
Jan 30 23:19:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:56 np0005603435 podman[95137]: 2026-01-31 04:19:56.276200978 +0000 UTC m=+0.213663172 container remove a49671ba99a9bf6e3a72d21b520fd5bf5fa307b029278c352e72de38f09da46e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_blackwell, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:56 np0005603435 systemd[1]: libpod-conmon-a49671ba99a9bf6e3a72d21b520fd5bf5fa307b029278c352e72de38f09da46e.scope: Deactivated successfully.
Jan 30 23:19:56 np0005603435 systemd[1]: Reloading.
Jan 30 23:19:56 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:19:56 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:19:56 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:56 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:56 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zvcgqa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 30 23:19:56 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zvcgqa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 30 23:19:56 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:56 np0005603435 ceph-mon[75307]: Deploying daemon rgw.rgw.compute-0.zvcgqa on compute-0
Jan 30 23:19:56 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/2052387212' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 30 23:19:56 np0005603435 systemd[1]: Reloading.
Jan 30 23:19:56 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:19:56 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:19:56 np0005603435 systemd[1]: Starting Ceph rgw.rgw.compute-0.zvcgqa for 95d2f419-0dd0-56f2-a094-353f8c7597ed...
Jan 30 23:19:57 np0005603435 podman[95399]: 2026-01-31 04:19:57.213443025 +0000 UTC m=+0.045873125 container create 0162f99c311035fab5eea1272627af24382d58dd6731015a388e445fcff6b343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-rgw-rgw-compute-0-zvcgqa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb03c251ef8a5f225265f70d754ea4dd8afa4b0c2b3d79d78bf91e457dcffe6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb03c251ef8a5f225265f70d754ea4dd8afa4b0c2b3d79d78bf91e457dcffe6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb03c251ef8a5f225265f70d754ea4dd8afa4b0c2b3d79d78bf91e457dcffe6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb03c251ef8a5f225265f70d754ea4dd8afa4b0c2b3d79d78bf91e457dcffe6f/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.zvcgqa supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:57 np0005603435 podman[95399]: 2026-01-31 04:19:57.267121796 +0000 UTC m=+0.099551876 container init 0162f99c311035fab5eea1272627af24382d58dd6731015a388e445fcff6b343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-rgw-rgw-compute-0-zvcgqa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:19:57 np0005603435 podman[95399]: 2026-01-31 04:19:57.276122609 +0000 UTC m=+0.108552679 container start 0162f99c311035fab5eea1272627af24382d58dd6731015a388e445fcff6b343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-rgw-rgw-compute-0-zvcgqa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 30 23:19:57 np0005603435 bash[95399]: 0162f99c311035fab5eea1272627af24382d58dd6731015a388e445fcff6b343
Jan 30 23:19:57 np0005603435 podman[95399]: 2026-01-31 04:19:57.195598092 +0000 UTC m=+0.028028162 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:57 np0005603435 systemd[1]: Started Ceph rgw.rgw.compute-0.zvcgqa for 95d2f419-0dd0-56f2-a094-353f8c7597ed.
Jan 30 23:19:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:19:57 np0005603435 radosgw[95468]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 30 23:19:57 np0005603435 radosgw[95468]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Jan 30 23:19:57 np0005603435 radosgw[95468]: framework: beast
Jan 30 23:19:57 np0005603435 radosgw[95468]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 30 23:19:57 np0005603435 radosgw[95468]: init_numa not setting numa affinity
Jan 30 23:19:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:19:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 30 23:19:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:57 np0005603435 ceph-mgr[75599]: [progress INFO root] complete: finished ev 2304570f-9e07-4aa8-9239-9ecab293ddba (Updating rgw.rgw deployment (+1 -> 1))
Jan 30 23:19:57 np0005603435 ceph-mgr[75599]: [progress INFO root] Completed event 2304570f-9e07-4aa8-9239-9ecab293ddba (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Jan 30 23:19:57 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Jan 30 23:19:57 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 30 23:19:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 30 23:19:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 30 23:19:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:57 np0005603435 ceph-mgr[75599]: [progress INFO root] update: starting ev 4cccb19e-6a82-4e74-a3c4-ce2db4114df9 (Updating mds.cephfs deployment (+1 -> 1))
Jan 30 23:19:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xaqauc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 30 23:19:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xaqauc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 30 23:19:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xaqauc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 30 23:19:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:19:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:19:57 np0005603435 ceph-mgr[75599]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.xaqauc on compute-0
Jan 30 23:19:57 np0005603435 ceph-mgr[75599]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.xaqauc on compute-0
Jan 30 23:19:57 np0005603435 ansible-async_wrapper.py[95469]: Invoked with j266082459345 30 /home/zuul/.ansible/tmp/ansible-tmp-1769833196.9332352-37002-34057295444642/AnsiballZ_command.py _
Jan 30 23:19:57 np0005603435 ansible-async_wrapper.py[95531]: Starting module and watcher
Jan 30 23:19:57 np0005603435 ansible-async_wrapper.py[95531]: Start watching 95534 (30)
Jan 30 23:19:57 np0005603435 ansible-async_wrapper.py[95534]: Start module (95534)
Jan 30 23:19:57 np0005603435 ansible-async_wrapper.py[95469]: Return async_wrapper task started.
Jan 30 23:19:57 np0005603435 python3[95540]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:57 np0005603435 podman[95553]: 2026-01-31 04:19:57.669664288 +0000 UTC m=+0.052147019 container create 64d2353fea44c4438547c053a95099e7e1d939eadd853402641114450f205446 (image=quay.io/ceph/ceph:v20, name=quizzical_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 30 23:19:57 np0005603435 systemd[1]: Started libpod-conmon-64d2353fea44c4438547c053a95099e7e1d939eadd853402641114450f205446.scope.
Jan 30 23:19:57 np0005603435 podman[95553]: 2026-01-31 04:19:57.639949991 +0000 UTC m=+0.022432732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:57 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3735816c5d75de995cca16060d851765071c3c45957aac310ed800f86f2141f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3735816c5d75de995cca16060d851765071c3c45957aac310ed800f86f2141f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:57 np0005603435 podman[95553]: 2026-01-31 04:19:57.770590422 +0000 UTC m=+0.153073153 container init 64d2353fea44c4438547c053a95099e7e1d939eadd853402641114450f205446 (image=quay.io/ceph/ceph:v20, name=quizzical_lederberg, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 30 23:19:57 np0005603435 podman[95553]: 2026-01-31 04:19:57.780685779 +0000 UTC m=+0.163168500 container start 64d2353fea44c4438547c053a95099e7e1d939eadd853402641114450f205446 (image=quay.io/ceph/ceph:v20, name=quizzical_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:57 np0005603435 podman[95553]: 2026-01-31 04:19:57.784289746 +0000 UTC m=+0.166772447 container attach 64d2353fea44c4438547c053a95099e7e1d939eadd853402641114450f205446 (image=quay.io/ceph/ceph:v20, name=quizzical_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 30 23:19:57 np0005603435 podman[95614]: 2026-01-31 04:19:57.948712212 +0000 UTC m=+0.070291199 container create 111e60922a4add70087a71528c1e827c63f6167848f60bbb300ff645252e239b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:58 np0005603435 systemd[1]: Started libpod-conmon-111e60922a4add70087a71528c1e827c63f6167848f60bbb300ff645252e239b.scope.
Jan 30 23:19:58 np0005603435 podman[95614]: 2026-01-31 04:19:57.916122633 +0000 UTC m=+0.037701670 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:58 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:58 np0005603435 podman[95614]: 2026-01-31 04:19:58.045376495 +0000 UTC m=+0.166955532 container init 111e60922a4add70087a71528c1e827c63f6167848f60bbb300ff645252e239b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:19:58 np0005603435 podman[95614]: 2026-01-31 04:19:58.055272347 +0000 UTC m=+0.176851334 container start 111e60922a4add70087a71528c1e827c63f6167848f60bbb300ff645252e239b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tu, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:19:58 np0005603435 hardcore_tu[95649]: 167 167
Jan 30 23:19:58 np0005603435 systemd[1]: libpod-111e60922a4add70087a71528c1e827c63f6167848f60bbb300ff645252e239b.scope: Deactivated successfully.
Jan 30 23:19:58 np0005603435 conmon[95649]: conmon 111e60922a4add70087a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-111e60922a4add70087a71528c1e827c63f6167848f60bbb300ff645252e239b.scope/container/memory.events
Jan 30 23:19:58 np0005603435 podman[95614]: 2026-01-31 04:19:58.062520882 +0000 UTC m=+0.184099859 container attach 111e60922a4add70087a71528c1e827c63f6167848f60bbb300ff645252e239b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:58 np0005603435 podman[95614]: 2026-01-31 04:19:58.06335388 +0000 UTC m=+0.184932867 container died 111e60922a4add70087a71528c1e827c63f6167848f60bbb300ff645252e239b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tu, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:58 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b8983436ff6716cea697a3fb9d4471e329de3e7551ff723fefbba202b7fc7192-merged.mount: Deactivated successfully.
Jan 30 23:19:58 np0005603435 podman[95614]: 2026-01-31 04:19:58.110679045 +0000 UTC m=+0.232258002 container remove 111e60922a4add70087a71528c1e827c63f6167848f60bbb300ff645252e239b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 30 23:19:58 np0005603435 systemd[1]: libpod-conmon-111e60922a4add70087a71528c1e827c63f6167848f60bbb300ff645252e239b.scope: Deactivated successfully.
Jan 30 23:19:58 np0005603435 systemd[1]: Reloading.
Jan 30 23:19:58 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 30 23:19:58 np0005603435 quizzical_lederberg[95584]: 
Jan 30 23:19:58 np0005603435 quizzical_lederberg[95584]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 30 23:19:58 np0005603435 podman[95553]: 2026-01-31 04:19:58.243089544 +0000 UTC m=+0.625572245 container died 64d2353fea44c4438547c053a95099e7e1d939eadd853402641114450f205446 (image=quay.io/ceph/ceph:v20, name=quizzical_lederberg, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:58 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:19:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:19:58 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:19:58 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:58 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:58 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:58 np0005603435 ceph-mon[75307]: Saving service rgw.rgw spec with placement compute-0
Jan 30 23:19:58 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:58 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:58 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xaqauc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 30 23:19:58 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xaqauc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 30 23:19:58 np0005603435 ceph-mon[75307]: Deploying daemon mds.cephfs.compute-0.xaqauc on compute-0
Jan 30 23:19:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 30 23:19:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 30 23:19:58 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 30 23:19:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 30 23:19:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1463060646' entity='client.rgw.rgw.compute-0.zvcgqa' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 30 23:19:58 np0005603435 systemd[1]: libpod-64d2353fea44c4438547c053a95099e7e1d939eadd853402641114450f205446.scope: Deactivated successfully.
Jan 30 23:19:58 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b3735816c5d75de995cca16060d851765071c3c45957aac310ed800f86f2141f-merged.mount: Deactivated successfully.
Jan 30 23:19:58 np0005603435 podman[95553]: 2026-01-31 04:19:58.48352855 +0000 UTC m=+0.866011281 container remove 64d2353fea44c4438547c053a95099e7e1d939eadd853402641114450f205446 (image=quay.io/ceph/ceph:v20, name=quizzical_lederberg, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:19:58 np0005603435 systemd[1]: libpod-conmon-64d2353fea44c4438547c053a95099e7e1d939eadd853402641114450f205446.scope: Deactivated successfully.
Jan 30 23:19:58 np0005603435 systemd[1]: Reloading.
Jan 30 23:19:58 np0005603435 ansible-async_wrapper.py[95534]: Module complete (95534)
Jan 30 23:19:58 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:19:58 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:19:58 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 35 pg[8.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:19:58 np0005603435 systemd[1]: Starting Ceph mds.cephfs.compute-0.xaqauc for 95d2f419-0dd0-56f2-a094-353f8c7597ed...
Jan 30 23:19:58 np0005603435 python3[95805]: ansible-ansible.legacy.async_status Invoked with jid=j266082459345.95469 mode=status _async_dir=/root/.ansible_async
Jan 30 23:19:59 np0005603435 podman[95875]: 2026-01-31 04:19:59.026062874 +0000 UTC m=+0.043374711 container create 0802fb2aceef7e11874d5d1739820c3c6eb7f6cd178923a1a2eb2b5845f19662 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mds-cephfs-compute-0-xaqauc, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:19:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50dba728b543468c510341132f9f4e738c51edd83003dc88b5c8e371bf46a515/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50dba728b543468c510341132f9f4e738c51edd83003dc88b5c8e371bf46a515/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50dba728b543468c510341132f9f4e738c51edd83003dc88b5c8e371bf46a515/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50dba728b543468c510341132f9f4e738c51edd83003dc88b5c8e371bf46a515/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.xaqauc supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:59 np0005603435 podman[95875]: 2026-01-31 04:19:59.003725655 +0000 UTC m=+0.021037602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:19:59 np0005603435 podman[95875]: 2026-01-31 04:19:59.100669634 +0000 UTC m=+0.117981481 container init 0802fb2aceef7e11874d5d1739820c3c6eb7f6cd178923a1a2eb2b5845f19662 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mds-cephfs-compute-0-xaqauc, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 30 23:19:59 np0005603435 podman[95875]: 2026-01-31 04:19:59.106847376 +0000 UTC m=+0.124159223 container start 0802fb2aceef7e11874d5d1739820c3c6eb7f6cd178923a1a2eb2b5845f19662 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mds-cephfs-compute-0-xaqauc, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:19:59 np0005603435 bash[95875]: 0802fb2aceef7e11874d5d1739820c3c6eb7f6cd178923a1a2eb2b5845f19662
Jan 30 23:19:59 np0005603435 systemd[1]: Started Ceph mds.cephfs.compute-0.xaqauc for 95d2f419-0dd0-56f2-a094-353f8c7597ed.
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: set uid:gid to 167:167 (ceph:ceph)
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: main not setting numa affinity
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: pidfile_write: ignore empty --pid-file
Jan 30 23:19:59 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mds-cephfs-compute-0-xaqauc[95918]: starting mds.cephfs.compute-0.xaqauc at 
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc Updating MDS map to version 2 from mon.0
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:59 np0005603435 python3[95915]: ansible-ansible.legacy.async_status Invoked with jid=j266082459345.95469 mode=cleanup _async_dir=/root/.ansible_async
Jan 30 23:19:59 np0005603435 ceph-mgr[75599]: [progress INFO root] complete: finished ev 4cccb19e-6a82-4e74-a3c4-ce2db4114df9 (Updating mds.cephfs deployment (+1 -> 1))
Jan 30 23:19:59 np0005603435 ceph-mgr[75599]: [progress INFO root] Completed event 4cccb19e-6a82-4e74-a3c4-ce2db4114df9 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1463060646' entity='client.rgw.rgw.compute-0.zvcgqa' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/1463060646' entity='client.rgw.rgw.compute-0.zvcgqa' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:19:59 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 36 pg[8.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).mds e3 new map
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2026-01-31T04:19:59:387741+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T04:19:48.635987+0000#012modified#0112026-01-31T04:19:48.635987+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.xaqauc{-1:14253} state up:standby seq 1 addr [v2:192.168.122.100:6814/192309935,v1:192.168.122.100:6815/192309935] compat {c=[1],r=[1],i=[1fff]}]
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc Updating MDS map to version 3 from mon.0
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc Monitors have assigned me to become a standby
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/192309935,v1:192.168.122.100:6815/192309935] up:boot
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/192309935,v1:192.168.122.100:6815/192309935] as mds.0
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.xaqauc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.xaqauc"} v 0)
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.xaqauc"} : dispatch
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).mds e3 all = 0
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).mds e4 new map
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2026-01-31T04:19:59:398332+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T04:19:48.635987+0000#012modified#0112026-01-31T04:19:59.398322+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14253}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.xaqauc{0:14253} state up:creating seq 1 addr [v2:192.168.122.100:6814/192309935,v1:192.168.122.100:6815/192309935] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc Updating MDS map to version 4 from mon.0
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.xaqauc=up:creating}
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.0.cache creating system inode with ino:0x1
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.0.cache creating system inode with ino:0x100
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.0.cache creating system inode with ino:0x600
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.0.cache creating system inode with ino:0x601
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.0.cache creating system inode with ino:0x602
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.0.cache creating system inode with ino:0x603
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.0.cache creating system inode with ino:0x604
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.0.cache creating system inode with ino:0x605
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.0.cache creating system inode with ino:0x606
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.0.cache creating system inode with ino:0x607
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.0.cache creating system inode with ino:0x608
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.0.cache creating system inode with ino:0x609
Jan 30 23:19:59 np0005603435 ceph-mds[95922]: mds.0.4 creating_done
Jan 30 23:19:59 np0005603435 ceph-mon[75307]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.xaqauc is now active in filesystem cephfs as rank 0
Jan 30 23:19:59 np0005603435 python3[96620]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:19:59 np0005603435 podman[96657]: 2026-01-31 04:19:59.720109987 +0000 UTC m=+0.045677921 container create 23d248996eab3817a5f7058af226d785a5591273cf8141624db90c52f6e0bcc1 (image=quay.io/ceph/ceph:v20, name=pensive_williams, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:19:59 np0005603435 podman[96655]: 2026-01-31 04:19:59.727864623 +0000 UTC m=+0.056784059 container exec 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:19:59 np0005603435 systemd[1]: Started libpod-conmon-23d248996eab3817a5f7058af226d785a5591273cf8141624db90c52f6e0bcc1.scope.
Jan 30 23:19:59 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:19:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f641ad0bc2ab0cf4a86f036598db54f7950ffa804438f842624e55c033f600ca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f641ad0bc2ab0cf4a86f036598db54f7950ffa804438f842624e55c033f600ca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:19:59 np0005603435 podman[96657]: 2026-01-31 04:19:59.699387062 +0000 UTC m=+0.024954996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:19:59 np0005603435 podman[96657]: 2026-01-31 04:19:59.810918644 +0000 UTC m=+0.136486568 container init 23d248996eab3817a5f7058af226d785a5591273cf8141624db90c52f6e0bcc1 (image=quay.io/ceph/ceph:v20, name=pensive_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 30 23:19:59 np0005603435 podman[96657]: 2026-01-31 04:19:59.818121838 +0000 UTC m=+0.143689752 container start 23d248996eab3817a5f7058af226d785a5591273cf8141624db90c52f6e0bcc1 (image=quay.io/ceph/ceph:v20, name=pensive_williams, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 30 23:19:59 np0005603435 podman[96657]: 2026-01-31 04:19:59.821559872 +0000 UTC m=+0.147127796 container attach 23d248996eab3817a5f7058af226d785a5591273cf8141624db90c52f6e0bcc1 (image=quay.io/ceph/ceph:v20, name=pensive_williams, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 30 23:19:59 np0005603435 podman[96655]: 2026-01-31 04:19:59.821630834 +0000 UTC m=+0.150550240 container exec_died 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:20:00 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 30 23:20:00 np0005603435 pensive_williams[96691]: 
Jan 30 23:20:00 np0005603435 pensive_williams[96691]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 30 23:20:00 np0005603435 podman[96657]: 2026-01-31 04:20:00.21275546 +0000 UTC m=+0.538323364 container died 23d248996eab3817a5f7058af226d785a5591273cf8141624db90c52f6e0bcc1 (image=quay.io/ceph/ceph:v20, name=pensive_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:20:00 np0005603435 systemd[1]: libpod-23d248996eab3817a5f7058af226d785a5591273cf8141624db90c52f6e0bcc1.scope: Deactivated successfully.
Jan 30 23:20:00 np0005603435 systemd[1]: var-lib-containers-storage-overlay-f641ad0bc2ab0cf4a86f036598db54f7950ffa804438f842624e55c033f600ca-merged.mount: Deactivated successfully.
Jan 30 23:20:00 np0005603435 podman[96657]: 2026-01-31 04:20:00.259583564 +0000 UTC m=+0.585151508 container remove 23d248996eab3817a5f7058af226d785a5591273cf8141624db90c52f6e0bcc1 (image=quay.io/ceph/ceph:v20, name=pensive_williams, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:20:00 np0005603435 systemd[1]: libpod-conmon-23d248996eab3817a5f7058af226d785a5591273cf8141624db90c52f6e0bcc1.scope: Deactivated successfully.
Jan 30 23:20:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v84: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 30 23:20:00 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 37 pg[9.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3948454279' entity='client.rgw.rgw.compute-0.zvcgqa' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/1463060646' entity='client.rgw.rgw.compute-0.zvcgqa' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: daemon mds.cephfs.compute-0.xaqauc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: Cluster is now healthy
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: daemon mds.cephfs.compute-0.xaqauc is now active in filesystem cephfs as rank 0
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).mds e5 new map
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2026-01-31T04:20:00:404475+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T04:19:48.635987+0000#012modified#0112026-01-31T04:20:00.404471+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14253}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 14253 members: 14253#012[mds.cephfs.compute-0.xaqauc{0:14253} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/192309935,v1:192.168.122.100:6815/192309935] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Jan 30 23:20:00 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc Updating MDS map to version 5 from mon.0
Jan 30 23:20:00 np0005603435 ceph-mds[95922]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 30 23:20:00 np0005603435 ceph-mds[95922]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 30 23:20:00 np0005603435 ceph-mds[95922]: mds.0.4 recovery_done -- successful recovery!
Jan 30 23:20:00 np0005603435 ceph-mds[95922]: mds.0.4 active_start
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/192309935,v1:192.168.122.100:6815/192309935] up:active
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.xaqauc=up:active}
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:20:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:20:01 np0005603435 podman[96980]: 2026-01-31 04:20:01.02514257 +0000 UTC m=+0.047122541 container create 1c5b8cea750864b183ca623c2cd6c2a360b335b0413dd4ffc3fecd0d46cd1def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_rubin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:20:01 np0005603435 systemd[1]: Started libpod-conmon-1c5b8cea750864b183ca623c2cd6c2a360b335b0413dd4ffc3fecd0d46cd1def.scope.
Jan 30 23:20:01 np0005603435 python3[96969]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:20:01 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:01 np0005603435 podman[96980]: 2026-01-31 04:20:01.009129187 +0000 UTC m=+0.031109178 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:20:01 np0005603435 podman[96980]: 2026-01-31 04:20:01.114072327 +0000 UTC m=+0.136052318 container init 1c5b8cea750864b183ca623c2cd6c2a360b335b0413dd4ffc3fecd0d46cd1def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_rubin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 30 23:20:01 np0005603435 podman[96980]: 2026-01-31 04:20:01.124428909 +0000 UTC m=+0.146408870 container start 1c5b8cea750864b183ca623c2cd6c2a360b335b0413dd4ffc3fecd0d46cd1def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_rubin, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:20:01 np0005603435 heuristic_rubin[96996]: 167 167
Jan 30 23:20:01 np0005603435 systemd[1]: libpod-1c5b8cea750864b183ca623c2cd6c2a360b335b0413dd4ffc3fecd0d46cd1def.scope: Deactivated successfully.
Jan 30 23:20:01 np0005603435 podman[96980]: 2026-01-31 04:20:01.13191455 +0000 UTC m=+0.153894581 container attach 1c5b8cea750864b183ca623c2cd6c2a360b335b0413dd4ffc3fecd0d46cd1def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_rubin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:20:01 np0005603435 podman[96980]: 2026-01-31 04:20:01.132269837 +0000 UTC m=+0.154249808 container died 1c5b8cea750864b183ca623c2cd6c2a360b335b0413dd4ffc3fecd0d46cd1def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_rubin, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 30 23:20:01 np0005603435 podman[96999]: 2026-01-31 04:20:01.158145022 +0000 UTC m=+0.059909086 container create 755bb7a1f0ed9db5dfee3c76ee204eaaeb138530f741433b318f654a044abdde (image=quay.io/ceph/ceph:v20, name=silly_rosalind, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:20:01 np0005603435 systemd[1]: Started libpod-conmon-755bb7a1f0ed9db5dfee3c76ee204eaaeb138530f741433b318f654a044abdde.scope.
Jan 30 23:20:01 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b3de8a0c8527c35b9fc015330c7d20a44e9f1edda85a3694878ee48fe449dbcf-merged.mount: Deactivated successfully.
Jan 30 23:20:01 np0005603435 podman[96980]: 2026-01-31 04:20:01.198956387 +0000 UTC m=+0.220936358 container remove 1c5b8cea750864b183ca623c2cd6c2a360b335b0413dd4ffc3fecd0d46cd1def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 30 23:20:01 np0005603435 systemd[1]: libpod-conmon-1c5b8cea750864b183ca623c2cd6c2a360b335b0413dd4ffc3fecd0d46cd1def.scope: Deactivated successfully.
Jan 30 23:20:01 np0005603435 podman[96999]: 2026-01-31 04:20:01.118381659 +0000 UTC m=+0.020145743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:20:01 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b110febc222292fb21880fd8181d67d39f9c62bc392e2441363790136e7e84/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b110febc222292fb21880fd8181d67d39f9c62bc392e2441363790136e7e84/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:01 np0005603435 podman[96999]: 2026-01-31 04:20:01.246905005 +0000 UTC m=+0.148669169 container init 755bb7a1f0ed9db5dfee3c76ee204eaaeb138530f741433b318f654a044abdde (image=quay.io/ceph/ceph:v20, name=silly_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 30 23:20:01 np0005603435 podman[96999]: 2026-01-31 04:20:01.250791599 +0000 UTC m=+0.152555703 container start 755bb7a1f0ed9db5dfee3c76ee204eaaeb138530f741433b318f654a044abdde (image=quay.io/ceph/ceph:v20, name=silly_rosalind, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 30 23:20:01 np0005603435 podman[96999]: 2026-01-31 04:20:01.256136173 +0000 UTC m=+0.157900277 container attach 755bb7a1f0ed9db5dfee3c76ee204eaaeb138530f741433b318f654a044abdde (image=quay.io/ceph/ceph:v20, name=silly_rosalind, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Jan 30 23:20:01 np0005603435 podman[97040]: 2026-01-31 04:20:01.320770949 +0000 UTC m=+0.040293155 container create ab1bc05b3f2f971c3862ec9d381a2dc0190de9bbf2269f19529c299e659c00f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:20:01 np0005603435 systemd[1]: Started libpod-conmon-ab1bc05b3f2f971c3862ec9d381a2dc0190de9bbf2269f19529c299e659c00f8.scope.
Jan 30 23:20:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 30 23:20:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3948454279' entity='client.rgw.rgw.compute-0.zvcgqa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 30 23:20:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 30 23:20:01 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 30 23:20:01 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 38 pg[9.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:01 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfbff9ad32df0a985185141e4e1a4a0990497f9ff168ea91aeb3d351ab0a6a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfbff9ad32df0a985185141e4e1a4a0990497f9ff168ea91aeb3d351ab0a6a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfbff9ad32df0a985185141e4e1a4a0990497f9ff168ea91aeb3d351ab0a6a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfbff9ad32df0a985185141e4e1a4a0990497f9ff168ea91aeb3d351ab0a6a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfbff9ad32df0a985185141e4e1a4a0990497f9ff168ea91aeb3d351ab0a6a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:01 np0005603435 podman[97040]: 2026-01-31 04:20:01.300400343 +0000 UTC m=+0.019922549 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:20:01 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/3948454279' entity='client.rgw.rgw.compute-0.zvcgqa' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 30 23:20:01 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:01 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:01 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:20:01 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:01 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:20:01 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/3948454279' entity='client.rgw.rgw.compute-0.zvcgqa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 30 23:20:01 np0005603435 podman[97040]: 2026-01-31 04:20:01.41918837 +0000 UTC m=+0.138710596 container init ab1bc05b3f2f971c3862ec9d381a2dc0190de9bbf2269f19529c299e659c00f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:20:01 np0005603435 podman[97040]: 2026-01-31 04:20:01.425514555 +0000 UTC m=+0.145036781 container start ab1bc05b3f2f971c3862ec9d381a2dc0190de9bbf2269f19529c299e659c00f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclean, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:20:01 np0005603435 podman[97040]: 2026-01-31 04:20:01.430201016 +0000 UTC m=+0.149723242 container attach ab1bc05b3f2f971c3862ec9d381a2dc0190de9bbf2269f19529c299e659c00f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 30 23:20:01 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 30 23:20:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.zvcgqa", "name": "rgw_frontends"} v 0)
Jan 30 23:20:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.zvcgqa", "name": "rgw_frontends"} : dispatch
Jan 30 23:20:01 np0005603435 silly_rosalind[97029]: 
Jan 30 23:20:01 np0005603435 silly_rosalind[97029]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Jan 30 23:20:01 np0005603435 systemd[1]: libpod-755bb7a1f0ed9db5dfee3c76ee204eaaeb138530f741433b318f654a044abdde.scope: Deactivated successfully.
Jan 30 23:20:01 np0005603435 podman[96999]: 2026-01-31 04:20:01.760190762 +0000 UTC m=+0.661954866 container died 755bb7a1f0ed9db5dfee3c76ee204eaaeb138530f741433b318f654a044abdde (image=quay.io/ceph/ceph:v20, name=silly_rosalind, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 30 23:20:01 np0005603435 systemd[1]: var-lib-containers-storage-overlay-82b110febc222292fb21880fd8181d67d39f9c62bc392e2441363790136e7e84-merged.mount: Deactivated successfully.
Jan 30 23:20:01 np0005603435 podman[96999]: 2026-01-31 04:20:01.797457591 +0000 UTC m=+0.699221655 container remove 755bb7a1f0ed9db5dfee3c76ee204eaaeb138530f741433b318f654a044abdde (image=quay.io/ceph/ceph:v20, name=silly_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:20:01 np0005603435 systemd[1]: libpod-conmon-755bb7a1f0ed9db5dfee3c76ee204eaaeb138530f741433b318f654a044abdde.scope: Deactivated successfully.
Jan 30 23:20:01 np0005603435 serene_mclean[97075]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:20:01 np0005603435 serene_mclean[97075]: --> All data devices are unavailable
Jan 30 23:20:01 np0005603435 systemd[1]: libpod-ab1bc05b3f2f971c3862ec9d381a2dc0190de9bbf2269f19529c299e659c00f8.scope: Deactivated successfully.
Jan 30 23:20:01 np0005603435 podman[97040]: 2026-01-31 04:20:01.855554117 +0000 UTC m=+0.575076343 container died ab1bc05b3f2f971c3862ec9d381a2dc0190de9bbf2269f19529c299e659c00f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclean, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:20:01 np0005603435 podman[97040]: 2026-01-31 04:20:01.901999263 +0000 UTC m=+0.621521479 container remove ab1bc05b3f2f971c3862ec9d381a2dc0190de9bbf2269f19529c299e659c00f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mclean, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:20:01 np0005603435 systemd[1]: libpod-conmon-ab1bc05b3f2f971c3862ec9d381a2dc0190de9bbf2269f19529c299e659c00f8.scope: Deactivated successfully.
Jan 30 23:20:01 np0005603435 ceph-mgr[75599]: [progress INFO root] Writing back 5 completed events
Jan 30 23:20:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 30 23:20:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:02 np0005603435 systemd[1]: var-lib-containers-storage-overlay-ccfbff9ad32df0a985185141e4e1a4a0990497f9ff168ea91aeb3d351ab0a6a8-merged.mount: Deactivated successfully.
Jan 30 23:20:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v87: 9 pgs: 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 30 23:20:02 np0005603435 podman[97184]: 2026-01-31 04:20:02.342960639 +0000 UTC m=+0.041723056 container create 837fe9678ea225d3078cee4542711f0ba60282441df344b82823150c8310060d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_gates, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 30 23:20:02 np0005603435 systemd[1]: Started libpod-conmon-837fe9678ea225d3078cee4542711f0ba60282441df344b82823150c8310060d.scope.
Jan 30 23:20:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 30 23:20:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 30 23:20:02 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 30 23:20:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 30 23:20:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3948454279' entity='client.rgw.rgw.compute-0.zvcgqa' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 30 23:20:02 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:02 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/3948454279' entity='client.rgw.rgw.compute-0.zvcgqa' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 30 23:20:02 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:02 np0005603435 podman[97184]: 2026-01-31 04:20:02.32248658 +0000 UTC m=+0.021249027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:20:02 np0005603435 podman[97184]: 2026-01-31 04:20:02.429106916 +0000 UTC m=+0.127869373 container init 837fe9678ea225d3078cee4542711f0ba60282441df344b82823150c8310060d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 30 23:20:02 np0005603435 podman[97184]: 2026-01-31 04:20:02.434145284 +0000 UTC m=+0.132907731 container start 837fe9678ea225d3078cee4542711f0ba60282441df344b82823150c8310060d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 30 23:20:02 np0005603435 podman[97184]: 2026-01-31 04:20:02.437857884 +0000 UTC m=+0.136620321 container attach 837fe9678ea225d3078cee4542711f0ba60282441df344b82823150c8310060d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_gates, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 30 23:20:02 np0005603435 jolly_gates[97200]: 167 167
Jan 30 23:20:02 np0005603435 systemd[1]: libpod-837fe9678ea225d3078cee4542711f0ba60282441df344b82823150c8310060d.scope: Deactivated successfully.
Jan 30 23:20:02 np0005603435 podman[97184]: 2026-01-31 04:20:02.4400145 +0000 UTC m=+0.138776947 container died 837fe9678ea225d3078cee4542711f0ba60282441df344b82823150c8310060d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 30 23:20:02 np0005603435 ansible-async_wrapper.py[95531]: Done in kid B.
Jan 30 23:20:02 np0005603435 systemd[1]: var-lib-containers-storage-overlay-85a8721920c74ebf0030e560d5d40bdca08e15c7b638828fb61ce8caba0b0f0b-merged.mount: Deactivated successfully.
Jan 30 23:20:02 np0005603435 podman[97184]: 2026-01-31 04:20:02.481141652 +0000 UTC m=+0.179904079 container remove 837fe9678ea225d3078cee4542711f0ba60282441df344b82823150c8310060d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_gates, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:20:02 np0005603435 systemd[1]: libpod-conmon-837fe9678ea225d3078cee4542711f0ba60282441df344b82823150c8310060d.scope: Deactivated successfully.
Jan 30 23:20:02 np0005603435 podman[97224]: 2026-01-31 04:20:02.643070634 +0000 UTC m=+0.058226770 container create 9d1da1dd1e3885a8e78d5990c21c400bb8527f31fc5c9b70fa00f9f6ddbe1778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_engelbart, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:20:02 np0005603435 systemd[1]: Started libpod-conmon-9d1da1dd1e3885a8e78d5990c21c400bb8527f31fc5c9b70fa00f9f6ddbe1778.scope.
Jan 30 23:20:02 np0005603435 podman[97224]: 2026-01-31 04:20:02.612681792 +0000 UTC m=+0.027837878 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:20:02 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:02 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b8e73055a0f4a1682e041e4cdfab24c74a9c42801c833b3ae76bdbd7af54a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:02 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b8e73055a0f4a1682e041e4cdfab24c74a9c42801c833b3ae76bdbd7af54a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:02 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b8e73055a0f4a1682e041e4cdfab24c74a9c42801c833b3ae76bdbd7af54a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:02 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b8e73055a0f4a1682e041e4cdfab24c74a9c42801c833b3ae76bdbd7af54a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:02 np0005603435 podman[97224]: 2026-01-31 04:20:02.737698153 +0000 UTC m=+0.152854189 container init 9d1da1dd1e3885a8e78d5990c21c400bb8527f31fc5c9b70fa00f9f6ddbe1778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_engelbart, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:20:02 np0005603435 podman[97224]: 2026-01-31 04:20:02.747300689 +0000 UTC m=+0.162456705 container start 9d1da1dd1e3885a8e78d5990c21c400bb8527f31fc5c9b70fa00f9f6ddbe1778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_engelbart, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:20:02 np0005603435 podman[97224]: 2026-01-31 04:20:02.750293913 +0000 UTC m=+0.165449929 container attach 9d1da1dd1e3885a8e78d5990c21c400bb8527f31fc5c9b70fa00f9f6ddbe1778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]: {
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:    "0": [
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:        {
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "devices": [
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "/dev/loop3"
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            ],
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "lv_name": "ceph_lv0",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "lv_size": "21470642176",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "name": "ceph_lv0",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "tags": {
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.cluster_name": "ceph",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.crush_device_class": "",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.encrypted": "0",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.objectstore": "bluestore",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.osd_id": "0",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.type": "block",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.vdo": "0",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.with_tpm": "0"
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            },
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "type": "block",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "vg_name": "ceph_vg0"
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:        }
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:    ],
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:    "1": [
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:        {
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "devices": [
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "/dev/loop4"
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            ],
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "lv_name": "ceph_lv1",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "lv_size": "21470642176",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "name": "ceph_lv1",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "tags": {
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.cluster_name": "ceph",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.crush_device_class": "",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.encrypted": "0",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.objectstore": "bluestore",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.osd_id": "1",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.type": "block",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.vdo": "0",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.with_tpm": "0"
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            },
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "type": "block",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "vg_name": "ceph_vg1"
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:        }
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:    ],
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:    "2": [
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:        {
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "devices": [
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "/dev/loop5"
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            ],
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "lv_name": "ceph_lv2",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "lv_size": "21470642176",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "name": "ceph_lv2",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "tags": {
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.cluster_name": "ceph",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.crush_device_class": "",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.encrypted": "0",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.objectstore": "bluestore",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.osd_id": "2",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.type": "block",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.vdo": "0",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:                "ceph.with_tpm": "0"
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            },
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "type": "block",
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:            "vg_name": "ceph_vg2"
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:        }
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]:    ]
Jan 30 23:20:03 np0005603435 hopeful_engelbart[97240]: }
Jan 30 23:20:03 np0005603435 systemd[1]: libpod-9d1da1dd1e3885a8e78d5990c21c400bb8527f31fc5c9b70fa00f9f6ddbe1778.scope: Deactivated successfully.
Jan 30 23:20:03 np0005603435 podman[97224]: 2026-01-31 04:20:03.032109176 +0000 UTC m=+0.447265182 container died 9d1da1dd1e3885a8e78d5990c21c400bb8527f31fc5c9b70fa00f9f6ddbe1778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 30 23:20:03 np0005603435 systemd[1]: var-lib-containers-storage-overlay-e6b8e73055a0f4a1682e041e4cdfab24c74a9c42801c833b3ae76bdbd7af54a0-merged.mount: Deactivated successfully.
Jan 30 23:20:03 np0005603435 podman[97224]: 2026-01-31 04:20:03.075479296 +0000 UTC m=+0.490635312 container remove 9d1da1dd1e3885a8e78d5990c21c400bb8527f31fc5c9b70fa00f9f6ddbe1778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_engelbart, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 30 23:20:03 np0005603435 systemd[1]: libpod-conmon-9d1da1dd1e3885a8e78d5990c21c400bb8527f31fc5c9b70fa00f9f6ddbe1778.scope: Deactivated successfully.
Jan 30 23:20:03 np0005603435 python3[97274]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:20:03 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 39 pg[10.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:03 np0005603435 podman[97309]: 2026-01-31 04:20:03.211324139 +0000 UTC m=+0.057270659 container create 4f49bac75ea21b4cc79f7fb5a08863b0df62d49092b3e508f2604418c427c93c (image=quay.io/ceph/ceph:v20, name=practical_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:20:03 np0005603435 systemd[1]: Started libpod-conmon-4f49bac75ea21b4cc79f7fb5a08863b0df62d49092b3e508f2604418c427c93c.scope.
Jan 30 23:20:03 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc980548e7093bfc1dff1ce8c6f7199a2cab6a0bd07491be38d154943557fdd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc980548e7093bfc1dff1ce8c6f7199a2cab6a0bd07491be38d154943557fdd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:03 np0005603435 podman[97309]: 2026-01-31 04:20:03.177257359 +0000 UTC m=+0.023203909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:20:03 np0005603435 podman[97309]: 2026-01-31 04:20:03.291958748 +0000 UTC m=+0.137905318 container init 4f49bac75ea21b4cc79f7fb5a08863b0df62d49092b3e508f2604418c427c93c (image=quay.io/ceph/ceph:v20, name=practical_robinson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 30 23:20:03 np0005603435 podman[97309]: 2026-01-31 04:20:03.301058974 +0000 UTC m=+0.147005534 container start 4f49bac75ea21b4cc79f7fb5a08863b0df62d49092b3e508f2604418c427c93c (image=quay.io/ceph/ceph:v20, name=practical_robinson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 30 23:20:03 np0005603435 podman[97309]: 2026-01-31 04:20:03.309441243 +0000 UTC m=+0.155387773 container attach 4f49bac75ea21b4cc79f7fb5a08863b0df62d49092b3e508f2604418c427c93c (image=quay.io/ceph/ceph:v20, name=practical_robinson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 30 23:20:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 30 23:20:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3948454279' entity='client.rgw.rgw.compute-0.zvcgqa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 30 23:20:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 30 23:20:03 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 30 23:20:03 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 40 pg[10.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:03 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/3948454279' entity='client.rgw.rgw.compute-0.zvcgqa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 30 23:20:03 np0005603435 podman[97386]: 2026-01-31 04:20:03.483732401 +0000 UTC m=+0.054671204 container create 32ecf26157fc8eb41094d4ed21fc34b650b8f316c33cf0c9d5d87604749ef6d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 30 23:20:03 np0005603435 systemd[1]: Started libpod-conmon-32ecf26157fc8eb41094d4ed21fc34b650b8f316c33cf0c9d5d87604749ef6d3.scope.
Jan 30 23:20:03 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:03 np0005603435 podman[97386]: 2026-01-31 04:20:03.539380434 +0000 UTC m=+0.110319267 container init 32ecf26157fc8eb41094d4ed21fc34b650b8f316c33cf0c9d5d87604749ef6d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 30 23:20:03 np0005603435 podman[97386]: 2026-01-31 04:20:03.543946202 +0000 UTC m=+0.114885005 container start 32ecf26157fc8eb41094d4ed21fc34b650b8f316c33cf0c9d5d87604749ef6d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shtern, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 30 23:20:03 np0005603435 awesome_shtern[97404]: 167 167
Jan 30 23:20:03 np0005603435 systemd[1]: libpod-32ecf26157fc8eb41094d4ed21fc34b650b8f316c33cf0c9d5d87604749ef6d3.scope: Deactivated successfully.
Jan 30 23:20:03 np0005603435 conmon[97404]: conmon 32ecf26157fc8eb41094 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32ecf26157fc8eb41094d4ed21fc34b650b8f316c33cf0c9d5d87604749ef6d3.scope/container/memory.events
Jan 30 23:20:03 np0005603435 podman[97386]: 2026-01-31 04:20:03.547795494 +0000 UTC m=+0.118734307 container attach 32ecf26157fc8eb41094d4ed21fc34b650b8f316c33cf0c9d5d87604749ef6d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:20:03 np0005603435 podman[97386]: 2026-01-31 04:20:03.54807639 +0000 UTC m=+0.119015183 container died 32ecf26157fc8eb41094d4ed21fc34b650b8f316c33cf0c9d5d87604749ef6d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:20:03 np0005603435 podman[97386]: 2026-01-31 04:20:03.463850704 +0000 UTC m=+0.034789507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:20:03 np0005603435 systemd[1]: var-lib-containers-storage-overlay-6f8433c726abe0ae8a6ece7bf679027b2da0bb32efc8026889a24c8856757e37-merged.mount: Deactivated successfully.
Jan 30 23:20:03 np0005603435 podman[97386]: 2026-01-31 04:20:03.588776063 +0000 UTC m=+0.159714826 container remove 32ecf26157fc8eb41094d4ed21fc34b650b8f316c33cf0c9d5d87604749ef6d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shtern, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 30 23:20:03 np0005603435 systemd[1]: libpod-conmon-32ecf26157fc8eb41094d4ed21fc34b650b8f316c33cf0c9d5d87604749ef6d3.scope: Deactivated successfully.
Jan 30 23:20:03 np0005603435 podman[97429]: 2026-01-31 04:20:03.695627703 +0000 UTC m=+0.038781942 container create 54a340d3f962995e51c3e50224d60045d4934e925e5b6d6d7dc2868513205bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_perlman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 30 23:20:03 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 30 23:20:03 np0005603435 practical_robinson[97352]: 
Jan 30 23:20:03 np0005603435 practical_robinson[97352]: [{"container_id": "7b5bf0c3978a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.21%", "created": "2026-01-31T04:18:28.841435Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-31T04:18:28.902642Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T04:20:00.549294Z", "memory_usage": 7799308, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-01-31T04:18:28.721414Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed@crash.compute-0", "version": "20.2.0"}, {"container_id": "0802fb2aceef", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "9.31%", "created": "2026-01-31T04:19:59.117478Z", "daemon_id": "cephfs.compute-0.xaqauc", "daemon_name": "mds.cephfs.compute-0.xaqauc", "daemon_type": "mds", "events": ["2026-01-31T04:19:59.183658Z daemon:mds.cephfs.compute-0.xaqauc [INFO] \"Deployed mds.cephfs.compute-0.xaqauc on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2026-01-31T04:20:00.550348Z", "memory_usage": 15466496, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-01-31T04:19:59.011260Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed@mds.cephfs.compute-0.xaqauc", "version": "20.2.0"}, {"container_id": "2145ef748f54", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "15.17%", "created": "2026-01-31T04:17:50.170501Z", "daemon_id": "compute-0.wyngmr", "daemon_name": "mgr.compute-0.wyngmr", "daemon_type": "mgr", "events": ["2026-01-31T04:18:33.616349Z daemon:mgr.compute-0.wyngmr [INFO] \"Reconfigured mgr.compute-0.wyngmr on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T04:20:00.549107Z", "memory_usage": 547776102, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-31T04:17:50.055967Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed@mgr.compute-0.wyngmr", "version": "20.2.0"}, {"container_id": "01d4b3ad3ca9", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.87%", "created": "2026-01-31T04:17:45.515387Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-31T04:18:33.003912Z daemon:mon.compute-0 [INFO] 
\"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T04:20:00.548908Z", "memory_request": 2147483648, "memory_usage": 40894464, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-01-31T04:17:48.250194Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed@mon.compute-0", "version": "20.2.0"}, {"container_id": "e15c14069482", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.63%", "created": "2026-01-31T04:18:53.197007Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-31T04:18:53.274668Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T04:20:00.549489Z", "memory_request": 4294967296, "memory_usage": 57787023, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T04:18:53.061384Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed@osd.0", "version": "20.2.0"}, {"container_id": "de3d845254e3", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "1.88%", "created": "2026-01-31T04:18:58.010120Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-31T04:18:58.702025Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T04:20:00.549672Z", "memory_request": 4294967296, "memory_usage": 57912852, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T04:18:57.740804Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed@osd.1", "version": "20.2.0"}, {"container_id": "40bfddc06ce8", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.71%", "created": "2026-01-31T04:19:03.478249Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-31T04:19:03.738909Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T04:20:00.549878Z", "memory_request": 4294967296, "memory_usage": 57514393, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T04:19:03.264327Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed@osd.2", "version": "20.2.0"}, {"container_id": "0162f99c3110", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], 
"container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac68
Jan 30 23:20:03 np0005603435 systemd[1]: Started libpod-conmon-54a340d3f962995e51c3e50224d60045d4934e925e5b6d6d7dc2868513205bd6.scope.
Jan 30 23:20:03 np0005603435 podman[97309]: 2026-01-31 04:20:03.734490027 +0000 UTC m=+0.580436547 container died 4f49bac75ea21b4cc79f7fb5a08863b0df62d49092b3e508f2604418c427c93c (image=quay.io/ceph/ceph:v20, name=practical_robinson, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:20:03 np0005603435 systemd[1]: libpod-4f49bac75ea21b4cc79f7fb5a08863b0df62d49092b3e508f2604418c427c93c.scope: Deactivated successfully.
Jan 30 23:20:03 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d056ab1ed01fa6394e2c8acb9eb67bcfc3f49c4dd6bb7493851698688defad70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d056ab1ed01fa6394e2c8acb9eb67bcfc3f49c4dd6bb7493851698688defad70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d056ab1ed01fa6394e2c8acb9eb67bcfc3f49c4dd6bb7493851698688defad70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d056ab1ed01fa6394e2c8acb9eb67bcfc3f49c4dd6bb7493851698688defad70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:03 np0005603435 podman[97429]: 2026-01-31 04:20:03.679553809 +0000 UTC m=+0.022708068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:20:03 np0005603435 rsyslogd[1007]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "7b5bf0c3978a", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 30 23:20:03 np0005603435 podman[97429]: 2026-01-31 04:20:03.813497971 +0000 UTC m=+0.156652220 container init 54a340d3f962995e51c3e50224d60045d4934e925e5b6d6d7dc2868513205bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_perlman, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 30 23:20:03 np0005603435 podman[97429]: 2026-01-31 04:20:03.818927277 +0000 UTC m=+0.162081516 container start 54a340d3f962995e51c3e50224d60045d4934e925e5b6d6d7dc2868513205bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 30 23:20:03 np0005603435 podman[97309]: 2026-01-31 04:20:03.823279751 +0000 UTC m=+0.669226281 container remove 4f49bac75ea21b4cc79f7fb5a08863b0df62d49092b3e508f2604418c427c93c (image=quay.io/ceph/ceph:v20, name=practical_robinson, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 30 23:20:03 np0005603435 systemd[1]: libpod-conmon-4f49bac75ea21b4cc79f7fb5a08863b0df62d49092b3e508f2604418c427c93c.scope: Deactivated successfully.
Jan 30 23:20:03 np0005603435 podman[97429]: 2026-01-31 04:20:03.834037511 +0000 UTC m=+0.177191770 container attach 54a340d3f962995e51c3e50224d60045d4934e925e5b6d6d7dc2868513205bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_perlman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 30 23:20:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:20:04 np0005603435 systemd[1]: var-lib-containers-storage-overlay-8cc980548e7093bfc1dff1ce8c6f7199a2cab6a0bd07491be38d154943557fdd-merged.mount: Deactivated successfully.
Jan 30 23:20:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v90: 10 pgs: 1 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 30 23:20:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 30 23:20:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 30 23:20:04 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 30 23:20:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 30 23:20:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3948454279' entity='client.rgw.rgw.compute-0.zvcgqa' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 30 23:20:04 np0005603435 ceph-mds[95922]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 30 23:20:04 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mds-cephfs-compute-0-xaqauc[95918]: 2026-01-31T04:20:04.412+0000 7fa0e241b640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 30 23:20:04 np0005603435 lvm[97536]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:20:04 np0005603435 lvm[97536]: VG ceph_vg0 finished
Jan 30 23:20:04 np0005603435 lvm[97537]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:20:04 np0005603435 lvm[97537]: VG ceph_vg1 finished
Jan 30 23:20:04 np0005603435 lvm[97539]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:20:04 np0005603435 lvm[97539]: VG ceph_vg2 finished
Jan 30 23:20:04 np0005603435 lvm[97540]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:20:04 np0005603435 lvm[97540]: VG ceph_vg0 finished
Jan 30 23:20:04 np0005603435 exciting_perlman[97447]: {}
Jan 30 23:20:04 np0005603435 systemd[1]: libpod-54a340d3f962995e51c3e50224d60045d4934e925e5b6d6d7dc2868513205bd6.scope: Deactivated successfully.
Jan 30 23:20:04 np0005603435 systemd[1]: libpod-54a340d3f962995e51c3e50224d60045d4934e925e5b6d6d7dc2868513205bd6.scope: Consumed 1.086s CPU time.
Jan 30 23:20:04 np0005603435 podman[97429]: 2026-01-31 04:20:04.577666657 +0000 UTC m=+0.920820896 container died 54a340d3f962995e51c3e50224d60045d4934e925e5b6d6d7dc2868513205bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_perlman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 30 23:20:04 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d056ab1ed01fa6394e2c8acb9eb67bcfc3f49c4dd6bb7493851698688defad70-merged.mount: Deactivated successfully.
Jan 30 23:20:04 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 41 pg[11.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:04 np0005603435 podman[97429]: 2026-01-31 04:20:04.632142316 +0000 UTC m=+0.975296595 container remove 54a340d3f962995e51c3e50224d60045d4934e925e5b6d6d7dc2868513205bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_perlman, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:20:04 np0005603435 systemd[1]: libpod-conmon-54a340d3f962995e51c3e50224d60045d4934e925e5b6d6d7dc2868513205bd6.scope: Deactivated successfully.
Jan 30 23:20:04 np0005603435 python3[97568]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:20:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:20:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:20:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:04 np0005603435 podman[97581]: 2026-01-31 04:20:04.765557176 +0000 UTC m=+0.053722753 container create 5644c1fd9a3691409016a8cea61de3d45c791507e9b3d8c64bfbfc4b1613987c (image=quay.io/ceph/ceph:v20, name=affectionate_newton, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:20:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:20:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:20:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:04 np0005603435 systemd[1]: Started libpod-conmon-5644c1fd9a3691409016a8cea61de3d45c791507e9b3d8c64bfbfc4b1613987c.scope.
Jan 30 23:20:04 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:04 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341774048a3075e7fbfacbad0f1b11933a7ef380fd64a822fa12e682e5b132df/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:04 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341774048a3075e7fbfacbad0f1b11933a7ef380fd64a822fa12e682e5b132df/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:04 np0005603435 podman[97581]: 2026-01-31 04:20:04.748833148 +0000 UTC m=+0.036998735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:20:04 np0005603435 podman[97581]: 2026-01-31 04:20:04.845869789 +0000 UTC m=+0.134035446 container init 5644c1fd9a3691409016a8cea61de3d45c791507e9b3d8c64bfbfc4b1613987c (image=quay.io/ceph/ceph:v20, name=affectionate_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 30 23:20:04 np0005603435 podman[97581]: 2026-01-31 04:20:04.850958388 +0000 UTC m=+0.139123965 container start 5644c1fd9a3691409016a8cea61de3d45c791507e9b3d8c64bfbfc4b1613987c (image=quay.io/ceph/ceph:v20, name=affectionate_newton, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:20:04 np0005603435 podman[97581]: 2026-01-31 04:20:04.856474626 +0000 UTC m=+0.144640223 container attach 5644c1fd9a3691409016a8cea61de3d45c791507e9b3d8c64bfbfc4b1613987c (image=quay.io/ceph/ceph:v20, name=affectionate_newton, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 30 23:20:05 np0005603435 podman[97740]: 2026-01-31 04:20:05.32768026 +0000 UTC m=+0.067948998 container exec 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:20:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 30 23:20:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2839607958' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 30 23:20:05 np0005603435 affectionate_newton[97622]: 
Jan 30 23:20:05 np0005603435 affectionate_newton[97622]: {"fsid":"95d2f419-0dd0-56f2-a094-353f8c7597ed","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":136,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":41,"num_osds":3,"num_up_osds":3,"osd_up_since":1769833159,"num_in_osds":3,"osd_in_since":1769833124,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":9},{"state_name":"unknown","count":1}],"num_pgs":10,"num_pools":10,"num_objects":30,"data_bytes":463390,"bytes_used":84107264,"bytes_avail":64327819264,"bytes_total":64411926528,"unknown_pgs_ratio":0.10000000149011612,"read_bytes_sec":1279,"write_bytes_sec":5374,"read_op_per_sec":0,"write_op_per_sec":13},"fsmap":{"epoch":5,"btime":"2026-01-31T04:20:00:404475+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.xaqauc","status":"up:active","gid":14253}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":3,"modified":"2026-01-31T04:20:02.276810+0000","services":{"mds":{"daemons":{"summary":"","cephfs.compute-0.xaqauc":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 30 23:20:05 np0005603435 systemd[1]: libpod-5644c1fd9a3691409016a8cea61de3d45c791507e9b3d8c64bfbfc4b1613987c.scope: Deactivated successfully.
Jan 30 23:20:05 np0005603435 podman[97581]: 2026-01-31 04:20:05.380309829 +0000 UTC m=+0.668475426 container died 5644c1fd9a3691409016a8cea61de3d45c791507e9b3d8c64bfbfc4b1613987c (image=quay.io/ceph/ceph:v20, name=affectionate_newton, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 30 23:20:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 30 23:20:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3948454279' entity='client.rgw.rgw.compute-0.zvcgqa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 30 23:20:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 30 23:20:05 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 30 23:20:05 np0005603435 systemd[1]: var-lib-containers-storage-overlay-341774048a3075e7fbfacbad0f1b11933a7ef380fd64a822fa12e682e5b132df-merged.mount: Deactivated successfully.
Jan 30 23:20:05 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/3948454279' entity='client.rgw.rgw.compute-0.zvcgqa' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 30 23:20:05 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:05 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:05 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:05 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:05 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 42 pg[11.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 30 23:20:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3948454279' entity='client.rgw.rgw.compute-0.zvcgqa' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 30 23:20:05 np0005603435 podman[97581]: 2026-01-31 04:20:05.442672086 +0000 UTC m=+0.730837693 container remove 5644c1fd9a3691409016a8cea61de3d45c791507e9b3d8c64bfbfc4b1613987c (image=quay.io/ceph/ceph:v20, name=affectionate_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 30 23:20:05 np0005603435 podman[97740]: 2026-01-31 04:20:05.456676986 +0000 UTC m=+0.196945704 container exec_died 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:20:05 np0005603435 systemd[1]: libpod-conmon-5644c1fd9a3691409016a8cea61de3d45c791507e9b3d8c64bfbfc4b1613987c.scope: Deactivated successfully.
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:20:06
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Some PGs (0.090909) are unknown; try again later
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v93: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:20:06 np0005603435 python3[97970]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:20:06 np0005603435 podman[98014]: 2026-01-31 04:20:06.379895483 +0000 UTC m=+0.049857820 container create 5338fb459fc7c2b91c1c6a437f8cbde3313d3104748bbf8d84d8601c15bd9d9b (image=quay.io/ceph/ceph:v20, name=zen_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:20:06 np0005603435 systemd[1]: Started libpod-conmon-5338fb459fc7c2b91c1c6a437f8cbde3313d3104748bbf8d84d8601c15bd9d9b.scope.
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3948454279' entity='client.rgw.rgw.compute-0.zvcgqa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/3948454279' entity='client.rgw.rgw.compute-0.zvcgqa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/3948454279' entity='client.rgw.rgw.compute-0.zvcgqa' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:20:06 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:06 np0005603435 podman[98014]: 2026-01-31 04:20:06.363468951 +0000 UTC m=+0.033431318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:20:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c8aae549fd2b5238893aa21ac02e4379dda976dc4778d51f37ec960023396d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c8aae549fd2b5238893aa21ac02e4379dda976dc4778d51f37ec960023396d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:06 np0005603435 podman[98014]: 2026-01-31 04:20:06.480600303 +0000 UTC m=+0.150562700 container init 5338fb459fc7c2b91c1c6a437f8cbde3313d3104748bbf8d84d8601c15bd9d9b (image=quay.io/ceph/ceph:v20, name=zen_wilson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 30 23:20:06 np0005603435 podman[98014]: 2026-01-31 04:20:06.488731677 +0000 UTC m=+0.158694034 container start 5338fb459fc7c2b91c1c6a437f8cbde3313d3104748bbf8d84d8601c15bd9d9b (image=quay.io/ceph/ceph:v20, name=zen_wilson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 30 23:20:06 np0005603435 podman[98014]: 2026-01-31 04:20:06.497930475 +0000 UTC m=+0.167892872 container attach 5338fb459fc7c2b91c1c6a437f8cbde3313d3104748bbf8d84d8601c15bd9d9b (image=quay.io/ceph/ceph:v20, name=zen_wilson, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:20:06 np0005603435 podman[98047]: 2026-01-31 04:20:06.592284418 +0000 UTC m=+0.034750826 container create 3f935150936457649550f6a1ae124ba36eaf3b3f385de10693c64406759ba721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_blackburn, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:20:06 np0005603435 systemd[1]: Started libpod-conmon-3f935150936457649550f6a1ae124ba36eaf3b3f385de10693c64406759ba721.scope.
Jan 30 23:20:06 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:06 np0005603435 podman[98047]: 2026-01-31 04:20:06.656701339 +0000 UTC m=+0.099167767 container init 3f935150936457649550f6a1ae124ba36eaf3b3f385de10693c64406759ba721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:20:06 np0005603435 podman[98047]: 2026-01-31 04:20:06.66093053 +0000 UTC m=+0.103396938 container start 3f935150936457649550f6a1ae124ba36eaf3b3f385de10693c64406759ba721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_blackburn, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:20:06 np0005603435 upbeat_blackburn[98100]: 167 167
Jan 30 23:20:06 np0005603435 systemd[1]: libpod-3f935150936457649550f6a1ae124ba36eaf3b3f385de10693c64406759ba721.scope: Deactivated successfully.
Jan 30 23:20:06 np0005603435 conmon[98100]: conmon 3f935150936457649550 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f935150936457649550f6a1ae124ba36eaf3b3f385de10693c64406759ba721.scope/container/memory.events
Jan 30 23:20:06 np0005603435 podman[98047]: 2026-01-31 04:20:06.668202846 +0000 UTC m=+0.110669284 container attach 3f935150936457649550f6a1ae124ba36eaf3b3f385de10693c64406759ba721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_blackburn, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 30 23:20:06 np0005603435 podman[98047]: 2026-01-31 04:20:06.668587614 +0000 UTC m=+0.111054022 container died 3f935150936457649550f6a1ae124ba36eaf3b3f385de10693c64406759ba721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_blackburn, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 30 23:20:06 np0005603435 podman[98047]: 2026-01-31 04:20:06.577237545 +0000 UTC m=+0.019703983 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:20:06 np0005603435 systemd[1]: var-lib-containers-storage-overlay-873fc00d8c0c1a2aeb3704856ef988db04a71bf10209224df385fcbae34f51d8-merged.mount: Deactivated successfully.
Jan 30 23:20:06 np0005603435 podman[98047]: 2026-01-31 04:20:06.711242229 +0000 UTC m=+0.153708637 container remove 3f935150936457649550f6a1ae124ba36eaf3b3f385de10693c64406759ba721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:20:06 np0005603435 systemd[1]: libpod-conmon-3f935150936457649550f6a1ae124ba36eaf3b3f385de10693c64406759ba721.scope: Deactivated successfully.
Jan 30 23:20:06 np0005603435 radosgw[95468]: v1 topic migration: starting v1 topic migration..
Jan 30 23:20:06 np0005603435 radosgw[95468]: v1 topic migration: finished v1 topic migration
Jan 30 23:20:06 np0005603435 radosgw[95468]: framework: beast
Jan 30 23:20:06 np0005603435 radosgw[95468]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 30 23:20:06 np0005603435 radosgw[95468]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 30 23:20:06 np0005603435 radosgw[95468]: starting handler: beast
Jan 30 23:20:06 np0005603435 radosgw[95468]: set uid:gid to 167:167 (ceph:ceph)
Jan 30 23:20:06 np0005603435 radosgw[95468]: mgrc service_daemon_register rgw.14256 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.zvcgqa,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864292,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=68a9d704-30fd-46d0-bac7-c1515a6c72e4,zone_name=default,zonegroup_id=7e4da2fe-c1eb-4022-84ab-9d06b83b378c,zonegroup_name=default}
Jan 30 23:20:06 np0005603435 podman[98140]: 2026-01-31 04:20:06.849893372 +0000 UTC m=+0.042273618 container create 33b096245af09be72c38a47a0c2d88729aa185edcb3270a845ac6f24a6b6eb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_torvalds, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:20:06 np0005603435 systemd[1]: Started libpod-conmon-33b096245af09be72c38a47a0c2d88729aa185edcb3270a845ac6f24a6b6eb87.scope.
Jan 30 23:20:06 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/184f03f61f6bbc4accdacaf5574f7cae8b1c2c54ef7497570055fb8b01c9c8dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/184f03f61f6bbc4accdacaf5574f7cae8b1c2c54ef7497570055fb8b01c9c8dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/184f03f61f6bbc4accdacaf5574f7cae8b1c2c54ef7497570055fb8b01c9c8dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/184f03f61f6bbc4accdacaf5574f7cae8b1c2c54ef7497570055fb8b01c9c8dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/184f03f61f6bbc4accdacaf5574f7cae8b1c2c54ef7497570055fb8b01c9c8dc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3168133626' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 30 23:20:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:06 np0005603435 zen_wilson[98029]: 
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.94747113029558e-07 of space, bias 4.0, pg target 0.0008336965356354696 quantized to 16 (current 1)
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:20:06 np0005603435 podman[98140]: 2026-01-31 04:20:06.922275184 +0000 UTC m=+0.114655430 container init 33b096245af09be72c38a47a0c2d88729aa185edcb3270a845ac6f24a6b6eb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 30 23:20:06 np0005603435 zen_wilson[98029]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advance
d","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.zvcgqa","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 30 23:20:06 np0005603435 systemd[1]: libpod-5338fb459fc7c2b91c1c6a437f8cbde3313d3104748bbf8d84d8601c15bd9d9b.scope: Deactivated successfully.
Jan 30 23:20:06 np0005603435 podman[98140]: 2026-01-31 04:20:06.929768475 +0000 UTC m=+0.122148721 container start 33b096245af09be72c38a47a0c2d88729aa185edcb3270a845ac6f24a6b6eb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 30 23:20:06 np0005603435 podman[98140]: 2026-01-31 04:20:06.835288199 +0000 UTC m=+0.027668465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:20:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:20:06 np0005603435 podman[98140]: 2026-01-31 04:20:06.936816526 +0000 UTC m=+0.129196792 container attach 33b096245af09be72c38a47a0c2d88729aa185edcb3270a845ac6f24a6b6eb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_torvalds, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 30 23:20:06 np0005603435 podman[98165]: 2026-01-31 04:20:06.975883533 +0000 UTC m=+0.031741861 container died 5338fb459fc7c2b91c1c6a437f8cbde3313d3104748bbf8d84d8601c15bd9d9b (image=quay.io/ceph/ceph:v20, name=zen_wilson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:20:07 np0005603435 systemd[1]: var-lib-containers-storage-overlay-80c8aae549fd2b5238893aa21ac02e4379dda976dc4778d51f37ec960023396d-merged.mount: Deactivated successfully.
Jan 30 23:20:07 np0005603435 podman[98165]: 2026-01-31 04:20:07.035679276 +0000 UTC m=+0.091537554 container remove 5338fb459fc7c2b91c1c6a437f8cbde3313d3104748bbf8d84d8601c15bd9d9b (image=quay.io/ceph/ceph:v20, name=zen_wilson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 30 23:20:07 np0005603435 systemd[1]: libpod-conmon-5338fb459fc7c2b91c1c6a437f8cbde3313d3104748bbf8d84d8601c15bd9d9b.scope: Deactivated successfully.
Jan 30 23:20:07 np0005603435 ecstatic_torvalds[98158]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:20:07 np0005603435 ecstatic_torvalds[98158]: --> All data devices are unavailable
Jan 30 23:20:07 np0005603435 systemd[1]: libpod-33b096245af09be72c38a47a0c2d88729aa185edcb3270a845ac6f24a6b6eb87.scope: Deactivated successfully.
Jan 30 23:20:07 np0005603435 conmon[98158]: conmon 33b096245af09be72c38 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33b096245af09be72c38a47a0c2d88729aa185edcb3270a845ac6f24a6b6eb87.scope/container/memory.events
Jan 30 23:20:07 np0005603435 podman[98140]: 2026-01-31 04:20:07.362972093 +0000 UTC m=+0.555352349 container died 33b096245af09be72c38a47a0c2d88729aa185edcb3270a845ac6f24a6b6eb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_torvalds, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 30 23:20:07 np0005603435 systemd[1]: var-lib-containers-storage-overlay-184f03f61f6bbc4accdacaf5574f7cae8b1c2c54ef7497570055fb8b01c9c8dc-merged.mount: Deactivated successfully.
Jan 30 23:20:07 np0005603435 podman[98140]: 2026-01-31 04:20:07.416567642 +0000 UTC m=+0.608947908 container remove 33b096245af09be72c38a47a0c2d88729aa185edcb3270a845ac6f24a6b6eb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:20:07 np0005603435 systemd[1]: libpod-conmon-33b096245af09be72c38a47a0c2d88729aa185edcb3270a845ac6f24a6b6eb87.scope: Deactivated successfully.
Jan 30 23:20:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 30 23:20:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 30 23:20:07 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 30 23:20:07 np0005603435 ceph-mgr[75599]: [progress INFO root] update: starting ev 66177c99-2b89-41d1-850f-e0123af5295e (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 30 23:20:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 30 23:20:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:07 np0005603435 ceph-mon[75307]: from='client.? 192.168.122.100:0/3948454279' entity='client.rgw.rgw.compute-0.zvcgqa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 30 23:20:07 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:07 np0005603435 podman[98271]: 2026-01-31 04:20:07.79794297 +0000 UTC m=+0.041916420 container create 4254c7e55a999b6cd48ff3e982b17e51dd535f4ecbfcba74f37dd45bc0c1309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_thompson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:20:07 np0005603435 systemd[1]: Started libpod-conmon-4254c7e55a999b6cd48ff3e982b17e51dd535f4ecbfcba74f37dd45bc0c1309f.scope.
Jan 30 23:20:07 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:07 np0005603435 podman[98271]: 2026-01-31 04:20:07.856583968 +0000 UTC m=+0.100557438 container init 4254c7e55a999b6cd48ff3e982b17e51dd535f4ecbfcba74f37dd45bc0c1309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_thompson, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 30 23:20:07 np0005603435 podman[98271]: 2026-01-31 04:20:07.862149697 +0000 UTC m=+0.106123147 container start 4254c7e55a999b6cd48ff3e982b17e51dd535f4ecbfcba74f37dd45bc0c1309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_thompson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:20:07 np0005603435 zealous_thompson[98313]: 167 167
Jan 30 23:20:07 np0005603435 systemd[1]: libpod-4254c7e55a999b6cd48ff3e982b17e51dd535f4ecbfcba74f37dd45bc0c1309f.scope: Deactivated successfully.
Jan 30 23:20:07 np0005603435 podman[98271]: 2026-01-31 04:20:07.866841138 +0000 UTC m=+0.110814598 container attach 4254c7e55a999b6cd48ff3e982b17e51dd535f4ecbfcba74f37dd45bc0c1309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_thompson, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:20:07 np0005603435 podman[98271]: 2026-01-31 04:20:07.86785979 +0000 UTC m=+0.111833240 container died 4254c7e55a999b6cd48ff3e982b17e51dd535f4ecbfcba74f37dd45bc0c1309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_thompson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 30 23:20:07 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 30 23:20:07 np0005603435 podman[98271]: 2026-01-31 04:20:07.775920758 +0000 UTC m=+0.019894228 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:20:07 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5d9239abc6a30f9a820595978ae70cc7e322d1db629d00377fd19ef28aec9823-merged.mount: Deactivated successfully.
Jan 30 23:20:07 np0005603435 python3[98310]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:20:08 np0005603435 podman[98271]: 2026-01-31 04:20:08.026261456 +0000 UTC m=+0.270234936 container remove 4254c7e55a999b6cd48ff3e982b17e51dd535f4ecbfcba74f37dd45bc0c1309f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_thompson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 30 23:20:08 np0005603435 systemd[1]: libpod-conmon-4254c7e55a999b6cd48ff3e982b17e51dd535f4ecbfcba74f37dd45bc0c1309f.scope: Deactivated successfully.
Jan 30 23:20:08 np0005603435 podman[98331]: 2026-01-31 04:20:08.07307843 +0000 UTC m=+0.128338593 container create 53e544da9863f04a76f2a8e38d2169b7f616ccd02f5c90ecb1e353ae1be32db0 (image=quay.io/ceph/ceph:v20, name=suspicious_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 30 23:20:08 np0005603435 systemd[1]: Started libpod-conmon-53e544da9863f04a76f2a8e38d2169b7f616ccd02f5c90ecb1e353ae1be32db0.scope.
Jan 30 23:20:08 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:08 np0005603435 podman[98331]: 2026-01-31 04:20:08.045357676 +0000 UTC m=+0.100617919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:20:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0d77af1f31b06eb4b7a1efd75b35a5d73597b49bedea6c691c29682661f7d6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0d77af1f31b06eb4b7a1efd75b35a5d73597b49bedea6c691c29682661f7d6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:08 np0005603435 podman[98331]: 2026-01-31 04:20:08.157044271 +0000 UTC m=+0.212304454 container init 53e544da9863f04a76f2a8e38d2169b7f616ccd02f5c90ecb1e353ae1be32db0 (image=quay.io/ceph/ceph:v20, name=suspicious_johnson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 30 23:20:08 np0005603435 podman[98331]: 2026-01-31 04:20:08.162120209 +0000 UTC m=+0.217380402 container start 53e544da9863f04a76f2a8e38d2169b7f616ccd02f5c90ecb1e353ae1be32db0 (image=quay.io/ceph/ceph:v20, name=suspicious_johnson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:20:08 np0005603435 podman[98331]: 2026-01-31 04:20:08.178629904 +0000 UTC m=+0.233890097 container attach 53e544da9863f04a76f2a8e38d2169b7f616ccd02f5c90ecb1e353ae1be32db0 (image=quay.io/ceph/ceph:v20, name=suspicious_johnson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:20:08 np0005603435 podman[98357]: 2026-01-31 04:20:08.193504343 +0000 UTC m=+0.051005455 container create 5f8f4b4e621acc50b1f57591795b7dae6458db14d4e4391623cbb9a124b34a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_raman, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Jan 30 23:20:08 np0005603435 systemd[1]: Started libpod-conmon-5f8f4b4e621acc50b1f57591795b7dae6458db14d4e4391623cbb9a124b34a1c.scope.
Jan 30 23:20:08 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96883ff195a04c51c250549db06b791d3f7efad184f52a0b5720b35cd1b58ed7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96883ff195a04c51c250549db06b791d3f7efad184f52a0b5720b35cd1b58ed7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96883ff195a04c51c250549db06b791d3f7efad184f52a0b5720b35cd1b58ed7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96883ff195a04c51c250549db06b791d3f7efad184f52a0b5720b35cd1b58ed7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:08 np0005603435 podman[98357]: 2026-01-31 04:20:08.165694966 +0000 UTC m=+0.023196128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:20:08 np0005603435 podman[98357]: 2026-01-31 04:20:08.273673562 +0000 UTC m=+0.131174674 container init 5f8f4b4e621acc50b1f57591795b7dae6458db14d4e4391623cbb9a124b34a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Jan 30 23:20:08 np0005603435 podman[98357]: 2026-01-31 04:20:08.278851913 +0000 UTC m=+0.136353025 container start 5f8f4b4e621acc50b1f57591795b7dae6458db14d4e4391623cbb9a124b34a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_raman, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 30 23:20:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v96: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 209 KiB/s rd, 16 KiB/s wr, 463 op/s
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:08 np0005603435 podman[98357]: 2026-01-31 04:20:08.283470282 +0000 UTC m=+0.140971464 container attach 5f8f4b4e621acc50b1f57591795b7dae6458db14d4e4391623cbb9a124b34a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_raman, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 30 23:20:08 np0005603435 ceph-mgr[75599]: [progress INFO root] update: starting ev 47e60d41-e1db-43e1-89b6-9fc5dd1be350 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:08 np0005603435 condescending_raman[98375]: {
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:    "0": [
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:        {
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "devices": [
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "/dev/loop3"
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            ],
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "lv_name": "ceph_lv0",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "lv_size": "21470642176",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "name": "ceph_lv0",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "tags": {
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.cluster_name": "ceph",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.crush_device_class": "",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.encrypted": "0",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.objectstore": "bluestore",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.osd_id": "0",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.type": "block",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.vdo": "0",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.with_tpm": "0"
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            },
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "type": "block",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "vg_name": "ceph_vg0"
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:        }
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:    ],
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:    "1": [
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:        {
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "devices": [
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "/dev/loop4"
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            ],
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "lv_name": "ceph_lv1",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "lv_size": "21470642176",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "name": "ceph_lv1",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "tags": {
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.cluster_name": "ceph",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.crush_device_class": "",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.encrypted": "0",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.objectstore": "bluestore",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.osd_id": "1",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.type": "block",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.vdo": "0",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.with_tpm": "0"
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            },
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "type": "block",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "vg_name": "ceph_vg1"
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:        }
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:    ],
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:    "2": [
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:        {
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "devices": [
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "/dev/loop5"
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            ],
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "lv_name": "ceph_lv2",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "lv_size": "21470642176",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "name": "ceph_lv2",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "tags": {
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.cluster_name": "ceph",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.crush_device_class": "",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.encrypted": "0",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.objectstore": "bluestore",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.osd_id": "2",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.type": "block",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.vdo": "0",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:                "ceph.with_tpm": "0"
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            },
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "type": "block",
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:            "vg_name": "ceph_vg2"
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:        }
Jan 30 23:20:08 np0005603435 condescending_raman[98375]:    ]
Jan 30 23:20:08 np0005603435 condescending_raman[98375]: }
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 30 23:20:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1143579062' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 30 23:20:08 np0005603435 suspicious_johnson[98349]: mimic
Jan 30 23:20:08 np0005603435 systemd[1]: libpod-53e544da9863f04a76f2a8e38d2169b7f616ccd02f5c90ecb1e353ae1be32db0.scope: Deactivated successfully.
Jan 30 23:20:08 np0005603435 podman[98331]: 2026-01-31 04:20:08.555035295 +0000 UTC m=+0.610295458 container died 53e544da9863f04a76f2a8e38d2169b7f616ccd02f5c90ecb1e353ae1be32db0 (image=quay.io/ceph/ceph:v20, name=suspicious_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:20:08 np0005603435 systemd[1]: libpod-5f8f4b4e621acc50b1f57591795b7dae6458db14d4e4391623cbb9a124b34a1c.scope: Deactivated successfully.
Jan 30 23:20:08 np0005603435 podman[98357]: 2026-01-31 04:20:08.571717423 +0000 UTC m=+0.429218575 container died 5f8f4b4e621acc50b1f57591795b7dae6458db14d4e4391623cbb9a124b34a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_raman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 30 23:20:08 np0005603435 systemd[1]: var-lib-containers-storage-overlay-3a0d77af1f31b06eb4b7a1efd75b35a5d73597b49bedea6c691c29682661f7d6-merged.mount: Deactivated successfully.
Jan 30 23:20:08 np0005603435 podman[98331]: 2026-01-31 04:20:08.611891334 +0000 UTC m=+0.667151507 container remove 53e544da9863f04a76f2a8e38d2169b7f616ccd02f5c90ecb1e353ae1be32db0 (image=quay.io/ceph/ceph:v20, name=suspicious_johnson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 30 23:20:08 np0005603435 podman[98357]: 2026-01-31 04:20:08.636102833 +0000 UTC m=+0.493603945 container remove 5f8f4b4e621acc50b1f57591795b7dae6458db14d4e4391623cbb9a124b34a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:20:08 np0005603435 systemd[1]: libpod-conmon-5f8f4b4e621acc50b1f57591795b7dae6458db14d4e4391623cbb9a124b34a1c.scope: Deactivated successfully.
Jan 30 23:20:08 np0005603435 systemd[1]: libpod-conmon-53e544da9863f04a76f2a8e38d2169b7f616ccd02f5c90ecb1e353ae1be32db0.scope: Deactivated successfully.
Jan 30 23:20:08 np0005603435 systemd[1]: var-lib-containers-storage-overlay-96883ff195a04c51c250549db06b791d3f7efad184f52a0b5720b35cd1b58ed7-merged.mount: Deactivated successfully.
Jan 30 23:20:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:20:09 np0005603435 podman[98490]: 2026-01-31 04:20:09.100679195 +0000 UTC m=+0.061139722 container create 05f5a3f9aa8b26e6c7d3c2ae96bf56ed564faa5265775a680acb5f83b9bbac2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_sinoussi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:20:09 np0005603435 systemd[1]: Started libpod-conmon-05f5a3f9aa8b26e6c7d3c2ae96bf56ed564faa5265775a680acb5f83b9bbac2e.scope.
Jan 30 23:20:09 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:09 np0005603435 podman[98490]: 2026-01-31 04:20:09.066028492 +0000 UTC m=+0.026489089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:20:09 np0005603435 podman[98490]: 2026-01-31 04:20:09.172788262 +0000 UTC m=+0.133248799 container init 05f5a3f9aa8b26e6c7d3c2ae96bf56ed564faa5265775a680acb5f83b9bbac2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:20:09 np0005603435 podman[98490]: 2026-01-31 04:20:09.178142897 +0000 UTC m=+0.138603404 container start 05f5a3f9aa8b26e6c7d3c2ae96bf56ed564faa5265775a680acb5f83b9bbac2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:20:09 np0005603435 zen_sinoussi[98506]: 167 167
Jan 30 23:20:09 np0005603435 podman[98490]: 2026-01-31 04:20:09.181937148 +0000 UTC m=+0.142397665 container attach 05f5a3f9aa8b26e6c7d3c2ae96bf56ed564faa5265775a680acb5f83b9bbac2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_sinoussi, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:20:09 np0005603435 systemd[1]: libpod-05f5a3f9aa8b26e6c7d3c2ae96bf56ed564faa5265775a680acb5f83b9bbac2e.scope: Deactivated successfully.
Jan 30 23:20:09 np0005603435 podman[98490]: 2026-01-31 04:20:09.182607942 +0000 UTC m=+0.143068449 container died 05f5a3f9aa8b26e6c7d3c2ae96bf56ed564faa5265775a680acb5f83b9bbac2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 30 23:20:09 np0005603435 systemd[1]: var-lib-containers-storage-overlay-cb1eef2b171c129517de4adbc2e53f6aa7ad07ab9ede13d0e85a9dd4f7d8cf23-merged.mount: Deactivated successfully.
Jan 30 23:20:09 np0005603435 podman[98490]: 2026-01-31 04:20:09.217061291 +0000 UTC m=+0.177521798 container remove 05f5a3f9aa8b26e6c7d3c2ae96bf56ed564faa5265775a680acb5f83b9bbac2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:20:09 np0005603435 systemd[1]: libpod-conmon-05f5a3f9aa8b26e6c7d3c2ae96bf56ed564faa5265775a680acb5f83b9bbac2e.scope: Deactivated successfully.
Jan 30 23:20:09 np0005603435 podman[98553]: 2026-01-31 04:20:09.346752252 +0000 UTC m=+0.037341472 container create a17d976b3cb525567a1ecf1107d37d81ccfe4e6795b61a420c1efda850ca6049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:20:09 np0005603435 systemd[1]: Started libpod-conmon-a17d976b3cb525567a1ecf1107d37d81ccfe4e6795b61a420c1efda850ca6049.scope.
Jan 30 23:20:09 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c617ad4fabebc897c8ce9a5f7d6909b79f2b48ea9f6926cda2855a0f52b581/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c617ad4fabebc897c8ce9a5f7d6909b79f2b48ea9f6926cda2855a0f52b581/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c617ad4fabebc897c8ce9a5f7d6909b79f2b48ea9f6926cda2855a0f52b581/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c617ad4fabebc897c8ce9a5f7d6909b79f2b48ea9f6926cda2855a0f52b581/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:09 np0005603435 podman[98553]: 2026-01-31 04:20:09.427920043 +0000 UTC m=+0.118509273 container init a17d976b3cb525567a1ecf1107d37d81ccfe4e6795b61a420c1efda850ca6049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_herschel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:20:09 np0005603435 podman[98553]: 2026-01-31 04:20:09.332550668 +0000 UTC m=+0.023139918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:20:09 np0005603435 podman[98553]: 2026-01-31 04:20:09.439366398 +0000 UTC m=+0.129955628 container start a17d976b3cb525567a1ecf1107d37d81ccfe4e6795b61a420c1efda850ca6049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_herschel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 30 23:20:09 np0005603435 podman[98553]: 2026-01-31 04:20:09.444027858 +0000 UTC m=+0.134617098 container attach a17d976b3cb525567a1ecf1107d37d81ccfe4e6795b61a420c1efda850ca6049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_herschel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:20:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 30 23:20:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 30 23:20:09 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 30 23:20:09 np0005603435 ceph-mgr[75599]: [progress INFO root] update: starting ev ac365469-2fdd-4ab2-8691-5c888084f98e (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 30 23:20:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 30 23:20:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:09 np0005603435 python3[98562]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:20:09 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:09 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:09 np0005603435 podman[98578]: 2026-01-31 04:20:09.541021458 +0000 UTC m=+0.052255632 container create 79e8db025c676adc69c5e2a9c9074a2848741113e26f5c2210a1ad3b8dad85b7 (image=quay.io/ceph/ceph:v20, name=dazzling_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:20:09 np0005603435 systemd[1]: Started libpod-conmon-79e8db025c676adc69c5e2a9c9074a2848741113e26f5c2210a1ad3b8dad85b7.scope.
Jan 30 23:20:09 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294bfa1879f894a66a15c99113067275de51d1ad39c554cb5d4376f9c6085be6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294bfa1879f894a66a15c99113067275de51d1ad39c554cb5d4376f9c6085be6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:09 np0005603435 podman[98578]: 2026-01-31 04:20:09.517739549 +0000 UTC m=+0.028973743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:20:09 np0005603435 podman[98578]: 2026-01-31 04:20:09.617054728 +0000 UTC m=+0.128288992 container init 79e8db025c676adc69c5e2a9c9074a2848741113e26f5c2210a1ad3b8dad85b7 (image=quay.io/ceph/ceph:v20, name=dazzling_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:20:09 np0005603435 podman[98578]: 2026-01-31 04:20:09.622567047 +0000 UTC m=+0.133801231 container start 79e8db025c676adc69c5e2a9c9074a2848741113e26f5c2210a1ad3b8dad85b7 (image=quay.io/ceph/ceph:v20, name=dazzling_nobel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:20:09 np0005603435 podman[98578]: 2026-01-31 04:20:09.627270487 +0000 UTC m=+0.138504751 container attach 79e8db025c676adc69c5e2a9c9074a2848741113e26f5c2210a1ad3b8dad85b7 (image=quay.io/ceph/ceph:v20, name=dazzling_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 30 23:20:10 np0005603435 lvm[98691]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:20:10 np0005603435 lvm[98691]: VG ceph_vg1 finished
Jan 30 23:20:10 np0005603435 lvm[98688]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:20:10 np0005603435 lvm[98688]: VG ceph_vg0 finished
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 45 pg[2.0( empty local-lis/les=20/21 n=0 ec=16/16 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=14.396343231s) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 79.421577454s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 45 pg[2.0( empty local-lis/les=20/21 n=0 ec=16/16 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=14.396343231s) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown pruub 79.421577454s@ mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.1f( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.1( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.2( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.7( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.6( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.9( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.8( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.b( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.a( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.c( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.d( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.f( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.e( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.11( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.10( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.13( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.12( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.15( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.14( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.17( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.16( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.19( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.18( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.1a( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.1c( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.1b( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.1d( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.1e( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.5( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.4( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 46 pg[2.3( empty local-lis/les=20/21 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 lvm[98693]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:20:10 np0005603435 lvm[98693]: VG ceph_vg2 finished
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3093839043' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 30 23:20:10 np0005603435 dazzling_nobel[98594]: 
Jan 30 23:20:10 np0005603435 laughing_herschel[98573]: {}
Jan 30 23:20:10 np0005603435 dazzling_nobel[98594]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Jan 30 23:20:10 np0005603435 systemd[1]: libpod-79e8db025c676adc69c5e2a9c9074a2848741113e26f5c2210a1ad3b8dad85b7.scope: Deactivated successfully.
Jan 30 23:20:10 np0005603435 podman[98578]: 2026-01-31 04:20:10.19259561 +0000 UTC m=+0.703829804 container died 79e8db025c676adc69c5e2a9c9074a2848741113e26f5c2210a1ad3b8dad85b7 (image=quay.io/ceph/ceph:v20, name=dazzling_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:20:10 np0005603435 systemd[1]: libpod-a17d976b3cb525567a1ecf1107d37d81ccfe4e6795b61a420c1efda850ca6049.scope: Deactivated successfully.
Jan 30 23:20:10 np0005603435 systemd[1]: libpod-a17d976b3cb525567a1ecf1107d37d81ccfe4e6795b61a420c1efda850ca6049.scope: Consumed 1.052s CPU time.
Jan 30 23:20:10 np0005603435 podman[98553]: 2026-01-31 04:20:10.205409585 +0000 UTC m=+0.895998805 container died a17d976b3cb525567a1ecf1107d37d81ccfe4e6795b61a420c1efda850ca6049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_herschel, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:20:10 np0005603435 systemd[1]: var-lib-containers-storage-overlay-28c617ad4fabebc897c8ce9a5f7d6909b79f2b48ea9f6926cda2855a0f52b581-merged.mount: Deactivated successfully.
Jan 30 23:20:10 np0005603435 podman[98553]: 2026-01-31 04:20:10.239809552 +0000 UTC m=+0.930398762 container remove a17d976b3cb525567a1ecf1107d37d81ccfe4e6795b61a420c1efda850ca6049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 30 23:20:10 np0005603435 systemd[1]: libpod-conmon-a17d976b3cb525567a1ecf1107d37d81ccfe4e6795b61a420c1efda850ca6049.scope: Deactivated successfully.
Jan 30 23:20:10 np0005603435 systemd[1]: var-lib-containers-storage-overlay-294bfa1879f894a66a15c99113067275de51d1ad39c554cb5d4376f9c6085be6-merged.mount: Deactivated successfully.
Jan 30 23:20:10 np0005603435 podman[98578]: 2026-01-31 04:20:10.266110736 +0000 UTC m=+0.777344910 container remove 79e8db025c676adc69c5e2a9c9074a2848741113e26f5c2210a1ad3b8dad85b7 (image=quay.io/ceph/ceph:v20, name=dazzling_nobel, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:20:10 np0005603435 systemd[1]: libpod-conmon-79e8db025c676adc69c5e2a9c9074a2848741113e26f5c2210a1ad3b8dad85b7.scope: Deactivated successfully.
Jan 30 23:20:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v99: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 209 KiB/s rd, 16 KiB/s wr, 463 op/s
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 30 23:20:10 np0005603435 ceph-mgr[75599]: [progress INFO root] update: starting ev 992680d2-ecd2-4c19-ae02-11a2f116a1e0 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 30 23:20:10 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 47 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=47 pruub=11.859842300s) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active pruub 83.306472778s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:10 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 47 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=47 pruub=11.859842300s) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown pruub 83.306472778s@ mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.1f( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.1e( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.1d( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.1c( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.1b( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.a( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.9( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.6( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.4( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.3( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.5( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.2( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.1( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.8( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.7( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.0( empty local-lis/les=45/47 n=0 ec=16/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.c( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.b( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.d( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.f( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.10( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.11( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.15( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.17( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.16( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.1a( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.19( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.12( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.18( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.14( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.13( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 47 pg[2.e( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=20/20 les/c/f=21/21/0 sis=45) [2] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:10 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 47 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=47 pruub=13.303381920s) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active pruub 90.730644226s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 47 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=47 pruub=13.303381920s) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown pruub 90.730644226s@ mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 30 23:20:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 30 23:20:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 30 23:20:11 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 30 23:20:11 np0005603435 ceph-mgr[75599]: [progress INFO root] update: starting ev fa33d763-9ef6-4606-bbe1-3305add4bd0e (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.1c( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.1d( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.1f( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.1a( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.1b( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.1e( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.18( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.19( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.7( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.6( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.5( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.1( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.3( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.a( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.8( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.4( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.b( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.2( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.9( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.c( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.e( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.f( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.d( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.10( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.11( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.15( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.13( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.12( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.14( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.16( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.17( empty local-lis/les=18/19 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.1c( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.1f( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.1f( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.1d( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.1e( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.8( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.7( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.b( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.6( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.1b( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.1b( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.a( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.18( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.1e( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.1a( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.19( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.7( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.1a( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.9( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.4( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.5( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.3( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.19( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.1( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.1d( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.1( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.6( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.5( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.3( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.4( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.b( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.a( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.2( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.d( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.c( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.1c( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.8( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.c( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.9( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.0( empty local-lis/les=47/48 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.e( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.f( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.11( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.10( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.12( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.13( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.14( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.15( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.16( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.10( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.f( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.2( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.17( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.1d( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.18( empty local-lis/les=20/21 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.d( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.11( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.e( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.14( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.16( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.12( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.13( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.15( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 48 pg[3.17( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=18/18 les/c/f=19/19/0 sis=47) [1] r=0 lpr=47 pi=[18,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.7( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.1e( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.8( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.1f( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.1b( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.6( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.b( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.a( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.9( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.4( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.3( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.1( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.19( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.2( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.0( empty local-lis/les=47/48 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.d( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.1a( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.f( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.1c( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.5( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.e( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.10( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.12( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.16( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.15( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.c( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.11( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.18( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.17( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.13( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 48 pg[4.14( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=20/20 les/c/f=21/21/0 sis=47) [0] r=0 lpr=47 pi=[20,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:11 np0005603435 ceph-mgr[75599]: [progress WARNING root] Starting Global Recovery Event,94 pgs not in active + clean state
Jan 30 23:20:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v102: 104 pgs: 1 peering, 62 unknown, 41 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 0 B/s wr, 489 op/s
Jan 30 23:20:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 30 23:20:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 30 23:20:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 30 23:20:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:12 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 30 23:20:12 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 30 23:20:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 30 23:20:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 30 23:20:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 30 23:20:12 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 30 23:20:12 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:12 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 30 23:20:12 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:12 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 30 23:20:12 np0005603435 ceph-mgr[75599]: [progress INFO root] update: starting ev 555a29a8-d7c1-45d9-b28b-754e917b9d15 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 30 23:20:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 30 23:20:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 30 23:20:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 30 23:20:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 30 23:20:13 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 49 pg[6.0( v 37'39 (0'0,37'39] local-lis/les=24/25 n=22 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=49 pruub=8.022858620s) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 36'38 mlcod 36'38 active pruub 87.993057251s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:13 np0005603435 ceph-mgr[75599]: [progress INFO root] update: starting ev 677b9381-bc71-48d4-afa2-cf40655d8ebd (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 30 23:20:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 30 23:20:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 50 pg[6.0( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=24/25 n=1 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=49 pruub=8.022858620s) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 36'38 mlcod 0'0 unknown pruub 87.993057251s@ mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 50 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=24/25 n=2 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 50 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=24/25 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 50 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=24/25 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 50 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=24/25 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 50 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=24/25 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 50 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=24/25 n=2 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 50 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=24/25 n=2 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 50 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=24/25 n=2 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 50 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=24/25 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 50 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=24/25 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 50 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=24/25 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 50 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=24/25 n=2 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 50 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=24/25 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 50 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=24/25 n=2 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:13 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 50 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=24/25 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:13 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:13 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 30 23:20:13 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:13 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:20:14 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 30 23:20:14 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 30 23:20:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v105: 150 pgs: 1 peering, 108 unknown, 41 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 0 B/s wr, 489 op/s
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 30 23:20:14 np0005603435 ceph-mgr[75599]: [progress INFO root] update: starting ev fdaf7015-dd1b-4c0e-8ac4-248462e7392d (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 51 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 51 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 51 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 51 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 51 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 51 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 51 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 51 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 51 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 51 pg[6.0( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 36'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 51 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 51 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 51 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 51 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [0] r=0 lpr=49 pi=[24,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:14 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:15 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 30 23:20:15 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 30 23:20:15 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 30 23:20:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 30 23:20:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 30 23:20:15 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 30 23:20:15 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 30 23:20:15 np0005603435 ceph-mgr[75599]: [progress INFO root] update: starting ev 684a7fde-a043-43ff-a910-716393139b3b (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 30 23:20:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 30 23:20:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:15 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:15 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 49 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=49 pruub=11.052305222s) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active pruub 81.981079102s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 49 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=49 pruub=11.052305222s) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown pruub 81.981079102s@ mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.d( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.c( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.e( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.f( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.5( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.6( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.10( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.11( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.12( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.8( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.9( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.a( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.b( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.1( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.2( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.3( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.4( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.19( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.1a( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.7( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.16( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.18( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.1e( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.17( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.1f( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.1b( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.1c( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.1d( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.13( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.15( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:15 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 52 pg[5.14( empty local-lis/les=22/23 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v108: 212 pgs: 1 peering, 139 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:16 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 30 23:20:16 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] update: starting ev 30ff9c2f-fd8f-4041-8cb5-b288ff096741 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] complete: finished ev 66177c99-2b89-41d1-850f-e0123af5295e (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] Completed event 66177c99-2b89-41d1-850f-e0123af5295e (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] complete: finished ev 47e60d41-e1db-43e1-89b6-9fc5dd1be350 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] Completed event 47e60d41-e1db-43e1-89b6-9fc5dd1be350 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] complete: finished ev ac365469-2fdd-4ab2-8691-5c888084f98e (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] Completed event ac365469-2fdd-4ab2-8691-5c888084f98e (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] complete: finished ev 992680d2-ecd2-4c19-ae02-11a2f116a1e0 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] Completed event 992680d2-ecd2-4c19-ae02-11a2f116a1e0 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] complete: finished ev fa33d763-9ef6-4606-bbe1-3305add4bd0e (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] Completed event fa33d763-9ef6-4606-bbe1-3305add4bd0e (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] complete: finished ev 555a29a8-d7c1-45d9-b28b-754e917b9d15 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] Completed event 555a29a8-d7c1-45d9-b28b-754e917b9d15 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] complete: finished ev 677b9381-bc71-48d4-afa2-cf40655d8ebd (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] Completed event 677b9381-bc71-48d4-afa2-cf40655d8ebd (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] complete: finished ev fdaf7015-dd1b-4c0e-8ac4-248462e7392d (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] Completed event fdaf7015-dd1b-4c0e-8ac4-248462e7392d (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] complete: finished ev 684a7fde-a043-43ff-a910-716393139b3b (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] Completed event 684a7fde-a043-43ff-a910-716393139b3b (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] complete: finished ev 30ff9c2f-fd8f-4041-8cb5-b288ff096741 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] Completed event 30ff9c2f-fd8f-4041-8cb5-b288ff096741 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[10.0( v 42'18 (0'0,42'18] local-lis/les=39/40 n=9 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=53 pruub=10.766615868s) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 42'17 mlcod 42'17 active pruub 82.387351990s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[10.0( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=53 pruub=10.766615868s) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 42'17 mlcod 0'0 unknown pruub 82.387351990s@ mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.10( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.1f( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.17( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.a( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.0( empty local-lis/les=49/53 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.6( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.b( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.e( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.d( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.1b( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.8( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 53 pg[5.1c( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=22/22 les/c/f=23/23/0 sis=49) [2] r=0 lpr=49 pi=[22,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 51 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=51 pruub=15.308686256s) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active pruub 93.113632202s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 51 pg[8.0( v 36'6 (0'0,36'6] local-lis/les=35/36 n=6 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=51 pruub=14.532058716s) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 36'5 mlcod 36'5 active pruub 92.337150574s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=51 pruub=15.308686256s) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown pruub 93.113632202s@ mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.7( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.b( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.d( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.10( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.12( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.0( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=51 pruub=14.532058716s) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 36'5 mlcod 0'0 unknown pruub 92.337150574s@ mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.16( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.17( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.19( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.1d( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.14( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.1e( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=26/27 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.1( v 36'6 (0'0,36'6] local-lis/les=35/36 n=1 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.3( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.4( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.5( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.1a( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=1 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.7( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.8( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.9( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.a( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.e( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.10( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.11( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.12( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.19( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.13( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.14( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.1c( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.15( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.1b( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.16( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.17( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.18( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.1d( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.1e( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 52 pg[8.1f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=35/36 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:16 np0005603435 ceph-mgr[75599]: [progress INFO root] Writing back 15 completed events
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 30 23:20:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 53 pg[9.0( v 43'1440 (0'0,43'1440] local-lis/les=37/38 n=242 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=53 pruub=8.314678192s) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 43'1439 mlcod 43'1439 active pruub 86.353401184s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 53 pg[9.0( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=7 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=53 pruub=8.314678192s) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 43'1439 mlcod 0'0 unknown pruub 86.353401184s@ mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b36fd80 space 0x55b81ac15440 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b3a6e80 space 0x55b81ace6540 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b37cb00 space 0x55b81acf7a40 0x0~98 clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b37d000 space 0x55b81a7f8840 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b37d980 space 0x55b81a79f140 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b37cb80 space 0x55b81a7f9d40 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b3e2f00 space 0x55b81ac07a40 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b354700 space 0x55b81b7d0240 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b494300 space 0x55b81ac0c240 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b4b4680 space 0x55b81b33d740 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b355d80 space 0x55b81b7d1d40 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b3e3100 space 0x55b81ac07140 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b4bbe80 space 0x55b81ac03a40 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b354100 space 0x55b81ac5c240 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b354300 space 0x55b81ac5cb40 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b478a80 space 0x55b81ac9fa40 0x0~98 clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b4c4180 space 0x55b81ac9bd40 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b4bb880 space 0x55b81ac02240 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b355f80 space 0x55b81acbc840 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b4b4100 space 0x55b81abcb440 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b3e2800 space 0x55b81b458540 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b354500 space 0x55b81ac5d440 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b3e3300 space 0x55b81ac06840 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b375e00 space 0x55b81b7d1440 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b4b4c00 space 0x55b81b472240 0x0~98 clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b3e2a00 space 0x55b81b458e40 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b3a6f00 space 0x55b81ac9ae40 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b478d80 space 0x55b81ac9f140 0x0~98 clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b375280 space 0x55b81ac14540 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b36ff00 space 0x55b81a79eb40 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b375b00 space 0x55b81ac0cb40 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b37c680 space 0x55b81a79e240 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b4a1e80 space 0x55b81ac5da40 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b4b4000 space 0x55b81abca240 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b4b4600 space 0x55b81abcb140 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b4b4900 space 0x55b81b4fa840 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b478580 space 0x55b81b473140 0x0~98 clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b479500 space 0x55b81ac15740 0x0~98 clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b37c500 space 0x55b81b4fa240 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b37d900 space 0x55b81a7f9440 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b4b4b00 space 0x55b81b33c540 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b354900 space 0x55b81b7d0b40 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b37d400 space 0x55b81a767d40 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b3e2100 space 0x55b81ac17d40 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b37d780 space 0x55b81a766540 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b3e2480 space 0x55b81ac8a240 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b37ca80 space 0x55b81ac3d140 0x0~98 clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b3e2300 space 0x55b81ac16e40 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b3b0c00 space 0x55b81ac84e40 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b375f80 space 0x55b81acbda40 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b375700 space 0x55b81a7ec540 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b375480 space 0x55b81acbd140 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b37dd00 space 0x55b81ac86840 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b3a6080 space 0x55b81acbc540 0x0~9a clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b4b4300 space 0x55b81abcab40 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b3e2e00 space 0x55b81abcbd40 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b3e2c00 space 0x55b81b459740 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b3fa180 space 0x55b81acbce40 0x0~98 clean)
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55b81bac3440) split_cache   moving buffer(0x55b81b4bbf00 space 0x55b81bb11440 0x0~6e clean)
Jan 30 23:20:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 30 23:20:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 30 23:20:17 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.12( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.11( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.10( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.1f( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.1e( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.1d( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.1c( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.1b( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.1a( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.19( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.18( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.6( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.5( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.7( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.4( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.3( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.8( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.f( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.a( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.b( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.c( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.d( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.e( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.1( v 42'18 (0'0,42'18] local-lis/les=39/40 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.2( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.13( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.14( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.15( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.16( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.9( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.17( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.12( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.11( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.10( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.1e( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.1f( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.1c( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.18( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.1a( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.19( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.1b( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.5( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.7( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.4( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.8( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.f( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.3( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.6( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.1d( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.15( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.14( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.17( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.16( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.c( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.0( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 42'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.d( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.e( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.a( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.13( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.2( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.14( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.b( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.11( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.10( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.13( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.15( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.12( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.1( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.17( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.9( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 54 pg[10.16( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [2] r=0 lpr=53 pi=[39,53)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.d( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.c( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.f( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.9( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.b( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.2( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.1( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.e( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.a( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.8( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.3( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.6( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.7( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.4( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.5( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.1a( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.1b( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.19( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.1e( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.18( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.1f( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.1b( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.1d( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.1c( v 43'1440 lc 0'0 (0'0,43'1440] local-lis/les=37/38 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.1a( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.16( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.19( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.18( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.17( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.1f( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.1e( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.11( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.1d( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.13( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.10( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.13( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.12( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.2( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.d( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.1c( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.14( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.c( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.1( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.7( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.8( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.a( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.5( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.9( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.c( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.b( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.3( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.e( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.3( v 36'6 (0'0,36'6] local-lis/les=51/54 n=1 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.2( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.f( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.0( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 43'1439 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.0( empty local-lis/les=51/54 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.1( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.1( v 36'6 (0'0,36'6] local-lis/les=51/54 n=1 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.4( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.6( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.a( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=51/54 n=1 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.d( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.7( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.6( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.8( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.9( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.8( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.e( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=51/54 n=1 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.3( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.a( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.b( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.5( v 36'6 (0'0,36'6] local-lis/les=51/54 n=1 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.4( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.14( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.5( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.15( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.1a( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/54 n=1 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.19( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.1b( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.16( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.17( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.10( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.19( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.0( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 36'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.11( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.18( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.1e( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.1e( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.12( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[7.13( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [1] r=0 lpr=51 pi=[26,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.1d( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 54 pg[9.1c( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=37/37 les/c/f=38/38/0 sis=53) [1] r=0 lpr=53 pi=[37,53)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 30 23:20:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:17 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 30 23:20:18 np0005603435 systemd[76696]: Starting Mark boot as successful...
Jan 30 23:20:18 np0005603435 systemd[76696]: Finished Mark boot as successful.
Jan 30 23:20:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v111: 274 pgs: 65 peering, 93 unknown, 116 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:20:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 30 23:20:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:18 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 30 23:20:18 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 30 23:20:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 30 23:20:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 30 23:20:18 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 30 23:20:18 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 55 pg[11.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=55 pruub=10.443523407s) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active pruub 90.393585205s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:18 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 55 pg[11.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=55 pruub=10.443523407s) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown pruub 90.393585205s@ mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:18 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 30 23:20:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:20:19 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 30 23:20:19 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 30 23:20:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 30 23:20:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 30 23:20:19 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.17( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.16( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.15( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.14( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.13( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.12( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.11( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.10( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.f( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.e( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.d( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.b( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.9( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.2( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.3( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.c( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.8( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.a( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.1( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.4( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.5( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.6( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.7( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.18( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.19( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.1a( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.1c( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.1d( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.1e( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.1f( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.16( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.1b( empty local-lis/les=41/42 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.17( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.15( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.12( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.11( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.f( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.13( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.14( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.e( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.10( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.b( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.d( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.9( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.0( empty local-lis/les=55/56 n=0 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.2( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.3( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.c( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.a( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.8( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.1( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.4( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.5( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.7( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.6( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.18( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.19( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.1a( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.1c( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.1d( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.1e( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.1f( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 56 pg[11.1b( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v114: 305 pgs: 65 peering, 62 unknown, 178 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:20:21 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 30 23:20:21 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 30 23:20:21 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 30 23:20:22 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 30 23:20:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v115: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 30 23:20:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.12( v 55'19 (0'0,55'19] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.596944809s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 42'18 active pruub 88.624633789s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.1d( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.593302727s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.621017456s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.1e( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.598949432s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.626892090s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.19( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.442576408s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.470542908s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.11( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.596866608s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 active pruub 88.624855042s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.19( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.442549706s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.470542908s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.1e( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.598872185s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.626892090s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.11( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.596790314s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 88.624855042s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.12( v 55'19 (0'0,55'19] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.596847534s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 42'18 unknown NOTIFY pruub 88.624633789s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.10( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.600633621s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 active pruub 88.629005432s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.1d( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.593236923s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.621017456s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.18( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.442078590s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.470542908s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.17( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.441839218s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.470321655s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.17( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.441815376s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.470321655s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.18( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.442035675s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.470542908s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.16( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.441677094s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.470466614s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.16( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.441637039s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.470466614s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.10( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.600152969s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 88.629005432s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.15( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.441167831s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.470184326s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.15( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.441147804s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.470184326s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.12( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.596600533s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.625701904s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.12( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.596570969s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.625701904s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.11( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.596514702s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.625663757s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.13( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.596608162s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.625801086s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.13( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.596590042s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.625801086s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.11( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.596455574s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.625663757s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.13( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.441811562s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.471138000s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.13( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.441795349s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.471138000s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.15( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.595760345s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.625862122s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.15( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.595741272s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.625862122s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.1a( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.599017143s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 active pruub 88.629180908s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.1a( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.598971367s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 88.629180908s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.19( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.598899841s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 active pruub 88.629196167s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.1e( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.598720551s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 active pruub 88.629051208s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.14( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.596076965s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.625839233s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.1e( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.598697662s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 88.629051208s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.19( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.598855019s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 88.629196167s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.14( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.595433235s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.625839233s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.11( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.439442635s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.470077515s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.11( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.439396858s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.470077515s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.16( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.595323563s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.626060486s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.16( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.595285416s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.626060486s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.f( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.439105034s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.470069885s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.f( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.438998222s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.470069885s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.6( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.598043442s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 active pruub 88.629409790s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.6( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.598010063s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 88.629409790s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.9( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.594431877s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.625885010s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.4( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.597694397s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 active pruub 88.629264832s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.9( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.594389915s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.625885010s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.7( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.597619057s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 active pruub 88.629234314s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.d( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.438312531s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.470001221s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.d( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.438245773s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.470001221s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.7( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.597482681s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 88.629234314s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.4( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.597491264s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 88.629264832s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.b( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.438027382s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.469993591s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.b( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.437990189s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.469993591s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.8( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.597208023s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 active pruub 88.629280090s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.8( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.597175598s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 88.629280090s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.f( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.597065926s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 active pruub 88.629310608s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.c( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.593684196s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.625953674s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.f( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.597034454s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 88.629310608s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.7( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.593757629s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.625984192s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.c( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.593613625s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.625953674s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.8( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.437438965s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.469909668s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.f( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.593533516s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.626029968s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.f( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.593509674s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.626029968s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.7( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.437438011s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.469978333s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.8( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.437397957s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.469909668s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.7( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.593626022s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.625984192s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.9( v 55'19 (0'0,55'19] local-lis/les=53/54 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.598195076s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 42'18 active pruub 88.630805969s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.7( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.437401772s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.469978333s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.9( v 55'19 (0'0,55'19] local-lis/les=53/54 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.598104477s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 42'18 unknown NOTIFY pruub 88.630805969s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.2( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.437108994s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.469909668s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.2( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.437086105s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.469909668s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.5( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.593272209s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.626129150s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.5( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.593235016s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.626129150s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.b( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.597211838s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 active pruub 88.630355835s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.4( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.436609268s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.469894409s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.b( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.597055435s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 88.630355835s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.4( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.592865944s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.626220703s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.3( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.592885017s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.626350403s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.4( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.436569214s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.469894409s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.3( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.592855453s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.626350403s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.4( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.592829704s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.626220703s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.d( v 55'19 (0'0,55'19] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.596672058s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 42'18 active pruub 88.630409241s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.5( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.436091423s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.469902039s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.d( v 55'19 (0'0,55'19] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.596620560s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 42'18 unknown NOTIFY pruub 88.630409241s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.5( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.436013222s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.469902039s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.e( v 55'19 (0'0,55'19] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.596409798s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 42'18 active pruub 88.630416870s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.e( v 55'19 (0'0,55'19] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.596375465s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 42'18 unknown NOTIFY pruub 88.630416870s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.2( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.592087746s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.626258850s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.3( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.435721397s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.469902039s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.6( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.435603142s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.469795227s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.1( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.596513748s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 active pruub 88.630760193s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.2( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.592040062s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.626258850s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.1( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.592109680s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.626388550s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.1( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.596487999s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 88.630760193s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.6( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.435576439s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.469795227s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.3( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.435678482s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.469902039s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.1( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.592065811s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.626388550s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.9( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.435199738s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.469787598s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.9( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.435173988s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.469787598s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.2( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.595670700s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 active pruub 88.630477905s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.2( v 42'18 (0'0,42'18] local-lis/les=53/54 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.595648766s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 88.630477905s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.13( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.595460892s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 active pruub 88.630455017s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.1b( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.434771538s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.469772339s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.a( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.434819221s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.469825745s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.13( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.595417976s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 88.630455017s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.1b( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.434721947s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.469772339s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.a( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.434762955s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.469825745s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.14( v 55'19 (0'0,55'19] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.595314980s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 42'18 active pruub 88.630485535s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.14( v 55'19 (0'0,55'19] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.595273972s) [1] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 42'18 unknown NOTIFY pruub 88.630485535s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.1c( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.434318542s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.469673157s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.1a( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.591077805s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.626586914s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.15( v 55'19 (0'0,55'19] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.595021248s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 42'18 active pruub 88.630538940s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.15( v 55'19 (0'0,55'19] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.594980240s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 42'18 unknown NOTIFY pruub 88.630538940s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.1a( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.591035843s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.626586914s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.1c( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.434277534s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.469673157s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.16( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.594954491s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 active pruub 88.630744934s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.16( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.594930649s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 88.630744934s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.1f( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.429520607s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.465538025s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.1f( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.429490089s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.465538025s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.17( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.594611168s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 active pruub 88.630775452s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[10.17( v 42'18 (0'0,42'18] local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.594566345s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 88.630775452s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.18( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.590281487s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.626556396s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.18( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.590255737s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.626556396s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:20:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[5.1e( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[5.11( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.1d( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.432762146s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 active pruub 89.469543457s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[2.1d( empty local-lis/les=45/47 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57 pruub=11.432426453s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 unknown NOTIFY pruub 89.469543457s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[2.17( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.19( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.590499878s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 active pruub 87.626525879s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[2.19( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[5.19( empty local-lis/les=49/53 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57 pruub=9.588748932s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 unknown NOTIFY pruub 87.626525879s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[2.18( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[5.13( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[10.9( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[5.7( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[2.15( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[5.12( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[10.4( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[5.4( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[2.f( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[10.7( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[5.16( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[10.6( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[2.2( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[5.9( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[5.5( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[2.d( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[5.f( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[5.2( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[5.3( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[2.3( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[10.e( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[2.5( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[5.c( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[2.b( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[2.8( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[2.9( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[10.1( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[2.4( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[2.7( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[2.6( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[10.11( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[2.16( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[5.15( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[5.14( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[2.13( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[2.11( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[5.1d( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[10.2( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.18( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.431918144s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.792778015s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.14( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.432076454s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.792976379s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.18( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.431863785s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.792778015s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[5.1( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.13( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.431837082s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.792961121s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.14( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.432025909s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.792976379s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.12( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.431627274s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.792800903s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.12( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.431590080s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.792800903s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.11( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.431272507s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.792541504s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.11( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.431248665s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.792541504s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.f( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.431118011s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.792518616s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.f( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.431088448s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.792518616s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.656227112s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 active pruub 105.017692566s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.10( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.430933952s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.792419434s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.e( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.430910110s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.792396545s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.e( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.430887222s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.792396545s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.656185150s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 105.017692566s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.10( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.430895805s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.792419434s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.d( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.427013397s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.788696289s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.d( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.426960945s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.788696289s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.655961990s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 active pruub 105.017738342s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.655923843s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 105.017738342s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.655485153s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 active pruub 105.017494202s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.1( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.426556587s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.788604736s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.655455589s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 105.017494202s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.655480385s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 active pruub 105.017562866s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.1( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.426494598s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.788604736s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.655452728s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 105.017562866s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.13( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.430764198s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.792961121s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.4( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.426136017s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.788566589s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.9( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.426258087s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.788734436s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.2( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.426172256s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.788673401s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.4( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.426097870s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.788566589s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.654843330s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 active pruub 105.017356873s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.654815674s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 105.017356873s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.2( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.426129341s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.788673401s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.9( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.426231384s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.788734436s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.5( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.429603577s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.792381287s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.5( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.429579735s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.792381287s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.654539108s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 active pruub 105.017402649s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.654499054s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 105.017402649s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.1a( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.425576210s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.788566589s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.1a( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.425534248s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.788566589s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.620215416s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 active pruub 104.983657837s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.1b( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.425045967s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.788490295s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.620172501s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 104.983657837s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.7( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.424734116s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.788230896s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.1b( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.424960136s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.788490295s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.7( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.424687386s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.788230896s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.a( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.425558090s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.788597107s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.a( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.424813271s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.788597107s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.619729996s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 active pruub 104.983589172s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57 pruub=15.619672775s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 104.983589172s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[2.1b( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[10.15( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[2.a( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[5.1a( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[2.1c( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[5.18( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[5.19( empty local-lis/les=0/0 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.8( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.422962189s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.788352966s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.8( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.422916412s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.788352966s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.1c( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.426706314s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 101.792350769s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[4.1c( empty local-lis/les=47/48 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.426673889s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 101.792350769s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.17( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.936490059s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.976493835s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.17( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.936451912s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.976493835s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.1f( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.413844109s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.454010010s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.1f( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.413785934s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.454010010s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.1b( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.577055931s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.617469788s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.1b( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.577011108s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.617469788s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.577945709s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 active pruub 94.618797302s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.1e( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.417278290s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.458152771s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.577913284s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 94.618797302s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.1e( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.417249680s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.458152771s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.1a( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.577721596s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.618774414s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.577709198s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.618789673s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.1a( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.577702522s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.618774414s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.577688217s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.618789673s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[4.18( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.15( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.935378075s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.976654053s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.15( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.935358047s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.976654053s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.1d( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.417070389s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.458381653s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[4.11( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.1d( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.417041779s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.458381653s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.577429771s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 active pruub 94.618911743s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[4.e( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.577407837s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 94.618911743s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.14( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.935061455s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.976806641s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.577088356s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.618858337s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.14( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.935009956s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.976806641s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.577033997s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.618858337s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.1b( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.416036606s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.457992554s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.1b( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.416004181s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.457992554s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.1f( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576999664s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.619049072s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.577027321s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.619117737s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.11( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.577000618s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 active pruub 94.619132996s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.1f( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576966286s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.619049072s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[4.1( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.12( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.934492111s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.976676941s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576941490s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.619117737s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.12( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.934469223s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.976676941s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[4.13( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[4.1a( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576675415s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.619056702s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.11( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.576767921s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 94.619132996s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576650620s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.619056702s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.11( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.934077263s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.976715088s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576432228s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.619110107s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.11( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.934045792s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.976715088s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.13( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.576429367s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 active pruub 94.619148254s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576396942s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.619110107s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.10( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.933971405s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.976722717s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.13( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.576391220s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 94.619148254s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.10( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.933937073s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.976722717s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.18( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.415252686s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.458145142s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[4.1b( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.18( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.415229797s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.458145142s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.18( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.575895309s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.618919373s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.18( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.575845718s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.618919373s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.3( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.579040527s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.622154236s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.f( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.933634758s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.976745605s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.3( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.579014778s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.622154236s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.7( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.415224075s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.458396912s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.f( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.933593750s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.976745605s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.1c( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576050758s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.619232178s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.1c( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576017380s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.619232178s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.7( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.415187836s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.458396912s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.575900078s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.619209290s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.575875282s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.619209290s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.d( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.575849533s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 active pruub 94.619209290s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.e( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.933423042s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.976821899s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.e( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.933382988s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.976821899s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.578429222s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.621917725s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.2( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.575716972s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.619216919s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.6( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.414877892s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.458396912s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.d( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.575827599s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 94.619209290s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.2( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.575689316s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.619216919s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.6( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.414844513s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.458396912s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.1( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.578248024s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.622016907s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.d( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.933036804s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.976829529s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.1( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.578212738s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.622016907s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.d( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.933005333s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.976829529s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.5( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.414499283s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.458496094s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.5( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.414471626s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.458496094s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.578089714s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 active pruub 94.622161865s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[4.a( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.578063965s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 94.622161865s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.b( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.932631493s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.976844788s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.b( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.932608604s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.976844788s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.3( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.414150238s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.458480835s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.577599525s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.621917725s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.3( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.414126396s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.458480835s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.9( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.577599525s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 active pruub 94.622116089s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.9( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.932350159s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.976898193s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.9( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.577574730s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 94.622116089s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.5( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.577494621s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.622085571s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.5( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.577442169s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.622085571s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.9( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.932319641s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.976898193s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[10.16( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[2.1f( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[10.17( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[2.1d( empty local-lis/les=0/0 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576981544s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.621948242s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.c( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.577118874s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.622116089s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.b( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.577110291s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 active pruub 94.622131348s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.c( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.577092171s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.622116089s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576943398s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.621948242s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.8( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.413533211s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.458763123s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.2( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.931682587s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.976951599s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.8( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.413505554s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.458763123s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.2( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.931659698s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.976951599s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.b( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.577075958s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 94.622131348s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.e( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576780319s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.622207642s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.e( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576761246s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.622207642s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.a( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.413202286s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.458709717s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.a( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.413170815s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.458709717s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.3( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.931346893s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.976997375s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.3( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.931323051s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.976997375s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.f( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576616287s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.622322083s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.f( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576590538s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.622322083s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.1( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.576505661s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 active pruub 94.622375488s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.1( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.576484680s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 94.622375488s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.8( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.931019783s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.977027893s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.4( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576377869s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.622451782s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.8( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.930984497s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.977027893s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576566696s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.622657776s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.4( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576350212s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.622451782s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576531410s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.622657776s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.6( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576400757s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.622695923s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576262474s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.622810364s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.9( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576226234s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.622810364s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.1( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.413528442s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.458389282s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.1( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.411655426s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.458389282s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.1( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.929989815s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.977043152s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.9( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.411782265s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.458869934s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.9( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.411756516s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.458869934s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.1( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.929958344s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.977043152s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.575206757s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.622360229s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.f( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.575140953s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.622360229s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=51/54 n=1 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.575406075s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.622726440s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.3( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.575577736s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 active pruub 94.622924805s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.3( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.575547218s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 94.622924805s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.2( v 36'6 (0'0,36'6] local-lis/les=51/54 n=1 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.575372696s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.622726440s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.4( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.929426193s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.977066040s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.4( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.929380417s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.977066040s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.8( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.575085640s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.622856140s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.8( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.575048447s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.622856140s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.c( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.410864830s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.458793640s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.c( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.410836220s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.458793640s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.6( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.576383591s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.622695923s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.9( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.574762344s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.622879028s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=51/54 n=1 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.574763298s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.622917175s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.9( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.574728966s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.622879028s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.6( v 36'6 (0'0,36'6] local-lis/les=51/54 n=1 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.574738503s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.622917175s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.574752808s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 active pruub 94.623001099s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.6( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.928852081s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.977134705s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.574709892s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 94.623001099s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.6( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.928806305s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.977134705s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.a( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.574589729s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.622947693s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.a( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.574570656s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.622947693s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.e( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.411362648s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.459808350s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.e( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.411328316s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.459808350s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[4.1c( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[11.17( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[3.1f( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[7.1b( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.f( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.410181046s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.459327698s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.f( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.410141945s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.459327698s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/54 n=1 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.573670387s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.623115540s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=51/54 n=1 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.573637009s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.623115540s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.5( v 55'1441 (0'0,55'1441] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.573492050s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 43'1440 active pruub 94.623054504s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.5( v 55'1441 (0'0,55'1441] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.573286057s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 43'1440 unknown NOTIFY pruub 94.623054504s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.18( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.927329063s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.977165222s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.18( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.927297592s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.977165222s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.572861671s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.623039246s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.572832108s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.623039246s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.19( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.926762581s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.977256775s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.19( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.926731110s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.977256775s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.15( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.572430611s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.623085022s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.11( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.409148216s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.459808350s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.15( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.572394371s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.623085022s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.11( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.409106255s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.459808350s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.572228432s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.623123169s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.1b( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.572236061s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 active pruub 94.623153687s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.1b( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.572203636s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 94.623153687s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.1a( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.926316261s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.977287292s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.572193146s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.623123169s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.1a( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.926292419s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.977287292s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.1b( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.926723480s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.977973938s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[3.1e( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.1b( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.926679611s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.977973938s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.12( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.408578873s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.459999084s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.12( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.408554077s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.459999084s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.19( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.571656227s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 active pruub 94.623207092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.1c( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.925670624s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.977333069s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.19( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.571624756s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 94.623207092s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.1c( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.925647736s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.977333069s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.571278572s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.623245239s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.571246147s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.623184204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.571245193s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.623245239s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.571108818s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 active pruub 94.623275757s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.571082115s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 94.623275757s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.11( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.571031570s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.623268127s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.11( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.571001053s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.623268127s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.1e( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.925375938s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.977920532s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.1e( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.925340652s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.977920532s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.15( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.407316208s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.460052490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.16( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.407238960s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.460006714s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.15( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.407283783s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.460052490s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.16( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.407199860s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.460006714s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.570463181s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.623329163s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.570425034s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.623329163s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.1f( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.924932480s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 active pruub 96.977928162s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[11.1f( empty local-lis/les=55/56 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=12.924913406s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=0'0 unknown NOTIFY pruub 96.977928162s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.13( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.570228577s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 active pruub 94.623344421s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.1d( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.570235252s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 active pruub 94.623359680s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[7.13( empty local-lis/les=51/54 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.570209503s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=0'0 unknown NOTIFY pruub 94.623344421s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[9.1d( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=10.570199966s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 94.623359680s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.17( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.406849861s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 active pruub 96.460060120s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[3.17( empty local-lis/les=47/48 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=12.406829834s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 unknown NOTIFY pruub 96.460060120s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.570082664s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 active pruub 94.623359680s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.570062637s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.623359680s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[4.14( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[7.1a( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[8.15( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[11.15( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=51/54 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=10.569211960s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 unknown NOTIFY pruub 94.623184204s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[4.12( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[3.1d( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[4.f( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[8.14( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[11.12( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[3.1b( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[8.11( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[7.1f( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[11.11( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[8.10( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[8.12( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[4.10( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[3.18( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[4.d( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[7.1c( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[3.7( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[7.2( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[9.11( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[7.1( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[6.1( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[11.d( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[4.4( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[11.10( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[3.5( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[4.9( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[4.2( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[4.5( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[11.b( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[8.d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[7.5( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[11.9( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[6.7( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[7.c( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[3.8( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[11.2( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[7.18( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[7.3( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[4.7( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[7.e( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[8.c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[6.3( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[11.e( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[3.6( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[11.3( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[3.3( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 57 pg[4.8( empty local-lis/les=0/0 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[9.9( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[11.8( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[8.e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[9.b( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[8.2( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[7.8( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[3.a( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[9.d( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[3.e( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[7.f( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[7.a( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[8.4( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[9.1( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[11.18( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[8.1b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[7.15( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[7.4( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[8.b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[3.11( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[3.1( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[8.9( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[11.1a( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[11.1b( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[3.9( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[11.1c( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[7.11( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[8.f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[11.1e( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[9.3( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[3.16( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[11.1f( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 57 pg[8.1c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[3.c( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[7.6( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[8.6( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[7.9( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[11.6( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[3.f( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[9.5( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[11.19( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[8.1a( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[3.12( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[8.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[3.15( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[8.1d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[7.13( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[9.1d( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[3.17( empty local-lis/les=0/0 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 57 pg[8.18( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 30 23:20:23 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.11( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[7.1a( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.11( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.13( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.13( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.d( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.d( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.9( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.9( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.11( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.11( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.5( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.5( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.b( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.b( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.9( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.9( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.d( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.d( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.1( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.1( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.3( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.3( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.1d( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.1d( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[8.15( v 36'6 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[4.18( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[4.1a( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[3.1e( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[11.15( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[11.12( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[8.11( v 36'6 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[3.1d( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[11.3( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[3.8( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[7.c( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[7.1( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[11.d( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[3.7( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[4.1b( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[3.5( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[11.b( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.b( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.1( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.1( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[8.14( v 36'6 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.b( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.3( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.3( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.5( v 55'1441 (0'0,55'1441] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 43'1440 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.5( v 55'1441 (0'0,55'1441] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 43'1440 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.1b( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.1b( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.19( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.19( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[8.2( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=57/58 n=1 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[4.e( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[4.1( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[7.5( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[11.9( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[7.2( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[11.2( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[7.e( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[11.8( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[7.8( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[4.a( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[8.4( v 36'6 (0'0,36'6] local-lis/les=57/58 n=1 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.1d( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[9.1d( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[7.a( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[3.e( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[3.11( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[7.15( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[8.1b( v 36'6 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[11.18( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[11.1b( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[11.1a( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[4.13( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[11.1c( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[8.1c( v 36'6 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[11.1f( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[7.11( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[3.16( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[11.1e( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[4.11( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[8.d( v 36'6 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[11.11( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[8.12( v 36'6 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[3.18( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[4.1c( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 58 pg[7.1c( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[5.19( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[10.14( v 55'19 lc 40'7 (0'0,55'19] local-lis/les=57/58 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=55'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[5.1d( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[10.12( v 55'19 lc 42'17 (0'0,55'19] local-lis/les=57/58 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=55'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[10.13( v 42'18 (0'0,42'18] local-lis/les=57/58 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[2.1b( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[10.10( v 42'18 (0'0,42'18] local-lis/les=57/58 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[10.11( v 42'18 (0'0,42'18] local-lis/les=57/58 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[5.18( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[5.1( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[2.6( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/58 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[2.7( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[10.f( v 42'18 (0'0,42'18] local-lis/les=57/58 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[4.4( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[2.4( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[4.f( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[2.9( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[4.2( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[4.d( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[6.d( v 37'39 lc 36'13 (0'0,37'39] local-lis/les=57/58 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[6.f( v 37'39 lc 36'1 (0'0,37'39] local-lis/les=57/58 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[5.1a( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[5.c( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[2.a( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=57/58 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[2.5( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[10.2( v 42'18 (0'0,42'18] local-lis/les=57/58 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[6.5( v 37'39 lc 36'11 (0'0,37'39] local-lis/les=57/58 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[4.7( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[6.7( v 37'39 lc 36'21 (0'0,37'39] local-lis/les=57/58 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[7.1b( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[10.16( v 42'18 (0'0,42'18] local-lis/les=57/58 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[10.1( v 42'18 (0'0,42'18] local-lis/les=57/58 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[11.17( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[2.b( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[2.8( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[5.2( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[10.e( v 55'19 lc 40'4 (0'0,55'19] local-lis/les=57/58 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=55'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[10.d( v 55'19 lc 40'5 (0'0,55'19] local-lis/les=57/58 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=55'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[5.3( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[10.17( v 42'18 (0'0,42'18] local-lis/les=57/58 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[5.5( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[2.2( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[2.1f( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[2.1c( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[10.4( v 42'18 (0'0,42'18] local-lis/les=57/58 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[2.1d( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[10.15( v 55'19 lc 40'3 (0'0,55'19] local-lis/les=57/58 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=55'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[5.7( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[10.8( v 42'18 (0'0,42'18] local-lis/les=57/58 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[10.9( v 55'19 lc 40'8 (0'0,55'19] local-lis/les=57/58 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=55'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[5.1e( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[2.19( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[2.18( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[5.4( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[3.1b( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[7.1f( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[8.10( v 36'6 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[3.f( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[11.4( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[7.4( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[4.5( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[10.b( v 42'18 (0'0,42'18] local-lis/les=57/58 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[2.3( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[5.f( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[2.d( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[3.c( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[4.8( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[10.6( v 42'18 (0'0,42'18] local-lis/les=57/58 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[5.9( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[8.b( v 36'6 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[5.16( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[10.19( v 42'18 (0'0,42'18] local-lis/les=57/58 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[7.18( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[3.1( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[10.1a( v 42'18 (0'0,42'18] local-lis/les=57/58 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [1] r=0 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[4.14( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[5.12( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[11.14( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[2.15( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[4.12( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[3.1f( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[7.9( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[7.6( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[8.9( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[8.6( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=57/58 n=1 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=36'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[2.f( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[10.7( v 42'18 (0'0,42'18] local-lis/les=57/58 n=1 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[3.3( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[11.6( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[11.10( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[11.e( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[3.6( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[5.13( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[11.f( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[4.10( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[2.17( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[4.9( empty local-lis/les=57/58 n=0 ec=47/20 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[8.e( v 36'6 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[5.11( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[8.c( v 36'6 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[7.3( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[7.f( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[3.17( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[11.1( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[3.a( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[3.9( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[7.13( empty local-lis/les=57/58 n=0 ec=51/26 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[2.16( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[10.1e( v 42'18 (0'0,42'18] local-lis/les=57/58 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[8.1d( v 36'6 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[3.15( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[8.18( v 36'6 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[3.12( empty local-lis/les=57/58 n=0 ec=47/18 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[5.14( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[5.15( empty local-lis/les=57/58 n=0 ec=49/22 lis/c=49/49 les/c/f=53/53/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[2.13( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[11.19( empty local-lis/les=57/58 n=0 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[8.1a( v 36'6 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[2.11( empty local-lis/les=57/58 n=0 ec=45/16 lis/c=45/45 les/c/f=47/47/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[8.f( v 36'6 lc 0'0 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=36'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 58 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=57/58 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 58 pg[8.1f( v 36'6 (0'0,36'6] local-lis/les=57/58 n=0 ec=51/35 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=36'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v118: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 30 23:20:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 30 23:20:24 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 30 23:20:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 30 23:20:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 30 23:20:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 30 23:20:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 30 23:20:25 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 30 23:20:25 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 59 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=13.606527328s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=37'39 lcod 0'0 active pruub 104.983612061s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:25 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 59 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=13.606420517s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 104.983612061s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:25 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 59 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=13.639940262s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=37'39 lcod 0'0 active pruub 105.017349243s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:25 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 59 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=13.639885902s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 105.017349243s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:25 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 59 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=13.639348030s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=37'39 lcod 0'0 active pruub 105.017532349s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:25 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 59 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=13.639318466s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 105.017532349s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:25 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 59 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=13.638941765s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=37'39 lcod 0'0 active pruub 105.017494202s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:25 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 59 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=59 pruub=13.638902664s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 105.017494202s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[6.a( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[6.2( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:25 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 30 23:20:25 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[9.1d( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[9.1b( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[9.9( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[9.3( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[9.19( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=13}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[9.13( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[9.d( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[9.11( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[9.1( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=10}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[9.5( v 55'1441 (0'0,55'1441] local-lis/les=58/59 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=55'1441 lcod 43'1440 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:25 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 59 pg[9.b( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 30 23:20:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 30 23:20:26 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 30 23:20:26 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 60 pg[9.1b( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=60 pruub=15.117921829s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=43'1440 lcod 0'0 active pruub 102.185905457s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:26 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 60 pg[9.1b( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=60 pruub=15.117837906s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 102.185905457s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:26 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 60 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=60 pruub=15.117654800s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=43'1440 lcod 0'0 active pruub 102.185966492s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:26 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 60 pg[9.1d( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=60 pruub=15.117473602s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=43'1440 lcod 0'0 active pruub 102.185859680s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:26 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 60 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=60 pruub=15.117574692s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 102.185966492s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:26 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 60 pg[9.1d( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=60 pruub=15.117278099s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 102.185859680s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:26 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 60 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=59/60 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:26 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 60 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=59/60 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:26 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 60 pg[6.e( v 37'39 lc 36'19 (0'0,37'39] local-lis/les=59/60 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:26 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 60 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=59/60 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:26 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 60 pg[9.1b( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:26 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 60 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:26 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 60 pg[9.1b( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:26 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 60 pg[9.1d( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:26 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 60 pg[9.1d( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:26 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 60 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:26 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 30 23:20:26 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 30 23:20:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v121: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 108 B/s, 0 objects/s recovering
Jan 30 23:20:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 30 23:20:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 30 23:20:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 30 23:20:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 30 23:20:26 np0005603435 ceph-mgr[75599]: [progress INFO root] Completed event 787a4be9-4e74-480f-a3af-0c3a4bf1bb3f (Global Recovery Event) in 15 seconds
Jan 30 23:20:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 30 23:20:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 30 23:20:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 30 23:20:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 30 23:20:27 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.111041069s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 active pruub 102.186172485s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.110910416s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 102.186172485s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.11( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.110869408s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 active pruub 102.186203003s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.11( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.110765457s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 102.186203003s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.110409737s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 active pruub 102.186256409s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.110336304s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 102.186256409s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.13( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.109725952s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 active pruub 102.186126709s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.13( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.109627724s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 102.186126709s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.d( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.109632492s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 active pruub 102.186187744s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=57/58 n=2 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.973135948s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=37'39 active pruub 101.049713135s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.d( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.109467506s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 102.186187744s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.109394073s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 active pruub 102.186264038s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=57/58 n=2 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.972833633s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=37'39 unknown NOTIFY pruub 101.049713135s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.109316826s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 102.186264038s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.b( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.109291077s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 active pruub 102.186401367s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.9( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.108871460s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 active pruub 102.186027527s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.b( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.109246254s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 102.186401367s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.9( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.108766556s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 102.186027527s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.972446442s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=37'39 active pruub 101.049896240s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.972402573s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=37'39 unknown NOTIFY pruub 101.049896240s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.1( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.108660698s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 active pruub 102.186210632s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.1( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.108523369s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 102.186210632s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.972522736s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=37'39 active pruub 101.050292969s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.972486496s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=37'39 unknown NOTIFY pruub 101.050292969s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.3( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.108048439s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 active pruub 102.186042786s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.107823372s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 active pruub 102.185920715s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.3( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.107975006s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 102.186042786s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.107766151s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 102.185920715s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.979051590s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=37'39 active pruub 101.057289124s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.978981972s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=37'39 unknown NOTIFY pruub 101.057289124s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.5( v 59'1443 (0'0,59'1443] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.107702255s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=55'1441 lcod 59'1442 active pruub 102.186286926s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.19( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.107324600s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 active pruub 102.186096191s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.5( v 59'1443 (0'0,59'1443] local-lis/les=58/59 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.107539177s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=55'1441 lcod 59'1442 unknown NOTIFY pruub 102.186286926s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 61 pg[9.19( v 43'1440 (0'0,43'1440] local-lis/les=58/59 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61 pruub=14.107195854s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 102.186096191s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.1( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.1( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[6.3( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.9( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.9( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[6.7( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.d( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.d( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.b( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.b( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.11( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.3( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.11( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.3( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.5( v 59'1443 (0'0,59'1443] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 pct=0'0 crt=55'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.5( v 59'1443 (0'0,59'1443] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=55'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.13( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.13( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.19( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.19( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.1d( v 43'1440 (0'0,43'1440] local-lis/les=60/61 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.1b( v 43'1440 (0'0,43'1440] local-lis/les=60/61 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 61 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=60/61 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:27 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 30 23:20:27 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 30 23:20:27 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 30 23:20:27 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 30 23:20:27 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 30 23:20:27 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 30 23:20:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 30 23:20:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 30 23:20:28 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[9.19( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[9.d( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[9.3( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[9.9( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=61/62 n=2 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=61/62 n=1 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[9.1( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[9.b( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[9.13( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[9.5( v 59'1443 (0'0,59'1443] local-lis/les=61/62 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=59'1443 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=7 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[6.f( v 37'39 lc 36'1 (0'0,37'39] local-lis/les=61/62 n=1 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[6.7( v 37'39 lc 36'21 (0'0,37'39] local-lis/les=61/62 n=1 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 62 pg[9.11( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=8 ec=53/37 lis/c=58/53 les/c/f=59/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v124: 305 pgs: 3 peering, 1 active+clean+scrubbing, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 596 B/s, 8 objects/s recovering
Jan 30 23:20:28 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 30 23:20:28 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 30 23:20:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:20:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v125: 305 pgs: 3 peering, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 374 B/s, 6 objects/s recovering
Jan 30 23:20:30 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 30 23:20:30 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 30 23:20:31 np0005603435 ceph-mgr[75599]: [progress INFO root] Writing back 16 completed events
Jan 30 23:20:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 30 23:20:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v126: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s, 3 keys/s, 23 objects/s recovering
Jan 30 23:20:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 30 23:20:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 30 23:20:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 30 23:20:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 30 23:20:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 30 23:20:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 30 23:20:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 30 23:20:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 30 23:20:32 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 30 23:20:32 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 63 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=63 pruub=13.693193436s) [1] r=-1 lpr=63 pi=[49,63)/1 crt=37'39 lcod 0'0 active pruub 112.984107971s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:32 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 63 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=49/51 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=63 pruub=13.693025589s) [1] r=-1 lpr=63 pi=[49,63)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 112.984107971s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:32 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 63 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=63 pruub=13.725774765s) [1] r=-1 lpr=63 pi=[49,63)/1 crt=37'39 lcod 0'0 active pruub 113.017898560s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:32 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 63 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=63 pruub=13.725632668s) [1] r=-1 lpr=63 pi=[49,63)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 113.017898560s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:32 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:20:32 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 30 23:20:32 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 30 23:20:32 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 63 pg[6.c( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=63) [1] r=0 lpr=63 pi=[49,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:33 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 63 pg[6.4( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=63) [1] r=0 lpr=63 pi=[49,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 30 23:20:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 30 23:20:34 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 30 23:20:34 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 30 23:20:34 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 30 23:20:34 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 64 pg[6.4( v 37'39 lc 36'15 (0'0,37'39] local-lis/les=63/64 n=2 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=63) [1] r=0 lpr=63 pi=[49,63)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:34 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 64 pg[6.c( v 37'39 lc 36'17 (0'0,37'39] local-lis/les=63/64 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=63) [1] r=0 lpr=63 pi=[49,63)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:20:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v129: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 798 B/s, 3 keys/s, 18 objects/s recovering
Jan 30 23:20:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 30 23:20:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 30 23:20:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 30 23:20:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 30 23:20:34 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 30 23:20:34 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 30 23:20:34 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 30 23:20:34 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 30 23:20:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 30 23:20:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 30 23:20:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 30 23:20:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 30 23:20:35 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 30 23:20:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 30 23:20:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 30 23:20:35 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 65 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=12.319276810s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=37'39 active pruub 109.049903870s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:35 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 65 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=12.319193840s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=37'39 unknown NOTIFY pruub 109.049903870s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:35 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 65 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=57/58 n=2 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=12.319054604s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=37'39 active pruub 109.050407410s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:35 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 65 pg[6.d( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:35 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 65 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=57/58 n=2 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=65 pruub=12.318706512s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=37'39 unknown NOTIFY pruub 109.050407410s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:35 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 65 pg[6.5( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 30 23:20:36 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 30 23:20:36 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 30 23:20:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 30 23:20:36 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 30 23:20:36 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 66 pg[6.5( v 37'39 lc 36'11 (0'0,37'39] local-lis/les=65/66 n=2 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:36 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 66 pg[6.d( v 37'39 lc 36'13 (0'0,37'39] local-lis/les=65/66 n=1 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v132: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:20:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 30 23:20:36 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 30 23:20:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 30 23:20:36 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 30 23:20:36 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 30 23:20:36 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 30 23:20:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:20:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:20:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:20:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:20:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:20:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:20:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 30 23:20:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 30 23:20:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 30 23:20:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 30 23:20:37 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 30 23:20:37 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 30 23:20:37 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 30 23:20:37 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 67 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=67 pruub=12.189730644s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=43'1440 lcod 0'0 active pruub 110.619293213s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:37 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 67 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=67 pruub=12.189615250s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 110.619293213s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:37 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:37 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 67 pg[9.e( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=67 pruub=12.192975044s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=43'1440 lcod 0'0 active pruub 110.623924255s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:37 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 67 pg[9.e( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=67 pruub=12.192921638s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 110.623924255s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:37 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 67 pg[9.6( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=67 pruub=12.192347527s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=43'1440 lcod 0'0 active pruub 110.623779297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:37 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:37 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 67 pg[9.6( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=67 pruub=12.192139626s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 110.623779297s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:37 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:37 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 67 pg[9.1e( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=67 pruub=12.192494392s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=43'1440 lcod 0'0 active pruub 110.624816895s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:37 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 67 pg[9.1e( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=67 pruub=12.192449570s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 110.624816895s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:37 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:37 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 30 23:20:37 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 30 23:20:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 30 23:20:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 30 23:20:38 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 30 23:20:38 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 68 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] r=0 lpr=68 pi=[53,68)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:38 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[53,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:38 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 68 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] r=0 lpr=68 pi=[53,68)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:38 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[53,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:38 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[53,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:38 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[53,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:38 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 68 pg[9.e( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] r=0 lpr=68 pi=[53,68)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:38 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 68 pg[9.e( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] r=0 lpr=68 pi=[53,68)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:38 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[53,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:38 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[53,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:38 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[53,68)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:38 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[53,68)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:38 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 68 pg[9.6( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] r=0 lpr=68 pi=[53,68)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:38 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 68 pg[9.6( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] r=0 lpr=68 pi=[53,68)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:38 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 68 pg[9.1e( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] r=0 lpr=68 pi=[53,68)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:38 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 68 pg[9.1e( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] r=0 lpr=68 pi=[53,68)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:38 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 30 23:20:38 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 30 23:20:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v135: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 445 B/s, 2 objects/s recovering
Jan 30 23:20:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 30 23:20:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 30 23:20:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 30 23:20:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 30 23:20:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:20:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 30 23:20:39 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 30 23:20:39 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 30 23:20:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 30 23:20:39 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 30 23:20:39 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 30 23:20:39 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 30 23:20:39 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 30 23:20:39 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 30 23:20:39 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 69 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=69 pruub=12.307373047s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=43'1440 active pruub 118.411003113s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:39 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 69 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=69 pruub=12.307314873s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=43'1440 unknown NOTIFY pruub 118.411003113s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:39 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 69 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=8 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=69 pruub=12.306808472s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=43'1440 active pruub 118.410675049s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:39 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 69 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=8 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=69 pruub=12.306783676s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=43'1440 unknown NOTIFY pruub 118.410675049s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:39 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 69 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=8 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=69 pruub=12.306513786s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=43'1440 active pruub 118.410614014s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:39 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 69 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=8 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=69 pruub=12.306489944s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=43'1440 unknown NOTIFY pruub 118.410614014s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:39 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=69) [2] r=0 lpr=69 pi=[61,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:39 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 69 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=60/61 n=7 ec=53/37 lis/c=60/60 les/c/f=61/61/0 sis=69 pruub=11.310826302s) [2] r=-1 lpr=69 pi=[60,69)/1 crt=43'1440 active pruub 117.415908813s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:39 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 69 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=60/61 n=7 ec=53/37 lis/c=60/60 les/c/f=61/61/0 sis=69 pruub=11.310751915s) [2] r=-1 lpr=69 pi=[60,69)/1 crt=43'1440 unknown NOTIFY pruub 117.415908813s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:39 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=69) [2] r=0 lpr=69 pi=[61,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:39 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=69) [2] r=0 lpr=69 pi=[61,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:39 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=60/60 les/c/f=61/61/0 sis=69) [2] r=0 lpr=69 pi=[60,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:39 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 69 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=68/69 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[53,68)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:39 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 69 pg[9.e( v 43'1440 (0'0,43'1440] local-lis/les=68/69 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[53,68)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:39 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 69 pg[9.1e( v 43'1440 (0'0,43'1440] local-lis/les=68/69 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[53,68)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:39 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 69 pg[9.6( v 43'1440 (0'0,43'1440] local-lis/les=68/69 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[53,68)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 30 23:20:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 30 23:20:40 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 30 23:20:40 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 70 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=8 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] r=0 lpr=70 pi=[61,70)/1 crt=43'1440 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:40 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 70 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] r=0 lpr=70 pi=[61,70)/1 crt=43'1440 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:40 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 70 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] r=0 lpr=70 pi=[61,70)/1 crt=43'1440 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:40 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 70 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=8 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] r=0 lpr=70 pi=[61,70)/1 crt=43'1440 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:40 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 70 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=8 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] r=0 lpr=70 pi=[61,70)/1 crt=43'1440 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:40 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 70 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=8 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] r=0 lpr=70 pi=[61,70)/1 crt=43'1440 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:40 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 70 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=60/61 n=7 ec=53/37 lis/c=60/60 les/c/f=61/61/0 sis=70) [2]/[0] r=0 lpr=70 pi=[60,70)/1 crt=43'1440 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:40 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 70 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=60/61 n=7 ec=53/37 lis/c=60/60 les/c/f=61/61/0 sis=70) [2]/[0] r=0 lpr=70 pi=[60,70)/1 crt=43'1440 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:40 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 70 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=60/60 les/c/f=61/61/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[60,70)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:40 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 70 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[61,70)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:40 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 70 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=60/60 les/c/f=61/61/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[60,70)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:40 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 70 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[61,70)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:40 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 70 pg[9.e( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:40 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 70 pg[9.e( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:40 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 70 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[61,70)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:40 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 70 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[61,70)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:40 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 70 pg[9.e( v 43'1440 (0'0,43'1440] local-lis/les=68/69 n=8 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=70 pruub=15.676263809s) [2] async=[2] r=-1 lpr=70 pi=[53,70)/1 crt=43'1440 lcod 0'0 active pruub 116.786224365s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:40 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 70 pg[9.e( v 43'1440 (0'0,43'1440] local-lis/les=68/69 n=8 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=70 pruub=15.676176071s) [2] r=-1 lpr=70 pi=[53,70)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 116.786224365s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:40 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 70 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[61,70)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:40 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 70 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:40 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 70 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:40 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 70 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[61,70)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:40 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 70 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=68/69 n=7 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=70 pruub=15.674889565s) [2] async=[2] r=-1 lpr=70 pi=[53,70)/1 crt=43'1440 lcod 0'0 active pruub 116.786216736s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:40 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 70 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=68/69 n=7 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=70 pruub=15.674467087s) [2] r=-1 lpr=70 pi=[53,70)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 116.786216736s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:40 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 70 pg[9.1e( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:40 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 70 pg[9.1e( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:40 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 70 pg[9.1e( v 43'1440 (0'0,43'1440] local-lis/les=68/69 n=7 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=70 pruub=15.705142975s) [2] async=[2] r=-1 lpr=70 pi=[53,70)/1 crt=43'1440 lcod 0'0 active pruub 116.818176270s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:40 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 70 pg[9.1e( v 43'1440 (0'0,43'1440] local-lis/les=68/69 n=7 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=70 pruub=15.705040932s) [2] r=-1 lpr=70 pi=[53,70)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 116.818176270s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v138: 305 pgs: 4 unknown, 1 active+clean+scrubbing, 300 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 154 B/s, 0 objects/s recovering
Jan 30 23:20:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 30 23:20:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 30 23:20:41 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 30 23:20:41 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 71 pg[9.6( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=71) [2] r=0 lpr=71 pi=[53,71)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:41 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 71 pg[9.6( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=71) [2] r=0 lpr=71 pi=[53,71)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:41 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 71 pg[9.6( v 43'1440 (0'0,43'1440] local-lis/les=68/69 n=8 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=71 pruub=14.596689224s) [2] async=[2] r=-1 lpr=71 pi=[53,71)/1 crt=43'1440 lcod 0'0 active pruub 116.818244934s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:41 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 71 pg[9.6( v 43'1440 (0'0,43'1440] local-lis/les=68/69 n=8 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=71 pruub=14.596587181s) [2] r=-1 lpr=71 pi=[53,71)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 116.818244934s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:41 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 71 pg[9.e( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=8 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:41 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 71 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=7 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:41 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 71 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=8 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[61,70)/1 crt=43'1440 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:41 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 71 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=8 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[61,70)/1 crt=43'1440 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:41 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 71 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=7 ec=53/37 lis/c=60/60 les/c/f=61/61/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[60,70)/1 crt=43'1440 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:41 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 71 pg[9.1e( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=7 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=70) [2] r=0 lpr=70 pi=[53,70)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:41 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 71 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[61,70)/1 crt=43'1440 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:41 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 30 23:20:41 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 30 23:20:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 30 23:20:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 30 23:20:42 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 30 23:20:42 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 72 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=70/61 les/c/f=71/62/0 sis=72) [2] r=0 lpr=72 pi=[61,72)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:42 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 72 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=70/61 les/c/f=71/62/0 sis=72) [2] r=0 lpr=72 pi=[61,72)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:42 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 72 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=70/61 les/c/f=71/62/0 sis=72) [2] r=0 lpr=72 pi=[61,72)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:42 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 72 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=70/61 les/c/f=71/62/0 sis=72) [2] r=0 lpr=72 pi=[61,72)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:42 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 72 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=70/61 les/c/f=71/62/0 sis=72) [2] r=0 lpr=72 pi=[61,72)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:42 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 72 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=70/61 les/c/f=71/62/0 sis=72) [2] r=0 lpr=72 pi=[61,72)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:42 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 72 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=70/60 les/c/f=71/61/0 sis=72) [2] r=0 lpr=72 pi=[60,72)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:42 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 72 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=70/60 les/c/f=71/61/0 sis=72) [2] r=0 lpr=72 pi=[60,72)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:42 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 72 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=8 ec=53/37 lis/c=70/61 les/c/f=71/62/0 sis=72 pruub=15.012476921s) [2] async=[2] r=-1 lpr=72 pi=[61,72)/1 crt=43'1440 active pruub 123.555351257s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:42 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 72 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=8 ec=53/37 lis/c=70/61 les/c/f=71/62/0 sis=72 pruub=15.012305260s) [2] r=-1 lpr=72 pi=[61,72)/1 crt=43'1440 unknown NOTIFY pruub 123.555351257s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:42 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 72 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=7 ec=53/37 lis/c=70/61 les/c/f=71/62/0 sis=72 pruub=15.012260437s) [2] async=[2] r=-1 lpr=72 pi=[61,72)/1 crt=43'1440 active pruub 123.555404663s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:42 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 72 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=7 ec=53/37 lis/c=70/60 les/c/f=71/61/0 sis=72 pruub=15.012034416s) [2] async=[2] r=-1 lpr=72 pi=[60,72)/1 crt=43'1440 active pruub 123.555374146s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:42 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 72 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=8 ec=53/37 lis/c=70/61 les/c/f=71/62/0 sis=72 pruub=15.012012482s) [2] async=[2] r=-1 lpr=72 pi=[61,72)/1 crt=43'1440 active pruub 123.555389404s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:42 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 72 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=7 ec=53/37 lis/c=70/61 les/c/f=71/62/0 sis=72 pruub=15.012156487s) [2] r=-1 lpr=72 pi=[61,72)/1 crt=43'1440 unknown NOTIFY pruub 123.555404663s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:42 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 72 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=7 ec=53/37 lis/c=70/60 les/c/f=71/61/0 sis=72 pruub=15.011947632s) [2] r=-1 lpr=72 pi=[60,72)/1 crt=43'1440 unknown NOTIFY pruub 123.555374146s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:42 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 72 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=8 ec=53/37 lis/c=70/61 les/c/f=71/62/0 sis=72 pruub=15.011899948s) [2] r=-1 lpr=72 pi=[61,72)/1 crt=43'1440 unknown NOTIFY pruub 123.555389404s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:42 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 72 pg[9.6( v 43'1440 (0'0,43'1440] local-lis/les=71/72 n=8 ec=53/37 lis/c=68/53 les/c/f=69/54/0 sis=71) [2] r=0 lpr=71 pi=[53,71)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v141: 305 pgs: 1 active+recovery_wait+remapped, 2 active+remapped, 1 active+recovering+remapped, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 9/282 objects misplaced (3.191%); 175 B/s, 4 objects/s recovering
Jan 30 23:20:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 30 23:20:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 30 23:20:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 30 23:20:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 30 23:20:42 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 30 23:20:42 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 30 23:20:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 30 23:20:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 30 23:20:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 30 23:20:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 30 23:20:43 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 30 23:20:43 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 73 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=73 pruub=11.466723442s) [2] r=-1 lpr=73 pi=[49,73)/1 crt=37'39 lcod 0'0 active pruub 121.017944336s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:43 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 73 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=49/51 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=73 pruub=11.466666222s) [2] r=-1 lpr=73 pi=[49,73)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 121.017944336s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:43 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 73 pg[6.8( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=73) [2] r=0 lpr=73 pi=[49,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:43 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 30 23:20:43 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 30 23:20:43 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 30 23:20:43 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 30 23:20:43 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 73 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=72/73 n=7 ec=53/37 lis/c=70/60 les/c/f=71/61/0 sis=72) [2] r=0 lpr=72 pi=[60,72)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:43 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 73 pg[9.17( v 43'1440 (0'0,43'1440] local-lis/les=72/73 n=7 ec=53/37 lis/c=70/61 les/c/f=71/62/0 sis=72) [2] r=0 lpr=72 pi=[61,72)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:43 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 73 pg[9.f( v 43'1440 (0'0,43'1440] local-lis/les=72/73 n=8 ec=53/37 lis/c=70/61 les/c/f=71/62/0 sis=72) [2] r=0 lpr=72 pi=[61,72)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:43 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 73 pg[9.7( v 43'1440 (0'0,43'1440] local-lis/les=72/73 n=8 ec=53/37 lis/c=70/61 les/c/f=71/62/0 sis=72) [2] r=0 lpr=72 pi=[61,72)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:43 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 73 pg[9.8( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=14.390373230s) [2] r=-1 lpr=73 pi=[53,73)/1 crt=43'1440 lcod 0'0 active pruub 118.623786926s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:43 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 73 pg[9.8( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=14.390334129s) [2] r=-1 lpr=73 pi=[53,73)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 118.623786926s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:43 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 73 pg[9.18( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=14.390881538s) [2] r=-1 lpr=73 pi=[53,73)/1 crt=43'1440 lcod 0'0 active pruub 118.624732971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:43 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 73 pg[9.18( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=14.390865326s) [2] r=-1 lpr=73 pi=[53,73)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 118.624732971s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:43 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 73 pg[9.8( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=73) [2] r=0 lpr=73 pi=[53,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:43 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 73 pg[9.18( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=73) [2] r=0 lpr=73 pi=[53,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:20:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 30 23:20:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 30 23:20:44 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 30 23:20:44 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 30 23:20:44 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 74 pg[9.8( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[53,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:44 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 74 pg[9.8( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[53,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:44 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 74 pg[9.18( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[53,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:44 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 74 pg[9.18( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[53,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:44 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 74 pg[9.8( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=74) [2]/[1] r=0 lpr=74 pi=[53,74)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:44 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 30 23:20:44 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 74 pg[9.8( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=74) [2]/[1] r=0 lpr=74 pi=[53,74)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:44 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 74 pg[9.18( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=74) [2]/[1] r=0 lpr=74 pi=[53,74)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:44 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 74 pg[9.18( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=74) [2]/[1] r=0 lpr=74 pi=[53,74)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:44 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 74 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=73/74 n=1 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=73) [2] r=0 lpr=73 pi=[49,73)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v144: 305 pgs: 1 active+recovery_wait+remapped, 2 active+remapped, 1 active+recovering+remapped, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.0 KiB/s wr, 118 op/s; 9/282 objects misplaced (3.191%); 466 B/s, 11 objects/s recovering
Jan 30 23:20:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 30 23:20:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 30 23:20:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 30 23:20:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 30 23:20:45 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 30 23:20:45 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 30 23:20:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 30 23:20:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 30 23:20:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 30 23:20:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 30 23:20:45 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 30 23:20:45 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 30 23:20:45 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 75 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=75 pruub=10.985744476s) [0] r=-1 lpr=75 pi=[57,75)/1 crt=37'39 lcod 0'0 active pruub 117.057098389s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:45 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 75 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=75 pruub=10.985676765s) [0] r=-1 lpr=75 pi=[57,75)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 117.057098389s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:45 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 30 23:20:45 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 75 pg[9.18( v 43'1440 (0'0,43'1440] local-lis/les=74/75 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[53,74)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:45 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 75 pg[9.8( v 43'1440 (0'0,43'1440] local-lis/les=74/75 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[53,74)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:45 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 75 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=75) [0] r=0 lpr=75 pi=[57,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 30 23:20:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 30 23:20:46 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 76 pg[9.8( v 43'1440 (0'0,43'1440] local-lis/les=74/75 n=8 ec=53/37 lis/c=74/53 les/c/f=75/54/0 sis=76 pruub=15.004263878s) [2] async=[2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1440 lcod 0'0 active pruub 122.077217102s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:46 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 76 pg[9.8( v 43'1440 (0'0,43'1440] local-lis/les=74/75 n=8 ec=53/37 lis/c=74/53 les/c/f=75/54/0 sis=76 pruub=15.004107475s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 122.077217102s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:46 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 76 pg[9.18( v 43'1440 (0'0,43'1440] local-lis/les=74/75 n=7 ec=53/37 lis/c=74/53 les/c/f=75/54/0 sis=76 pruub=14.999465942s) [2] async=[2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1440 lcod 0'0 active pruub 122.073440552s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:46 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 30 23:20:46 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 76 pg[9.18( v 43'1440 (0'0,43'1440] local-lis/les=74/75 n=7 ec=53/37 lis/c=74/53 les/c/f=75/54/0 sis=76 pruub=14.998775482s) [2] r=-1 lpr=76 pi=[53,76)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 122.073440552s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:46 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 76 pg[9.8( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=74/53 les/c/f=75/54/0 sis=76) [2] r=0 lpr=76 pi=[53,76)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:46 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 76 pg[9.8( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=74/53 les/c/f=75/54/0 sis=76) [2] r=0 lpr=76 pi=[53,76)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:46 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 76 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=75/76 n=1 ec=49/24 lis/c=57/57 les/c/f=58/58/0 sis=75) [0] r=0 lpr=75 pi=[57,75)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:46 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 76 pg[9.18( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=74/53 les/c/f=75/54/0 sis=76) [2] r=0 lpr=76 pi=[53,76)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:46 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 76 pg[9.18( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=74/53 les/c/f=75/54/0 sis=76) [2] r=0 lpr=76 pi=[53,76)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:46 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 30 23:20:46 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 30 23:20:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v147: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 322 B/s, 9 objects/s recovering
Jan 30 23:20:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 30 23:20:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 30 23:20:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 30 23:20:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 30 23:20:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 30 23:20:47 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 30 23:20:47 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 30 23:20:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 30 23:20:47 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 30 23:20:47 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 30 23:20:47 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 30 23:20:47 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 77 pg[9.8( v 43'1440 (0'0,43'1440] local-lis/les=76/77 n=8 ec=53/37 lis/c=74/53 les/c/f=75/54/0 sis=76) [2] r=0 lpr=76 pi=[53,76)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:47 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 77 pg[9.18( v 43'1440 (0'0,43'1440] local-lis/les=76/77 n=7 ec=53/37 lis/c=74/53 les/c/f=75/54/0 sis=76) [2] r=0 lpr=76 pi=[53,76)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:47 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 30 23:20:47 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 30 23:20:47 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 30 23:20:47 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 77 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=59/60 n=1 ec=49/24 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=10.459820747s) [0] r=-1 lpr=77 pi=[59,77)/1 crt=37'39 lcod 0'0 active pruub 119.070091248s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:47 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 77 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=59/60 n=1 ec=49/24 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=10.459633827s) [0] r=-1 lpr=77 pi=[59,77)/1 crt=37'39 lcod 0'0 unknown NOTIFY pruub 119.070091248s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:47 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 77 pg[6.a( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=59/59 les/c/f=60/60/0 sis=77) [0] r=0 lpr=77 pi=[59,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:47 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 30 23:20:47 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 30 23:20:48 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 30 23:20:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 305 B/s, 8 objects/s recovering
Jan 30 23:20:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 30 23:20:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 30 23:20:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 30 23:20:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 30 23:20:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 30 23:20:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 30 23:20:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 30 23:20:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 30 23:20:48 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 30 23:20:48 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 30 23:20:48 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 30 23:20:48 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 30 23:20:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 30 23:20:48 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 30 23:20:48 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 78 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=61/62 n=1 ec=49/24 lis/c=61/61 les/c/f=62/62/0 sis=78 pruub=11.671164513s) [1] r=-1 lpr=78 pi=[61,78)/1 crt=37'39 active pruub 126.410919189s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:48 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 78 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=61/62 n=1 ec=49/24 lis/c=61/61 les/c/f=62/62/0 sis=78 pruub=11.671091080s) [1] r=-1 lpr=78 pi=[61,78)/1 crt=37'39 unknown NOTIFY pruub 126.410919189s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:48 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:48 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 30 23:20:48 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 78 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=77/78 n=1 ec=49/24 lis/c=59/59 les/c/f=60/60/0 sis=77) [0] r=0 lpr=77 pi=[59,77)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:20:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 30 23:20:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 30 23:20:49 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 30 23:20:49 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 30 23:20:49 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 30 23:20:49 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 79 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=78/79 n=1 ec=49/24 lis/c=61/61 les/c/f=62/62/0 sis=78) [1] r=0 lpr=78 pi=[61,78)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 125 B/s, 3 objects/s recovering
Jan 30 23:20:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 30 23:20:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 30 23:20:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 30 23:20:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 30 23:20:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 30 23:20:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 30 23:20:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 30 23:20:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 30 23:20:50 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 30 23:20:50 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 30 23:20:50 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 30 23:20:50 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 30 23:20:50 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 30 23:20:50 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 80 pg[9.c( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=80 pruub=15.111346245s) [2] r=-1 lpr=80 pi=[53,80)/1 crt=43'1440 lcod 0'0 active pruub 126.622528076s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:50 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 80 pg[9.c( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=80 pruub=15.111285210s) [2] r=-1 lpr=80 pi=[53,80)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 126.622528076s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:50 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=80) [2] r=0 lpr=80 pi=[53,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:50 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 80 pg[9.1c( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=80 pruub=15.112814903s) [2] r=-1 lpr=80 pi=[53,80)/1 crt=43'1440 lcod 0'0 active pruub 126.625190735s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:50 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 80 pg[9.1c( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=80 pruub=15.112650871s) [2] r=-1 lpr=80 pi=[53,80)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 126.625190735s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:50 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=80) [2] r=0 lpr=80 pi=[53,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 30 23:20:51 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 30 23:20:51 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 30 23:20:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 30 23:20:51 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 30 23:20:51 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[53,81)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:51 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[53,81)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:51 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[53,81)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:51 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[53,81)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:51 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 81 pg[9.1c( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=81) [2]/[1] r=0 lpr=81 pi=[53,81)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:51 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 81 pg[9.c( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=81) [2]/[1] r=0 lpr=81 pi=[53,81)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:51 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 81 pg[9.c( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=81) [2]/[1] r=0 lpr=81 pi=[53,81)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:51 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 81 pg[9.1c( v 43'1440 (0'0,43'1440] local-lis/les=53/54 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=81) [2]/[1] r=0 lpr=81 pi=[53,81)/1 crt=43'1440 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:51 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 30 23:20:51 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 30 23:20:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 2 remapped+peering, 303 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 131 B/s, 4 objects/s recovering
Jan 30 23:20:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 30 23:20:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 30 23:20:52 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 30 23:20:52 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 30 23:20:52 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 30 23:20:52 np0005603435 python3[98773]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:20:52 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 30 23:20:52 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 30 23:20:52 np0005603435 podman[98774]: 2026-01-31 04:20:52.740723811 +0000 UTC m=+0.063682828 container create 32be38adfb8045c10c2c51002d5c0d91192d235e9ed4696d32f6d98db8be8666 (image=quay.io/ceph/ceph:v20, name=funny_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:20:52 np0005603435 systemd[1]: Started libpod-conmon-32be38adfb8045c10c2c51002d5c0d91192d235e9ed4696d32f6d98db8be8666.scope.
Jan 30 23:20:52 np0005603435 podman[98774]: 2026-01-31 04:20:52.714972889 +0000 UTC m=+0.037931946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:20:52 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0487f5f13d4eff14bd1ff0c8bb843278fdb354d05fce33850e3763efa3df5004/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0487f5f13d4eff14bd1ff0c8bb843278fdb354d05fce33850e3763efa3df5004/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:52 np0005603435 podman[98774]: 2026-01-31 04:20:52.844441076 +0000 UTC m=+0.167400113 container init 32be38adfb8045c10c2c51002d5c0d91192d235e9ed4696d32f6d98db8be8666 (image=quay.io/ceph/ceph:v20, name=funny_johnson, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 30 23:20:52 np0005603435 podman[98774]: 2026-01-31 04:20:52.852392964 +0000 UTC m=+0.175352011 container start 32be38adfb8045c10c2c51002d5c0d91192d235e9ed4696d32f6d98db8be8666 (image=quay.io/ceph/ceph:v20, name=funny_johnson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 30 23:20:52 np0005603435 podman[98774]: 2026-01-31 04:20:52.857334777 +0000 UTC m=+0.180293794 container attach 32be38adfb8045c10c2c51002d5c0d91192d235e9ed4696d32f6d98db8be8666 (image=quay.io/ceph/ceph:v20, name=funny_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 30 23:20:52 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 82 pg[9.c( v 43'1440 (0'0,43'1440] local-lis/les=81/82 n=8 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[53,81)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:52 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 82 pg[9.1c( v 43'1440 (0'0,43'1440] local-lis/les=81/82 n=7 ec=53/37 lis/c=53/53 les/c/f=54/54/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[53,81)/1 crt=43'1440 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:53 np0005603435 funny_johnson[98789]: could not fetch user info: no user info saved
Jan 30 23:20:53 np0005603435 systemd[1]: libpod-32be38adfb8045c10c2c51002d5c0d91192d235e9ed4696d32f6d98db8be8666.scope: Deactivated successfully.
Jan 30 23:20:53 np0005603435 podman[98774]: 2026-01-31 04:20:53.170385149 +0000 UTC m=+0.493344246 container died 32be38adfb8045c10c2c51002d5c0d91192d235e9ed4696d32f6d98db8be8666 (image=quay.io/ceph/ceph:v20, name=funny_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:20:53 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0487f5f13d4eff14bd1ff0c8bb843278fdb354d05fce33850e3763efa3df5004-merged.mount: Deactivated successfully.
Jan 30 23:20:53 np0005603435 podman[98774]: 2026-01-31 04:20:53.223498443 +0000 UTC m=+0.546457490 container remove 32be38adfb8045c10c2c51002d5c0d91192d235e9ed4696d32f6d98db8be8666 (image=quay.io/ceph/ceph:v20, name=funny_johnson, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:20:53 np0005603435 systemd[1]: libpod-conmon-32be38adfb8045c10c2c51002d5c0d91192d235e9ed4696d32f6d98db8be8666.scope: Deactivated successfully.
Jan 30 23:20:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 30 23:20:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 30 23:20:53 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 30 23:20:53 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 83 pg[9.c( v 43'1440 (0'0,43'1440] local-lis/les=81/82 n=8 ec=53/37 lis/c=81/53 les/c/f=82/54/0 sis=83 pruub=15.531288147s) [2] async=[2] r=-1 lpr=83 pi=[53,83)/1 crt=43'1440 lcod 0'0 active pruub 129.912933350s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:53 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 83 pg[9.c( v 43'1440 (0'0,43'1440] local-lis/les=81/82 n=8 ec=53/37 lis/c=81/53 les/c/f=82/54/0 sis=83 pruub=15.531159401s) [2] r=-1 lpr=83 pi=[53,83)/1 crt=43'1440 lcod 0'0 unknown NOTIFY pruub 129.912933350s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:53 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 83 pg[9.1c( v 82'1442 (0'0,82'1442] local-lis/les=81/82 n=7 ec=53/37 lis/c=81/53 les/c/f=82/54/0 sis=83 pruub=15.531686783s) [2] async=[2] r=-1 lpr=83 pi=[53,83)/1 crt=82'1441 lcod 82'1441 active pruub 129.915023804s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:53 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 83 pg[9.1c( v 82'1442 (0'0,82'1442] local-lis/les=81/82 n=7 ec=53/37 lis/c=81/53 les/c/f=82/54/0 sis=83 pruub=15.531579971s) [2] r=-1 lpr=83 pi=[53,83)/1 crt=82'1441 lcod 82'1441 unknown NOTIFY pruub 129.915023804s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:53 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 83 pg[9.1c( v 82'1442 (0'0,82'1442] local-lis/les=0/0 n=7 ec=53/37 lis/c=81/53 les/c/f=82/54/0 sis=83) [2] r=0 lpr=83 pi=[53,83)/1 pct=0'0 crt=82'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:53 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 83 pg[9.c( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=81/53 les/c/f=82/54/0 sis=83) [2] r=0 lpr=83 pi=[53,83)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:53 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 83 pg[9.1c( v 82'1442 (0'0,82'1442] local-lis/les=0/0 n=7 ec=53/37 lis/c=81/53 les/c/f=82/54/0 sis=83) [2] r=0 lpr=83 pi=[53,83)/1 crt=82'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:53 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 83 pg[9.c( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=8 ec=53/37 lis/c=81/53 les/c/f=82/54/0 sis=83) [2] r=0 lpr=83 pi=[53,83)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:53 np0005603435 python3[98912]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 95d2f419-0dd0-56f2-a094-353f8c7597ed -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:20:53 np0005603435 podman[98913]: 2026-01-31 04:20:53.658491954 +0000 UTC m=+0.061166616 container create 1fbb1a05d3a0fb22b25ec8bc406ddde4b023a808952b11cd72642d5fbe8b7593 (image=quay.io/ceph/ceph:v20, name=jovial_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:20:53 np0005603435 systemd[1]: Started libpod-conmon-1fbb1a05d3a0fb22b25ec8bc406ddde4b023a808952b11cd72642d5fbe8b7593.scope.
Jan 30 23:20:53 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:20:53 np0005603435 podman[98913]: 2026-01-31 04:20:53.631631654 +0000 UTC m=+0.034306376 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 30 23:20:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d4546b42a2ce971e2ecd95d658a9af25f45a8babe23505de304164f067dad3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d4546b42a2ce971e2ecd95d658a9af25f45a8babe23505de304164f067dad3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:20:53 np0005603435 podman[98913]: 2026-01-31 04:20:53.744487756 +0000 UTC m=+0.147162428 container init 1fbb1a05d3a0fb22b25ec8bc406ddde4b023a808952b11cd72642d5fbe8b7593 (image=quay.io/ceph/ceph:v20, name=jovial_roentgen, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 30 23:20:53 np0005603435 podman[98913]: 2026-01-31 04:20:53.75269875 +0000 UTC m=+0.155373422 container start 1fbb1a05d3a0fb22b25ec8bc406ddde4b023a808952b11cd72642d5fbe8b7593 (image=quay.io/ceph/ceph:v20, name=jovial_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 30 23:20:53 np0005603435 podman[98913]: 2026-01-31 04:20:53.757289735 +0000 UTC m=+0.159964497 container attach 1fbb1a05d3a0fb22b25ec8bc406ddde4b023a808952b11cd72642d5fbe8b7593 (image=quay.io/ceph/ceph:v20, name=jovial_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:20:54 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 30 23:20:54 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 30 23:20:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:20:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 2 remapped+peering, 303 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 30 23:20:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 30 23:20:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 30 23:20:54 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 30 23:20:54 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 84 pg[9.c( v 43'1440 (0'0,43'1440] local-lis/les=83/84 n=8 ec=53/37 lis/c=81/53 les/c/f=82/54/0 sis=83) [2] r=0 lpr=83 pi=[53,83)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:54 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 84 pg[9.1c( v 82'1442 (0'0,82'1442] local-lis/les=83/84 n=7 ec=53/37 lis/c=81/53 les/c/f=82/54/0 sis=83) [2] r=0 lpr=83 pi=[53,83)/1 crt=82'1442 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:54 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 30 23:20:54 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]: {
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "user_id": "openstack",
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "display_name": "openstack",
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "email": "",
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "suspended": 0,
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "max_buckets": 1000,
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "subusers": [],
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "keys": [
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:        {
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:            "user": "openstack",
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:            "access_key": "DE20FHHL983PO9BV09O5",
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:            "secret_key": "0IERkFxBnATrLrBbgW0rR3dbtRXeJCVTzsKkWUxX",
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:            "active": true,
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:            "create_date": "2026-01-31T04:20:54.498534Z"
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:        }
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    ],
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "swift_keys": [],
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "caps": [],
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "op_mask": "read, write, delete",
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "default_placement": "",
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "default_storage_class": "",
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "placement_tags": [],
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "bucket_quota": {
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:        "enabled": false,
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:        "check_on_raw": false,
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:        "max_size": -1,
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:        "max_size_kb": 0,
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:        "max_objects": -1
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    },
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "user_quota": {
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:        "enabled": false,
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:        "check_on_raw": false,
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:        "max_size": -1,
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:        "max_size_kb": 0,
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:        "max_objects": -1
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    },
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "temp_url_keys": [],
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "type": "rgw",
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "mfa_ids": [],
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "account_id": "",
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "path": "/",
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "create_date": "2026-01-31T04:20:54.497929Z",
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "tags": [],
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]:    "group_ids": []
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]: }
Jan 30 23:20:54 np0005603435 jovial_roentgen[98928]: 
Jan 30 23:20:54 np0005603435 systemd[1]: libpod-1fbb1a05d3a0fb22b25ec8bc406ddde4b023a808952b11cd72642d5fbe8b7593.scope: Deactivated successfully.
Jan 30 23:20:54 np0005603435 podman[98913]: 2026-01-31 04:20:54.540369691 +0000 UTC m=+0.943044363 container died 1fbb1a05d3a0fb22b25ec8bc406ddde4b023a808952b11cd72642d5fbe8b7593 (image=quay.io/ceph/ceph:v20, name=jovial_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 30 23:20:54 np0005603435 systemd[1]: var-lib-containers-storage-overlay-52d4546b42a2ce971e2ecd95d658a9af25f45a8babe23505de304164f067dad3-merged.mount: Deactivated successfully.
Jan 30 23:20:54 np0005603435 podman[98913]: 2026-01-31 04:20:54.587866885 +0000 UTC m=+0.990541547 container remove 1fbb1a05d3a0fb22b25ec8bc406ddde4b023a808952b11cd72642d5fbe8b7593 (image=quay.io/ceph/ceph:v20, name=jovial_roentgen, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:20:54 np0005603435 systemd[1]: libpod-conmon-1fbb1a05d3a0fb22b25ec8bc406ddde4b023a808952b11cd72642d5fbe8b7593.scope: Deactivated successfully.
Jan 30 23:20:54 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 30 23:20:54 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 30 23:20:55 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 30 23:20:55 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 30 23:20:55 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 30 23:20:55 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 30 23:20:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 415 B/s wr, 20 op/s; 84 B/s, 3 objects/s recovering
Jan 30 23:20:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 30 23:20:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 30 23:20:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 30 23:20:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 30 23:20:56 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 30 23:20:56 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 30 23:20:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 30 23:20:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 30 23:20:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 30 23:20:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 30 23:20:56 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 30 23:20:56 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 85 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=65/66 n=1 ec=49/24 lis/c=65/65 les/c/f=66/66/0 sis=85 pruub=11.636038780s) [1] r=-1 lpr=85 pi=[65,85)/1 crt=37'39 active pruub 134.374572754s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:20:56 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 85 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=65/66 n=1 ec=49/24 lis/c=65/65 les/c/f=66/66/0 sis=85 pruub=11.635948181s) [1] r=-1 lpr=85 pi=[65,85)/1 crt=37'39 unknown NOTIFY pruub 134.374572754s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:20:56 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=65/65 les/c/f=66/66/0 sis=85) [1] r=0 lpr=85 pi=[65,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:20:56 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 30 23:20:56 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 30 23:20:56 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 30 23:20:56 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 30 23:20:57 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 30 23:20:57 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 30 23:20:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 30 23:20:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 30 23:20:57 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 30 23:20:57 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 30 23:20:57 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 30 23:20:57 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 86 pg[6.d( v 37'39 lc 36'13 (0'0,37'39] local-lis/les=85/86 n=1 ec=49/24 lis/c=65/65 les/c/f=66/66/0 sis=85) [1] r=0 lpr=85 pi=[65,85)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:20:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 418 B/s wr, 57 op/s; 85 B/s, 2 objects/s recovering
Jan 30 23:20:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:20:59 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 30 23:20:59 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 30 23:20:59 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 30 23:20:59 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 30 23:21:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 341 B/s wr, 47 op/s; 69 B/s, 2 objects/s recovering
Jan 30 23:21:00 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 30 23:21:00 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 30 23:21:01 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 30 23:21:01 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 30 23:21:01 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 30 23:21:01 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 30 23:21:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v165: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 259 B/s wr, 35 op/s; 61 B/s, 2 objects/s recovering
Jan 30 23:21:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 30 23:21:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 30 23:21:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 30 23:21:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 30 23:21:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 30 23:21:03 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 30 23:21:03 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 30 23:21:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 30 23:21:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 30 23:21:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 30 23:21:03 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 30 23:21:03 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 30 23:21:03 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 30 23:21:03 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 30 23:21:03 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 30 23:21:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:21:04 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 30 23:21:04 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 30 23:21:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 11 op/s; 8 B/s, 0 objects/s recovering
Jan 30 23:21:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 30 23:21:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 30 23:21:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 30 23:21:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 30 23:21:04 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 30 23:21:04 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 30 23:21:04 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 30 23:21:04 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 30 23:21:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 30 23:21:05 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 30 23:21:05 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 30 23:21:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 30 23:21:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 30 23:21:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 30 23:21:05 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 30 23:21:05 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 88 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=61/62 n=1 ec=49/24 lis/c=61/61 les/c/f=62/62/0 sis=88 pruub=10.951122284s) [2] r=-1 lpr=88 pi=[61,88)/1 crt=37'39 active pruub 142.411666870s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:05 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 88 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=61/62 n=1 ec=49/24 lis/c=61/61 les/c/f=62/62/0 sis=88 pruub=10.951027870s) [2] r=-1 lpr=88 pi=[61,88)/1 crt=37'39 unknown NOTIFY pruub 142.411666870s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:05 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=61/61 les/c/f=62/62/0 sis=88) [2] r=0 lpr=88 pi=[61,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:05 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 30 23:21:05 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 30 23:21:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 30 23:21:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 30 23:21:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 30 23:21:06 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 89 pg[6.f( v 37'39 lc 36'1 (0'0,37'39] local-lis/les=88/89 n=1 ec=49/24 lis/c=61/61 les/c/f=62/62/0 sis=88) [2] r=0 lpr=88 pi=[61,88)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:21:06 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 30 23:21:06 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:21:06
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'backups', '.rgw.root', 'images', 'vms', 'volumes', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Jan 30 23:21:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 30 23:21:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:21:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:21:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 30 23:21:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 30 23:21:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 30 23:21:07 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 30 23:21:07 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 30 23:21:07 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 30 23:21:07 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 30 23:21:07 np0005603435 systemd-logind[816]: New session 34 of user zuul.
Jan 30 23:21:07 np0005603435 systemd[1]: Started Session 34 of User zuul.
Jan 30 23:21:07 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 30 23:21:07 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 30 23:21:08 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 30 23:21:08 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 30 23:21:08 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 30 23:21:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:21:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 30 23:21:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 30 23:21:08 np0005603435 python3.9[99179]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:21:08 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 30 23:21:08 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 30 23:21:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:21:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 30 23:21:09 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 30 23:21:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 30 23:21:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 30 23:21:09 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 30 23:21:10 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 30 23:21:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 119 B/s, 0 objects/s recovering
Jan 30 23:21:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 30 23:21:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 30 23:21:10 np0005603435 python3.9[99397]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:21:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:21:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:21:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:21:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:21:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:21:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:21:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:21:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:21:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:21:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:21:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:21:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:21:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 30 23:21:11 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 30 23:21:11 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:21:11 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:21:11 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:21:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 30 23:21:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 30 23:21:11 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 30 23:21:11 np0005603435 podman[99552]: 2026-01-31 04:21:11.285576891 +0000 UTC m=+0.076801835 container create ce2cab5c4fda8d882b0af654739c08d88c20b545c302eb87506f6c99923ca47d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_goldberg, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:21:11 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 30 23:21:11 np0005603435 systemd[1]: Started libpod-conmon-ce2cab5c4fda8d882b0af654739c08d88c20b545c302eb87506f6c99923ca47d.scope.
Jan 30 23:21:11 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 30 23:21:11 np0005603435 podman[99552]: 2026-01-31 04:21:11.259700126 +0000 UTC m=+0.050925110 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:21:11 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:21:11 np0005603435 podman[99552]: 2026-01-31 04:21:11.377085391 +0000 UTC m=+0.168310365 container init ce2cab5c4fda8d882b0af654739c08d88c20b545c302eb87506f6c99923ca47d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_goldberg, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 30 23:21:11 np0005603435 podman[99552]: 2026-01-31 04:21:11.386431634 +0000 UTC m=+0.177656598 container start ce2cab5c4fda8d882b0af654739c08d88c20b545c302eb87506f6c99923ca47d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_goldberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 30 23:21:11 np0005603435 podman[99552]: 2026-01-31 04:21:11.390874685 +0000 UTC m=+0.182099619 container attach ce2cab5c4fda8d882b0af654739c08d88c20b545c302eb87506f6c99923ca47d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 30 23:21:11 np0005603435 flamboyant_goldberg[99568]: 167 167
Jan 30 23:21:11 np0005603435 systemd[1]: libpod-ce2cab5c4fda8d882b0af654739c08d88c20b545c302eb87506f6c99923ca47d.scope: Deactivated successfully.
Jan 30 23:21:11 np0005603435 podman[99552]: 2026-01-31 04:21:11.394866074 +0000 UTC m=+0.186091038 container died ce2cab5c4fda8d882b0af654739c08d88c20b545c302eb87506f6c99923ca47d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_goldberg, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:21:11 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d2bf86f34f212a44a9e5619cb00036b43b6f8a8970e7e440cf19cf7ec016c94a-merged.mount: Deactivated successfully.
Jan 30 23:21:11 np0005603435 podman[99552]: 2026-01-31 04:21:11.444774518 +0000 UTC m=+0.235999452 container remove ce2cab5c4fda8d882b0af654739c08d88c20b545c302eb87506f6c99923ca47d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_goldberg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 30 23:21:11 np0005603435 systemd[1]: libpod-conmon-ce2cab5c4fda8d882b0af654739c08d88c20b545c302eb87506f6c99923ca47d.scope: Deactivated successfully.
Jan 30 23:21:11 np0005603435 podman[99591]: 2026-01-31 04:21:11.601561696 +0000 UTC m=+0.046904600 container create adce83360ceb9d350224d2079bfa52d159af89977f7a3ec3e1912e2ee6b0ecde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:21:11 np0005603435 systemd[1]: Started libpod-conmon-adce83360ceb9d350224d2079bfa52d159af89977f7a3ec3e1912e2ee6b0ecde.scope.
Jan 30 23:21:11 np0005603435 podman[99591]: 2026-01-31 04:21:11.579796393 +0000 UTC m=+0.025139307 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:21:11 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:21:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773a7698d3d833dac136a1ba106a7165a55511b4b5c34f0360920c0ca832ccae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:21:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773a7698d3d833dac136a1ba106a7165a55511b4b5c34f0360920c0ca832ccae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:21:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773a7698d3d833dac136a1ba106a7165a55511b4b5c34f0360920c0ca832ccae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:21:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773a7698d3d833dac136a1ba106a7165a55511b4b5c34f0360920c0ca832ccae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:21:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773a7698d3d833dac136a1ba106a7165a55511b4b5c34f0360920c0ca832ccae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:21:11 np0005603435 podman[99591]: 2026-01-31 04:21:11.719123534 +0000 UTC m=+0.164466468 container init adce83360ceb9d350224d2079bfa52d159af89977f7a3ec3e1912e2ee6b0ecde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_jang, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 30 23:21:11 np0005603435 podman[99591]: 2026-01-31 04:21:11.728507178 +0000 UTC m=+0.173850082 container start adce83360ceb9d350224d2079bfa52d159af89977f7a3ec3e1912e2ee6b0ecde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 30 23:21:11 np0005603435 podman[99591]: 2026-01-31 04:21:11.732408745 +0000 UTC m=+0.177751690 container attach adce83360ceb9d350224d2079bfa52d159af89977f7a3ec3e1912e2ee6b0ecde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_jang, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:21:12 np0005603435 affectionate_jang[99607]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:21:12 np0005603435 affectionate_jang[99607]: --> All data devices are unavailable
Jan 30 23:21:12 np0005603435 systemd[1]: libpod-adce83360ceb9d350224d2079bfa52d159af89977f7a3ec3e1912e2ee6b0ecde.scope: Deactivated successfully.
Jan 30 23:21:12 np0005603435 podman[99591]: 2026-01-31 04:21:12.163679864 +0000 UTC m=+0.609022728 container died adce83360ceb9d350224d2079bfa52d159af89977f7a3ec3e1912e2ee6b0ecde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_jang, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:21:12 np0005603435 systemd[1]: var-lib-containers-storage-overlay-773a7698d3d833dac136a1ba106a7165a55511b4b5c34f0360920c0ca832ccae-merged.mount: Deactivated successfully.
Jan 30 23:21:12 np0005603435 podman[99591]: 2026-01-31 04:21:12.246344404 +0000 UTC m=+0.691687268 container remove adce83360ceb9d350224d2079bfa52d159af89977f7a3ec3e1912e2ee6b0ecde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_jang, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:21:12 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 30 23:21:12 np0005603435 systemd[1]: libpod-conmon-adce83360ceb9d350224d2079bfa52d159af89977f7a3ec3e1912e2ee6b0ecde.scope: Deactivated successfully.
Jan 30 23:21:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Jan 30 23:21:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 30 23:21:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 30 23:21:12 np0005603435 podman[99704]: 2026-01-31 04:21:12.747102544 +0000 UTC m=+0.070806506 container create e653549a1714a6b0e71686c9a74eee5e8fc1de0de58811d4a2800b84b81fbea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wu, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 30 23:21:12 np0005603435 systemd[1]: Started libpod-conmon-e653549a1714a6b0e71686c9a74eee5e8fc1de0de58811d4a2800b84b81fbea4.scope.
Jan 30 23:21:12 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:21:12 np0005603435 podman[99704]: 2026-01-31 04:21:12.72208601 +0000 UTC m=+0.045789982 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:21:12 np0005603435 podman[99704]: 2026-01-31 04:21:12.824610115 +0000 UTC m=+0.148314107 container init e653549a1714a6b0e71686c9a74eee5e8fc1de0de58811d4a2800b84b81fbea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wu, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 30 23:21:12 np0005603435 podman[99704]: 2026-01-31 04:21:12.833789584 +0000 UTC m=+0.157493566 container start e653549a1714a6b0e71686c9a74eee5e8fc1de0de58811d4a2800b84b81fbea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 30 23:21:12 np0005603435 crazy_wu[99720]: 167 167
Jan 30 23:21:12 np0005603435 podman[99704]: 2026-01-31 04:21:12.839036365 +0000 UTC m=+0.162740357 container attach e653549a1714a6b0e71686c9a74eee5e8fc1de0de58811d4a2800b84b81fbea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 30 23:21:12 np0005603435 systemd[1]: libpod-e653549a1714a6b0e71686c9a74eee5e8fc1de0de58811d4a2800b84b81fbea4.scope: Deactivated successfully.
Jan 30 23:21:12 np0005603435 podman[99704]: 2026-01-31 04:21:12.84042936 +0000 UTC m=+0.164133352 container died e653549a1714a6b0e71686c9a74eee5e8fc1de0de58811d4a2800b84b81fbea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wu, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 30 23:21:12 np0005603435 systemd[1]: var-lib-containers-storage-overlay-09895756ebfd5d7386cec78cafef4bf1aa6209450fb90c38d697495f79938ae1-merged.mount: Deactivated successfully.
Jan 30 23:21:12 np0005603435 podman[99704]: 2026-01-31 04:21:12.889677947 +0000 UTC m=+0.213381939 container remove e653549a1714a6b0e71686c9a74eee5e8fc1de0de58811d4a2800b84b81fbea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Jan 30 23:21:12 np0005603435 systemd[1]: libpod-conmon-e653549a1714a6b0e71686c9a74eee5e8fc1de0de58811d4a2800b84b81fbea4.scope: Deactivated successfully.
Jan 30 23:21:13 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 30 23:21:13 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 30 23:21:13 np0005603435 podman[99744]: 2026-01-31 04:21:13.087526858 +0000 UTC m=+0.063228427 container create e11aa1e8838b4514a0b5076db4b2adf17b90e2fd2cabe72921dbb2425d39719b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_shannon, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:21:13 np0005603435 systemd[1]: Started libpod-conmon-e11aa1e8838b4514a0b5076db4b2adf17b90e2fd2cabe72921dbb2425d39719b.scope.
Jan 30 23:21:13 np0005603435 podman[99744]: 2026-01-31 04:21:13.059500379 +0000 UTC m=+0.035201998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:21:13 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:21:13 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba716cc778698c8d19b0acc59b0e0ae331b84d4e264fa441bd5e34666cf0270/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:21:13 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba716cc778698c8d19b0acc59b0e0ae331b84d4e264fa441bd5e34666cf0270/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:21:13 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba716cc778698c8d19b0acc59b0e0ae331b84d4e264fa441bd5e34666cf0270/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:21:13 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba716cc778698c8d19b0acc59b0e0ae331b84d4e264fa441bd5e34666cf0270/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:21:13 np0005603435 podman[99744]: 2026-01-31 04:21:13.182025553 +0000 UTC m=+0.157727102 container init e11aa1e8838b4514a0b5076db4b2adf17b90e2fd2cabe72921dbb2425d39719b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:21:13 np0005603435 podman[99744]: 2026-01-31 04:21:13.190033933 +0000 UTC m=+0.165735472 container start e11aa1e8838b4514a0b5076db4b2adf17b90e2fd2cabe72921dbb2425d39719b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_shannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:21:13 np0005603435 podman[99744]: 2026-01-31 04:21:13.193580461 +0000 UTC m=+0.169281990 container attach e11aa1e8838b4514a0b5076db4b2adf17b90e2fd2cabe72921dbb2425d39719b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:21:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 30 23:21:13 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 30 23:21:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 30 23:21:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 30 23:21:13 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]: {
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:    "0": [
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:        {
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "devices": [
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "/dev/loop3"
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            ],
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "lv_name": "ceph_lv0",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "lv_size": "21470642176",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "name": "ceph_lv0",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "tags": {
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.cluster_name": "ceph",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.crush_device_class": "",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.encrypted": "0",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.objectstore": "bluestore",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.osd_id": "0",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.type": "block",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.vdo": "0",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.with_tpm": "0"
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            },
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "type": "block",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "vg_name": "ceph_vg0"
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:        }
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:    ],
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:    "1": [
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:        {
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "devices": [
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "/dev/loop4"
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            ],
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "lv_name": "ceph_lv1",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "lv_size": "21470642176",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "name": "ceph_lv1",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "tags": {
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.cluster_name": "ceph",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.crush_device_class": "",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.encrypted": "0",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.objectstore": "bluestore",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.osd_id": "1",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.type": "block",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.vdo": "0",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.with_tpm": "0"
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            },
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "type": "block",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "vg_name": "ceph_vg1"
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:        }
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:    ],
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:    "2": [
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:        {
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "devices": [
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "/dev/loop5"
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            ],
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "lv_name": "ceph_lv2",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "lv_size": "21470642176",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "name": "ceph_lv2",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "tags": {
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.cluster_name": "ceph",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.crush_device_class": "",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.encrypted": "0",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.objectstore": "bluestore",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.osd_id": "2",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.type": "block",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.vdo": "0",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:                "ceph.with_tpm": "0"
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            },
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "type": "block",
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:            "vg_name": "ceph_vg2"
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:        }
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]:    ]
Jan 30 23:21:13 np0005603435 flamboyant_shannon[99761]: }
Jan 30 23:21:13 np0005603435 systemd[1]: libpod-e11aa1e8838b4514a0b5076db4b2adf17b90e2fd2cabe72921dbb2425d39719b.scope: Deactivated successfully.
Jan 30 23:21:13 np0005603435 podman[99744]: 2026-01-31 04:21:13.534477547 +0000 UTC m=+0.510179156 container died e11aa1e8838b4514a0b5076db4b2adf17b90e2fd2cabe72921dbb2425d39719b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 30 23:21:13 np0005603435 systemd[1]: var-lib-containers-storage-overlay-aba716cc778698c8d19b0acc59b0e0ae331b84d4e264fa441bd5e34666cf0270-merged.mount: Deactivated successfully.
Jan 30 23:21:13 np0005603435 podman[99744]: 2026-01-31 04:21:13.5855521 +0000 UTC m=+0.561253659 container remove e11aa1e8838b4514a0b5076db4b2adf17b90e2fd2cabe72921dbb2425d39719b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 30 23:21:13 np0005603435 systemd[1]: libpod-conmon-e11aa1e8838b4514a0b5076db4b2adf17b90e2fd2cabe72921dbb2425d39719b.scope: Deactivated successfully.
Jan 30 23:21:14 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 30 23:21:14 np0005603435 podman[99851]: 2026-01-31 04:21:14.025243178 +0000 UTC m=+0.050866839 container create ee59bbbdd585ac613c50902fc2b3eabd1dafa5035eea679a656a43fc4f5171dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:21:14 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 30 23:21:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:21:14 np0005603435 systemd[1]: Started libpod-conmon-ee59bbbdd585ac613c50902fc2b3eabd1dafa5035eea679a656a43fc4f5171dc.scope.
Jan 30 23:21:14 np0005603435 podman[99851]: 2026-01-31 04:21:13.995894476 +0000 UTC m=+0.021518147 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:21:14 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:21:14 np0005603435 podman[99851]: 2026-01-31 04:21:14.128722527 +0000 UTC m=+0.154346178 container init ee59bbbdd585ac613c50902fc2b3eabd1dafa5035eea679a656a43fc4f5171dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:21:14 np0005603435 podman[99851]: 2026-01-31 04:21:14.13850227 +0000 UTC m=+0.164125951 container start ee59bbbdd585ac613c50902fc2b3eabd1dafa5035eea679a656a43fc4f5171dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 30 23:21:14 np0005603435 elated_ramanujan[99869]: 167 167
Jan 30 23:21:14 np0005603435 systemd[1]: libpod-ee59bbbdd585ac613c50902fc2b3eabd1dafa5035eea679a656a43fc4f5171dc.scope: Deactivated successfully.
Jan 30 23:21:14 np0005603435 podman[99851]: 2026-01-31 04:21:14.1448963 +0000 UTC m=+0.170519981 container attach ee59bbbdd585ac613c50902fc2b3eabd1dafa5035eea679a656a43fc4f5171dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:21:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 93 pg[9.13( v 83'1442 (0'0,83'1442] local-lis/les=61/62 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=93 pruub=9.968972206s) [2] r=-1 lpr=93 pi=[61,93)/1 crt=82'1441 lcod 82'1441 active pruub 150.411621094s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 93 pg[9.13( v 83'1442 (0'0,83'1442] local-lis/les=61/62 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=93 pruub=9.968657494s) [2] r=-1 lpr=93 pi=[61,93)/1 crt=82'1441 lcod 82'1441 unknown NOTIFY pruub 150.411621094s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:14 np0005603435 podman[99851]: 2026-01-31 04:21:14.147264139 +0000 UTC m=+0.172887820 container died ee59bbbdd585ac613c50902fc2b3eabd1dafa5035eea679a656a43fc4f5171dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:21:14 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=93) [2] r=0 lpr=93 pi=[61,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:14 np0005603435 systemd[1]: var-lib-containers-storage-overlay-37e05bbd9b3a5cb290b85f568e3393e6d9e2650406575f2518fe140b99f99073-merged.mount: Deactivated successfully.
Jan 30 23:21:14 np0005603435 podman[99851]: 2026-01-31 04:21:14.193131582 +0000 UTC m=+0.218755253 container remove ee59bbbdd585ac613c50902fc2b3eabd1dafa5035eea679a656a43fc4f5171dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Jan 30 23:21:14 np0005603435 systemd[1]: libpod-conmon-ee59bbbdd585ac613c50902fc2b3eabd1dafa5035eea679a656a43fc4f5171dc.scope: Deactivated successfully.
Jan 30 23:21:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 30 23:21:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 30 23:21:14 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 30 23:21:14 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[61,94)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:14 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[61,94)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:14 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 30 23:21:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 94 pg[9.13( v 83'1442 (0'0,83'1442] local-lis/les=61/62 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=94) [2]/[0] r=0 lpr=94 pi=[61,94)/1 crt=82'1441 lcod 82'1441 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:14 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 94 pg[9.13( v 83'1442 (0'0,83'1442] local-lis/les=61/62 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=94) [2]/[0] r=0 lpr=94 pi=[61,94)/1 crt=82'1441 lcod 82'1441 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 120 B/s, 0 objects/s recovering
Jan 30 23:21:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 30 23:21:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 30 23:21:14 np0005603435 podman[99893]: 2026-01-31 04:21:14.382120302 +0000 UTC m=+0.058293264 container create 1c10fd1da3d4b3b7953d29cc3449c4171c96272ef279c91013cd8cc88a4c1623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 30 23:21:14 np0005603435 systemd[1]: Started libpod-conmon-1c10fd1da3d4b3b7953d29cc3449c4171c96272ef279c91013cd8cc88a4c1623.scope.
Jan 30 23:21:14 np0005603435 podman[99893]: 2026-01-31 04:21:14.353053857 +0000 UTC m=+0.029226819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:21:14 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:21:14 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2cac026bb0e8496a837cc421a93c76187643583b1bb1420aa262f3b1da30c3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:21:14 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2cac026bb0e8496a837cc421a93c76187643583b1bb1420aa262f3b1da30c3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:21:14 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2cac026bb0e8496a837cc421a93c76187643583b1bb1420aa262f3b1da30c3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:21:14 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2cac026bb0e8496a837cc421a93c76187643583b1bb1420aa262f3b1da30c3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:21:14 np0005603435 podman[99893]: 2026-01-31 04:21:14.49156943 +0000 UTC m=+0.167742392 container init 1c10fd1da3d4b3b7953d29cc3449c4171c96272ef279c91013cd8cc88a4c1623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:21:14 np0005603435 podman[99893]: 2026-01-31 04:21:14.504474311 +0000 UTC m=+0.180647243 container start 1c10fd1da3d4b3b7953d29cc3449c4171c96272ef279c91013cd8cc88a4c1623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_shtern, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 30 23:21:14 np0005603435 podman[99893]: 2026-01-31 04:21:14.50884664 +0000 UTC m=+0.185019612 container attach 1c10fd1da3d4b3b7953d29cc3449c4171c96272ef279c91013cd8cc88a4c1623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:21:14 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 30 23:21:14 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 30 23:21:14 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 30 23:21:14 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 30 23:21:15 np0005603435 lvm[99988]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:21:15 np0005603435 lvm[99989]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:21:15 np0005603435 lvm[99989]: VG ceph_vg1 finished
Jan 30 23:21:15 np0005603435 lvm[99988]: VG ceph_vg0 finished
Jan 30 23:21:15 np0005603435 lvm[99991]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:21:15 np0005603435 lvm[99991]: VG ceph_vg2 finished
Jan 30 23:21:15 np0005603435 elastic_shtern[99910]: {}
Jan 30 23:21:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 30 23:21:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 30 23:21:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 30 23:21:15 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 30 23:21:15 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 30 23:21:15 np0005603435 systemd[1]: libpod-1c10fd1da3d4b3b7953d29cc3449c4171c96272ef279c91013cd8cc88a4c1623.scope: Deactivated successfully.
Jan 30 23:21:15 np0005603435 systemd[1]: libpod-1c10fd1da3d4b3b7953d29cc3449c4171c96272ef279c91013cd8cc88a4c1623.scope: Consumed 1.182s CPU time.
Jan 30 23:21:15 np0005603435 podman[99893]: 2026-01-31 04:21:15.321530093 +0000 UTC m=+0.997703015 container died 1c10fd1da3d4b3b7953d29cc3449c4171c96272ef279c91013cd8cc88a4c1623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 30 23:21:15 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d2cac026bb0e8496a837cc421a93c76187643583b1bb1420aa262f3b1da30c3d-merged.mount: Deactivated successfully.
Jan 30 23:21:15 np0005603435 podman[99893]: 2026-01-31 04:21:15.361905599 +0000 UTC m=+1.038078521 container remove 1c10fd1da3d4b3b7953d29cc3449c4171c96272ef279c91013cd8cc88a4c1623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_shtern, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:21:15 np0005603435 systemd[1]: libpod-conmon-1c10fd1da3d4b3b7953d29cc3449c4171c96272ef279c91013cd8cc88a4c1623.scope: Deactivated successfully.
Jan 30 23:21:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:21:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:21:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:21:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:21:15 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 95 pg[9.13( v 83'1442 (0'0,83'1442] local-lis/les=94/95 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[61,94)/1 crt=83'1442 lcod 82'1441 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:21:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 30 23:21:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 30 23:21:16 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 30 23:21:16 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:21:16 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:21:16 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 30 23:21:16 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 30 23:21:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 30 23:21:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 30 23:21:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 30 23:21:16 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 96 pg[9.13( v 83'1442 (0'0,83'1442] local-lis/les=94/95 n=7 ec=53/37 lis/c=94/61 les/c/f=95/62/0 sis=96 pruub=15.542501450s) [2] async=[2] r=-1 lpr=96 pi=[61,96)/1 crt=83'1442 lcod 82'1441 active pruub 158.283615112s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:16 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 96 pg[9.13( v 83'1442 (0'0,83'1442] local-lis/les=94/95 n=7 ec=53/37 lis/c=94/61 les/c/f=95/62/0 sis=96 pruub=15.542350769s) [2] r=-1 lpr=96 pi=[61,96)/1 crt=83'1442 lcod 82'1441 unknown NOTIFY pruub 158.283615112s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:16 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 96 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=96 pruub=15.664396286s) [1] r=-1 lpr=96 pi=[61,96)/1 crt=43'1440 active pruub 158.406890869s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:16 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 96 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=96 pruub=15.664354324s) [1] r=-1 lpr=96 pi=[61,96)/1 crt=43'1440 unknown NOTIFY pruub 158.406890869s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:16 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 30 23:21:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 96 pg[9.13( v 83'1442 (0'0,83'1442] local-lis/les=0/0 n=7 ec=53/37 lis/c=94/61 les/c/f=95/62/0 sis=96) [2] r=0 lpr=96 pi=[61,96)/1 pct=0'0 crt=83'1442 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:16 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 96 pg[9.13( v 83'1442 (0'0,83'1442] local-lis/les=0/0 n=7 ec=53/37 lis/c=94/61 les/c/f=95/62/0 sis=96) [2] r=0 lpr=96 pi=[61,96)/1 crt=83'1442 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:16 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=96) [1] r=0 lpr=96 pi=[61,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5127788478371656e-06 of space, bias 4.0, pg target 0.0018153346174045988 quantized to 16 (current 16)
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:21:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:21:16 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Jan 30 23:21:16 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Jan 30 23:21:17 np0005603435 systemd[1]: session-34.scope: Deactivated successfully.
Jan 30 23:21:17 np0005603435 systemd[1]: session-34.scope: Consumed 8.000s CPU time.
Jan 30 23:21:17 np0005603435 systemd-logind[816]: Session 34 logged out. Waiting for processes to exit.
Jan 30 23:21:17 np0005603435 systemd-logind[816]: Removed session 34.
Jan 30 23:21:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 30 23:21:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 30 23:21:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 30 23:21:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 30 23:21:17 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 30 23:21:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 97 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=97) [1]/[0] r=-1 lpr=97 pi=[61,97)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:17 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 97 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=97) [1]/[0] r=-1 lpr=97 pi=[61,97)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:17 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 97 pg[9.13( v 83'1442 (0'0,83'1442] local-lis/les=96/97 n=7 ec=53/37 lis/c=94/61 les/c/f=95/62/0 sis=96) [2] r=0 lpr=96 pi=[61,96)/1 crt=83'1442 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:21:17 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 97 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=97) [1]/[0] r=0 lpr=97 pi=[61,97)/1 crt=43'1440 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:17 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 97 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=61/62 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=97) [1]/[0] r=0 lpr=97 pi=[61,97)/1 crt=43'1440 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:17 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 30 23:21:17 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 30 23:21:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 0 objects/s recovering
Jan 30 23:21:18 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 30 23:21:18 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 30 23:21:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 30 23:21:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 30 23:21:18 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 30 23:21:18 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 98 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=97/98 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=97) [1]/[0] async=[1] r=0 lpr=97 pi=[61,97)/1 crt=43'1440 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:21:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:21:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 30 23:21:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 30 23:21:19 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 99 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=97/61 les/c/f=98/62/0 sis=99) [1] r=0 lpr=99 pi=[61,99)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:19 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 99 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=97/61 les/c/f=98/62/0 sis=99) [1] r=0 lpr=99 pi=[61,99)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:19 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 30 23:21:19 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 99 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=97/98 n=7 ec=53/37 lis/c=97/61 les/c/f=98/62/0 sis=99 pruub=15.652945518s) [1] async=[1] r=-1 lpr=99 pi=[61,99)/1 crt=43'1440 active pruub 161.040908813s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:19 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 99 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=97/98 n=7 ec=53/37 lis/c=97/61 les/c/f=98/62/0 sis=99 pruub=15.652860641s) [1] r=-1 lpr=99 pi=[61,99)/1 crt=43'1440 unknown NOTIFY pruub 161.040908813s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 30 23:21:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 30 23:21:20 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 30 23:21:20 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 100 pg[9.15( v 43'1440 (0'0,43'1440] local-lis/les=99/100 n=7 ec=53/37 lis/c=97/61 les/c/f=98/62/0 sis=99) [1] r=0 lpr=99 pi=[61,99)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:21:20 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 30 23:21:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:21:20 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 30 23:21:21 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 30 23:21:21 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 30 23:21:22 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 30 23:21:22 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 30 23:21:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 424 B/s wr, 24 op/s; 45 B/s, 2 objects/s recovering
Jan 30 23:21:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 30 23:21:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 30 23:21:22 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 30 23:21:22 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 30 23:21:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 30 23:21:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 30 23:21:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 30 23:21:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 30 23:21:23 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 30 23:21:23 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 30 23:21:23 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 30 23:21:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 101 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=7 ec=53/37 lis/c=70/70 les/c/f=71/71/0 sis=101 pruub=13.316464424s) [0] r=-1 lpr=101 pi=[70,101)/1 crt=43'1440 active pruub 152.245864868s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:23 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 101 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=7 ec=53/37 lis/c=70/70 les/c/f=71/71/0 sis=101 pruub=13.316305161s) [0] r=-1 lpr=101 pi=[70,101)/1 crt=43'1440 unknown NOTIFY pruub 152.245864868s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:23 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=70/70 les/c/f=71/71/0 sis=101) [0] r=0 lpr=101 pi=[70,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:21:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 30 23:21:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 30 23:21:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 102 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=7 ec=53/37 lis/c=70/70 les/c/f=71/71/0 sis=102) [0]/[2] r=0 lpr=102 pi=[70,102)/1 crt=43'1440 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:24 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 102 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=70/71 n=7 ec=53/37 lis/c=70/70 les/c/f=71/71/0 sis=102) [0]/[2] r=0 lpr=102 pi=[70,102)/1 crt=43'1440 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:24 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 30 23:21:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 102 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=70/70 les/c/f=71/71/0 sis=102) [0]/[2] r=-1 lpr=102 pi=[70,102)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:24 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 102 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=70/70 les/c/f=71/71/0 sis=102) [0]/[2] r=-1 lpr=102 pi=[70,102)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:24 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 30 23:21:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 391 B/s wr, 22 op/s; 42 B/s, 1 objects/s recovering
Jan 30 23:21:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 30 23:21:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 30 23:21:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 30 23:21:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 30 23:21:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 30 23:21:25 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 30 23:21:25 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 103 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=102/103 n=7 ec=53/37 lis/c=70/70 les/c/f=71/71/0 sis=102) [0]/[2] async=[0] r=0 lpr=102 pi=[70,102)/1 crt=43'1440 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:21:25 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 30 23:21:25 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 30 23:21:25 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 30 23:21:25 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 30 23:21:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 30 23:21:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 30 23:21:26 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 30 23:21:26 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 104 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=102/103 n=7 ec=53/37 lis/c=102/70 les/c/f=103/71/0 sis=104 pruub=14.950926781s) [0] async=[0] r=-1 lpr=104 pi=[70,104)/1 crt=43'1440 active pruub 156.097320557s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:26 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 104 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=102/103 n=7 ec=53/37 lis/c=102/70 les/c/f=103/71/0 sis=104 pruub=14.950819016s) [0] r=-1 lpr=104 pi=[70,104)/1 crt=43'1440 unknown NOTIFY pruub 156.097320557s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:26 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 104 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=102/70 les/c/f=103/71/0 sis=104) [0] r=0 lpr=104 pi=[70,104)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:26 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 104 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=102/70 les/c/f=103/71/0 sis=104) [0] r=0 lpr=104 pi=[70,104)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:21:26 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 30 23:21:26 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 30 23:21:26 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 30 23:21:26 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 30 23:21:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 30 23:21:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 30 23:21:27 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 30 23:21:27 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 105 pg[9.16( v 43'1440 (0'0,43'1440] local-lis/les=104/105 n=7 ec=53/37 lis/c=102/70 les/c/f=103/71/0 sis=104) [0] r=0 lpr=104 pi=[70,104)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:21:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 1 objects/s recovering
Jan 30 23:21:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:21:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 30 23:21:30 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 30 23:21:30 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 30 23:21:31 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 30 23:21:31 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 30 23:21:32 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 30 23:21:32 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 30 23:21:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Jan 30 23:21:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 30 23:21:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 30 23:21:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 30 23:21:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 30 23:21:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 30 23:21:32 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 30 23:21:32 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 30 23:21:32 np0005603435 systemd-logind[816]: New session 35 of user zuul.
Jan 30 23:21:32 np0005603435 systemd[1]: Started Session 35 of User zuul.
Jan 30 23:21:32 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 30 23:21:32 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 30 23:21:32 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 30 23:21:32 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 30 23:21:33 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 30 23:21:33 np0005603435 python3.9[100217]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 30 23:21:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:21:34 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 30 23:21:34 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 30 23:21:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Jan 30 23:21:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 30 23:21:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 30 23:21:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 30 23:21:34 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 30 23:21:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 30 23:21:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 30 23:21:34 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 30 23:21:34 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 107 pg[9.19( v 84'1444 (0'0,84'1444] local-lis/les=61/62 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=107 pruub=13.677873611s) [2] r=-1 lpr=107 pi=[61,107)/1 crt=83'1443 lcod 83'1443 active pruub 174.411468506s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:34 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 107 pg[9.19( v 84'1444 (0'0,84'1444] local-lis/les=61/62 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=107 pruub=13.677791595s) [2] r=-1 lpr=107 pi=[61,107)/1 crt=83'1443 lcod 83'1443 unknown NOTIFY pruub 174.411468506s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:34 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=107) [2] r=0 lpr=107 pi=[61,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:34 np0005603435 python3.9[100391]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:21:34 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 30 23:21:34 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 30 23:21:35 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 30 23:21:35 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 30 23:21:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 30 23:21:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 30 23:21:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 30 23:21:35 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 30 23:21:35 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 108 pg[9.19( v 84'1444 (0'0,84'1444] local-lis/les=61/62 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=108) [2]/[0] r=0 lpr=108 pi=[61,108)/1 crt=83'1443 lcod 83'1443 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:35 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 108 pg[9.19( v 84'1444 (0'0,84'1444] local-lis/les=61/62 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=108) [2]/[0] r=0 lpr=108 pi=[61,108)/1 crt=83'1443 lcod 83'1443 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:35 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[61,108)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:35 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[61,108)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:35 np0005603435 python3.9[100547]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:21:36 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 30 23:21:36 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 30 23:21:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:21:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 30 23:21:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 30 23:21:36 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 30 23:21:36 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 109 pg[9.19( v 84'1444 (0'0,84'1444] local-lis/les=108/109 n=7 ec=53/37 lis/c=61/61 les/c/f=62/62/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[61,108)/1 crt=84'1444 lcod 83'1443 mlcod 0'0 active+remapped mbc={255={(0+1)=13}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:21:36 np0005603435 python3.9[100700]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:21:36 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 30 23:21:36 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 30 23:21:36 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 30 23:21:36 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 30 23:21:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:21:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:21:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:21:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:21:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:21:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:21:37 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 30 23:21:37 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 30 23:21:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 30 23:21:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 30 23:21:37 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 30 23:21:37 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 110 pg[9.19( v 84'1444 (0'0,84'1444] local-lis/les=108/109 n=7 ec=53/37 lis/c=108/61 les/c/f=109/62/0 sis=110 pruub=14.983416557s) [2] async=[2] r=-1 lpr=110 pi=[61,110)/1 crt=84'1444 lcod 83'1443 active pruub 178.778549194s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:37 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 110 pg[9.19( v 84'1444 (0'0,84'1444] local-lis/les=108/109 n=7 ec=53/37 lis/c=108/61 les/c/f=109/62/0 sis=110 pruub=14.983267784s) [2] r=-1 lpr=110 pi=[61,110)/1 crt=84'1444 lcod 83'1443 unknown NOTIFY pruub 178.778549194s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:37 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 110 pg[9.19( v 84'1444 (0'0,84'1444] local-lis/les=0/0 n=7 ec=53/37 lis/c=108/61 les/c/f=109/62/0 sis=110) [2] r=0 lpr=110 pi=[61,110)/1 pct=0'0 crt=84'1444 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:37 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 110 pg[9.19( v 84'1444 (0'0,84'1444] local-lis/les=0/0 n=7 ec=53/37 lis/c=108/61 les/c/f=109/62/0 sis=110) [2] r=0 lpr=110 pi=[61,110)/1 crt=84'1444 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:37 np0005603435 python3.9[100854]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:21:38 np0005603435 python3.9[101006]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:21:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:21:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 30 23:21:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 30 23:21:38 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 30 23:21:38 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 111 pg[9.19( v 84'1444 (0'0,84'1444] local-lis/les=110/111 n=7 ec=53/37 lis/c=108/61 les/c/f=109/62/0 sis=110) [2] r=0 lpr=110 pi=[61,110)/1 crt=84'1444 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:21:38 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 30 23:21:38 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 30 23:21:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:21:39 np0005603435 python3.9[101156]: ansible-ansible.builtin.service_facts Invoked
Jan 30 23:21:39 np0005603435 network[101173]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 30 23:21:39 np0005603435 network[101174]: 'network-scripts' will be removed from distribution in near future.
Jan 30 23:21:39 np0005603435 network[101175]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 30 23:21:39 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 30 23:21:39 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 30 23:21:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 419 B/s wr, 27 op/s; 170 B/s, 5 objects/s recovering
Jan 30 23:21:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 30 23:21:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 30 23:21:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 30 23:21:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 30 23:21:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 30 23:21:40 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 30 23:21:40 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 30 23:21:40 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 30 23:21:40 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 30 23:21:41 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 30 23:21:41 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 30 23:21:41 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 30 23:21:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 350 B/s wr, 23 op/s; 142 B/s, 4 objects/s recovering
Jan 30 23:21:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 30 23:21:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 30 23:21:42 np0005603435 python3.9[101435]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:21:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 30 23:21:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 30 23:21:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 30 23:21:42 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 30 23:21:42 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 30 23:21:43 np0005603435 python3.9[101585]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:21:43 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 30 23:21:43 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 30 23:21:43 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 30 23:21:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:21:44 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 30 23:21:44 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 30 23:21:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 341 B/s wr, 22 op/s; 138 B/s, 4 objects/s recovering
Jan 30 23:21:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 30 23:21:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 30 23:21:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 30 23:21:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 30 23:21:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 30 23:21:44 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 114 pg[9.1c( v 84'1444 (0'0,84'1444] local-lis/les=83/84 n=7 ec=53/37 lis/c=83/83 les/c/f=84/84/0 sis=114 pruub=13.839522362s) [0] r=-1 lpr=114 pi=[83,114)/1 crt=84'1443 lcod 84'1443 active pruub 173.412292480s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:44 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 114 pg[9.1c( v 84'1444 (0'0,84'1444] local-lis/les=83/84 n=7 ec=53/37 lis/c=83/83 les/c/f=84/84/0 sis=114 pruub=13.839444160s) [0] r=-1 lpr=114 pi=[83,114)/1 crt=84'1443 lcod 84'1443 unknown NOTIFY pruub 173.412292480s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:44 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 30 23:21:44 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 114 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=83/83 les/c/f=84/84/0 sis=114) [0] r=0 lpr=114 pi=[83,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:44 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 30 23:21:44 np0005603435 python3.9[101739]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:21:44 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 30 23:21:44 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 30 23:21:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 30 23:21:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 30 23:21:45 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 30 23:21:45 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 115 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=83/83 les/c/f=84/84/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[83,115)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:45 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 115 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=83/83 les/c/f=84/84/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[83,115)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:45 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 115 pg[9.1c( v 84'1444 (0'0,84'1444] local-lis/les=83/84 n=7 ec=53/37 lis/c=83/83 les/c/f=84/84/0 sis=115) [0]/[2] r=0 lpr=115 pi=[83,115)/1 crt=84'1443 lcod 84'1443 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:45 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 115 pg[9.1c( v 84'1444 (0'0,84'1444] local-lis/les=83/84 n=7 ec=53/37 lis/c=83/83 les/c/f=84/84/0 sis=115) [0]/[2] r=0 lpr=115 pi=[83,115)/1 crt=84'1443 lcod 84'1443 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:45 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 30 23:21:45 np0005603435 python3.9[101897]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:21:45 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 30 23:21:45 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 30 23:21:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:21:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 30 23:21:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 30 23:21:46 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 30 23:21:46 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 30 23:21:46 np0005603435 python3.9[101981]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:21:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 30 23:21:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 30 23:21:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 30 23:21:46 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 30 23:21:46 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 30 23:21:46 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 30 23:21:46 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 30 23:21:46 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 30 23:21:47 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 116 pg[9.1c( v 84'1444 (0'0,84'1444] local-lis/les=115/116 n=7 ec=53/37 lis/c=83/83 les/c/f=84/84/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[83,115)/1 crt=84'1444 lcod 84'1443 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:21:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 30 23:21:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 30 23:21:47 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 30 23:21:47 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 117 pg[9.1c( v 84'1444 (0'0,84'1444] local-lis/les=115/116 n=7 ec=53/37 lis/c=115/83 les/c/f=116/84/0 sis=117 pruub=15.778440475s) [0] async=[0] r=-1 lpr=117 pi=[83,117)/1 crt=84'1444 lcod 84'1443 active pruub 178.403686523s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:47 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 117 pg[9.1c( v 84'1444 (0'0,84'1444] local-lis/les=115/116 n=7 ec=53/37 lis/c=115/83 les/c/f=116/84/0 sis=117 pruub=15.778329849s) [0] r=-1 lpr=117 pi=[83,117)/1 crt=84'1444 lcod 84'1443 unknown NOTIFY pruub 178.403686523s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:47 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 117 pg[9.1c( v 84'1444 (0'0,84'1444] local-lis/les=0/0 n=7 ec=53/37 lis/c=115/83 les/c/f=116/84/0 sis=117) [0] r=0 lpr=117 pi=[83,117)/1 pct=0'0 crt=84'1444 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:47 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 117 pg[9.1c( v 84'1444 (0'0,84'1444] local-lis/les=0/0 n=7 ec=53/37 lis/c=115/83 les/c/f=116/84/0 sis=117) [0] r=0 lpr=117 pi=[83,117)/1 crt=84'1444 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:47 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 30 23:21:47 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 30 23:21:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:21:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 30 23:21:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 30 23:21:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 30 23:21:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 30 23:21:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 30 23:21:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 30 23:21:48 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 30 23:21:48 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 118 pg[9.1c( v 84'1444 (0'0,84'1444] local-lis/les=117/118 n=7 ec=53/37 lis/c=115/83 les/c/f=116/84/0 sis=117) [0] r=0 lpr=117 pi=[83,117)/1 crt=84'1444 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:21:48 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 30 23:21:48 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 118 pg[9.1e( v 83'1442 (0'0,83'1442] local-lis/les=70/71 n=7 ec=53/37 lis/c=70/70 les/c/f=71/71/0 sis=118 pruub=12.587255478s) [0] r=-1 lpr=118 pi=[70,118)/1 crt=82'1441 lcod 82'1441 active pruub 176.246368408s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:48 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 118 pg[9.1e( v 83'1442 (0'0,83'1442] local-lis/les=70/71 n=7 ec=53/37 lis/c=70/70 les/c/f=71/71/0 sis=118 pruub=12.587203979s) [0] r=-1 lpr=118 pi=[70,118)/1 crt=82'1441 lcod 82'1441 unknown NOTIFY pruub 176.246368408s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:48 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 118 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=70/70 les/c/f=71/71/0 sis=118) [0] r=0 lpr=118 pi=[70,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:48 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 30 23:21:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:21:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 30 23:21:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 30 23:21:49 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 30 23:21:49 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 119 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=70/70 les/c/f=71/71/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[70,119)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:49 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 119 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=70/70 les/c/f=71/71/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[70,119)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:49 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 119 pg[9.1e( v 83'1442 (0'0,83'1442] local-lis/les=70/71 n=7 ec=53/37 lis/c=70/70 les/c/f=71/71/0 sis=119) [0]/[2] r=0 lpr=119 pi=[70,119)/1 crt=82'1441 lcod 82'1441 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:49 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 119 pg[9.1e( v 83'1442 (0'0,83'1442] local-lis/les=70/71 n=7 ec=53/37 lis/c=70/70 les/c/f=71/71/0 sis=119) [0]/[2] r=0 lpr=119 pi=[70,119)/1 crt=82'1441 lcod 82'1441 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:49 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 30 23:21:49 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 30 23:21:49 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 30 23:21:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 30 23:21:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 30 23:21:50 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 30 23:21:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:21:50 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 120 pg[9.1e( v 83'1442 (0'0,83'1442] local-lis/les=119/120 n=7 ec=53/37 lis/c=70/70 les/c/f=71/71/0 sis=119) [0]/[2] async=[0] r=0 lpr=119 pi=[70,119)/1 crt=83'1442 lcod 82'1441 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:21:50 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 30 23:21:50 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 30 23:21:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 30 23:21:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 30 23:21:51 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 30 23:21:51 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 121 pg[9.1e( v 83'1442 (0'0,83'1442] local-lis/les=119/120 n=7 ec=53/37 lis/c=119/70 les/c/f=120/71/0 sis=121 pruub=15.262299538s) [0] async=[0] r=-1 lpr=121 pi=[70,121)/1 crt=83'1442 lcod 82'1441 active pruub 181.373977661s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:51 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 121 pg[9.1e( v 83'1442 (0'0,83'1442] local-lis/les=119/120 n=7 ec=53/37 lis/c=119/70 les/c/f=120/71/0 sis=121 pruub=15.262155533s) [0] r=-1 lpr=121 pi=[70,121)/1 crt=83'1442 lcod 82'1441 unknown NOTIFY pruub 181.373977661s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:51 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 121 pg[9.1e( v 83'1442 (0'0,83'1442] local-lis/les=0/0 n=7 ec=53/37 lis/c=119/70 les/c/f=120/71/0 sis=121) [0] r=0 lpr=121 pi=[70,121)/1 pct=0'0 crt=83'1442 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:51 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 121 pg[9.1e( v 83'1442 (0'0,83'1442] local-lis/les=0/0 n=7 ec=53/37 lis/c=119/70 les/c/f=120/71/0 sis=121) [0] r=0 lpr=121 pi=[70,121)/1 crt=83'1442 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:51 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 30 23:21:51 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 30 23:21:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 30 23:21:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 30 23:21:52 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 30 23:21:52 np0005603435 ceph-osd[85822]: osd.0 pg_epoch: 122 pg[9.1e( v 83'1442 (0'0,83'1442] local-lis/les=121/122 n=7 ec=53/37 lis/c=119/70 les/c/f=120/71/0 sis=121) [0] r=0 lpr=121 pi=[70,121)/1 crt=83'1442 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:21:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 154 B/s, 4 objects/s recovering
Jan 30 23:21:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:21:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 3 objects/s recovering
Jan 30 23:21:54 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 30 23:21:54 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 30 23:21:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 91 B/s, 2 objects/s recovering
Jan 30 23:21:56 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 30 23:21:56 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 30 23:21:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 71 B/s, 1 objects/s recovering
Jan 30 23:21:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 30 23:21:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:21:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 30 23:21:58 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 30 23:21:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:21:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 30 23:21:58 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 30 23:21:58 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 123 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=72/73 n=7 ec=53/37 lis/c=72/72 les/c/f=73/73/0 sis=123 pruub=12.835497856s) [1] r=-1 lpr=123 pi=[72,123)/1 crt=43'1440 active pruub 186.247772217s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:58 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 123 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=72/73 n=7 ec=53/37 lis/c=72/72 les/c/f=73/73/0 sis=123 pruub=12.835448265s) [1] r=-1 lpr=123 pi=[72,123)/1 crt=43'1440 unknown NOTIFY pruub 186.247772217s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:58 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 123 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=72/72 les/c/f=73/73/0 sis=123) [1] r=0 lpr=123 pi=[72,123)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:58 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 30 23:21:58 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 30 23:21:58 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 30 23:21:58 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 30 23:21:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:21:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 30 23:21:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 30 23:21:59 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 30 23:21:59 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 124 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=72/73 n=7 ec=53/37 lis/c=72/72 les/c/f=73/73/0 sis=124) [1]/[2] r=0 lpr=124 pi=[72,124)/1 crt=43'1440 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:59 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 124 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=72/73 n=7 ec=53/37 lis/c=72/72 les/c/f=73/73/0 sis=124) [1]/[2] r=0 lpr=124 pi=[72,124)/1 crt=43'1440 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 30 23:21:59 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 124 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=72/72 les/c/f=73/73/0 sis=124) [1]/[2] r=-1 lpr=124 pi=[72,124)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:21:59 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 124 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/37 lis/c=72/72 les/c/f=73/73/0 sis=124) [1]/[2] r=-1 lpr=124 pi=[72,124)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 30 23:21:59 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 30 23:21:59 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 30 23:21:59 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 30 23:22:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 30 23:22:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 30 23:22:00 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 30 23:22:00 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 125 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=124/125 n=7 ec=53/37 lis/c=72/72 les/c/f=73/73/0 sis=124) [1]/[2] async=[1] r=0 lpr=124 pi=[72,124)/1 crt=43'1440 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:22:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:00 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 30 23:22:00 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 30 23:22:00 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 30 23:22:00 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 30 23:22:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 30 23:22:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 30 23:22:01 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 30 23:22:01 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 126 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=124/125 n=7 ec=53/37 lis/c=124/72 les/c/f=125/73/0 sis=126 pruub=14.993059158s) [1] async=[1] r=-1 lpr=126 pi=[72,126)/1 crt=43'1440 active pruub 191.101913452s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:22:01 np0005603435 ceph-osd[87920]: osd.2 pg_epoch: 126 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=124/125 n=7 ec=53/37 lis/c=124/72 les/c/f=125/73/0 sis=126 pruub=14.992961884s) [1] r=-1 lpr=126 pi=[72,126)/1 crt=43'1440 unknown NOTIFY pruub 191.101913452s@ mbc={}] state<Start>: transitioning to Stray
Jan 30 23:22:01 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 126 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=124/72 les/c/f=125/73/0 sis=126) [1] r=0 lpr=126 pi=[72,126)/1 pct=0'0 crt=43'1440 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 30 23:22:01 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 126 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=0/0 n=7 ec=53/37 lis/c=124/72 les/c/f=125/73/0 sis=126) [1] r=0 lpr=126 pi=[72,126)/1 crt=43'1440 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 30 23:22:01 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 30 23:22:01 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 30 23:22:01 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 30 23:22:01 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 30 23:22:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 30 23:22:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 30 23:22:02 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 30 23:22:02 np0005603435 ceph-osd[86873]: osd.1 pg_epoch: 127 pg[9.1f( v 43'1440 (0'0,43'1440] local-lis/les=126/127 n=7 ec=53/37 lis/c=124/72 les/c/f=125/73/0 sis=126) [1] r=0 lpr=126 pi=[72,126)/1 crt=43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 30 23:22:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 1 objects/s recovering
Jan 30 23:22:02 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 30 23:22:02 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 30 23:22:02 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 30 23:22:02 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 30 23:22:02 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 30 23:22:02 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 30 23:22:03 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.b scrub starts
Jan 30 23:22:03 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.b scrub ok
Jan 30 23:22:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:22:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 1 objects/s recovering
Jan 30 23:22:05 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 30 23:22:05 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:22:06
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'images', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta']
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 0 objects/s recovering
Jan 30 23:22:06 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 30 23:22:06 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:22:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:22:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Jan 30 23:22:08 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 30 23:22:08 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 30 23:22:08 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 30 23:22:08 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 30 23:22:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:22:09 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 30 23:22:09 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 30 23:22:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Jan 30 23:22:10 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 30 23:22:10 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 30 23:22:11 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 30 23:22:11 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 30 23:22:11 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 30 23:22:11 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 30 23:22:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Jan 30 23:22:13 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 30 23:22:13 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 30 23:22:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:22:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:14 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 30 23:22:14 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 30 23:22:15 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 30 23:22:15 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 30 23:22:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:22:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:22:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:22:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:22:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:22:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:22:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:22:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:22:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:22:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:22:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:22:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:16 np0005603435 podman[102275]: 2026-01-31 04:22:16.370170212 +0000 UTC m=+0.050773437 container create c21027b12b93a189eaf1c3501e00869717093fa81c14bf7760daa2d0a11c533e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:22:16 np0005603435 systemd[1]: Started libpod-conmon-c21027b12b93a189eaf1c3501e00869717093fa81c14bf7760daa2d0a11c533e.scope.
Jan 30 23:22:16 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:22:16 np0005603435 podman[102275]: 2026-01-31 04:22:16.343412366 +0000 UTC m=+0.024015631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:22:16 np0005603435 podman[102275]: 2026-01-31 04:22:16.440574989 +0000 UTC m=+0.121178184 container init c21027b12b93a189eaf1c3501e00869717093fa81c14bf7760daa2d0a11c533e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:22:16 np0005603435 podman[102275]: 2026-01-31 04:22:16.444948606 +0000 UTC m=+0.125551781 container start c21027b12b93a189eaf1c3501e00869717093fa81c14bf7760daa2d0a11c533e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 30 23:22:16 np0005603435 podman[102275]: 2026-01-31 04:22:16.447880258 +0000 UTC m=+0.128483433 container attach c21027b12b93a189eaf1c3501e00869717093fa81c14bf7760daa2d0a11c533e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:22:16 np0005603435 ecstatic_keldysh[102292]: 167 167
Jan 30 23:22:16 np0005603435 systemd[1]: libpod-c21027b12b93a189eaf1c3501e00869717093fa81c14bf7760daa2d0a11c533e.scope: Deactivated successfully.
Jan 30 23:22:16 np0005603435 conmon[102292]: conmon c21027b12b93a189eaf1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c21027b12b93a189eaf1c3501e00869717093fa81c14bf7760daa2d0a11c533e.scope/container/memory.events
Jan 30 23:22:16 np0005603435 podman[102275]: 2026-01-31 04:22:16.451348163 +0000 UTC m=+0.131951338 container died c21027b12b93a189eaf1c3501e00869717093fa81c14bf7760daa2d0a11c533e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:22:16 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:22:16 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:22:16 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:22:16 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 30 23:22:16 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 30 23:22:16 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d15a3399c64e51c03e0018935755fa1cf97e2fe090cc9e39ce08f473ca00658a-merged.mount: Deactivated successfully.
Jan 30 23:22:16 np0005603435 podman[102275]: 2026-01-31 04:22:16.500608981 +0000 UTC m=+0.181212156 container remove c21027b12b93a189eaf1c3501e00869717093fa81c14bf7760daa2d0a11c533e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:22:16 np0005603435 systemd[1]: libpod-conmon-c21027b12b93a189eaf1c3501e00869717093fa81c14bf7760daa2d0a11c533e.scope: Deactivated successfully.
Jan 30 23:22:16 np0005603435 podman[102316]: 2026-01-31 04:22:16.637029877 +0000 UTC m=+0.058268030 container create 4865d550640a26a881e77783f46ccff73af2e062854d9c9bf764afa1c149169c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_rubin, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:22:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:22:16 np0005603435 systemd[1]: Started libpod-conmon-4865d550640a26a881e77783f46ccff73af2e062854d9c9bf764afa1c149169c.scope.
Jan 30 23:22:16 np0005603435 podman[102316]: 2026-01-31 04:22:16.611330737 +0000 UTC m=+0.032568950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:22:16 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:22:16 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c155fac379594297db940340c540fd6fba7136d71b7d32deddb31f8d71a731/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:22:16 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c155fac379594297db940340c540fd6fba7136d71b7d32deddb31f8d71a731/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:22:16 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c155fac379594297db940340c540fd6fba7136d71b7d32deddb31f8d71a731/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:22:16 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c155fac379594297db940340c540fd6fba7136d71b7d32deddb31f8d71a731/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:22:16 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c155fac379594297db940340c540fd6fba7136d71b7d32deddb31f8d71a731/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:22:16 np0005603435 podman[102316]: 2026-01-31 04:22:16.76312697 +0000 UTC m=+0.184365183 container init 4865d550640a26a881e77783f46ccff73af2e062854d9c9bf764afa1c149169c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 30 23:22:16 np0005603435 podman[102316]: 2026-01-31 04:22:16.769306252 +0000 UTC m=+0.190544405 container start 4865d550640a26a881e77783f46ccff73af2e062854d9c9bf764afa1c149169c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 30 23:22:16 np0005603435 podman[102316]: 2026-01-31 04:22:16.773152146 +0000 UTC m=+0.194390319 container attach 4865d550640a26a881e77783f46ccff73af2e062854d9c9bf764afa1c149169c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_rubin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:22:17 np0005603435 elated_rubin[102333]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:22:17 np0005603435 elated_rubin[102333]: --> All data devices are unavailable
Jan 30 23:22:17 np0005603435 systemd[1]: libpod-4865d550640a26a881e77783f46ccff73af2e062854d9c9bf764afa1c149169c.scope: Deactivated successfully.
Jan 30 23:22:17 np0005603435 podman[102316]: 2026-01-31 04:22:17.207031948 +0000 UTC m=+0.628270071 container died 4865d550640a26a881e77783f46ccff73af2e062854d9c9bf764afa1c149169c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 30 23:22:17 np0005603435 systemd[1]: var-lib-containers-storage-overlay-87c155fac379594297db940340c540fd6fba7136d71b7d32deddb31f8d71a731-merged.mount: Deactivated successfully.
Jan 30 23:22:17 np0005603435 podman[102316]: 2026-01-31 04:22:17.247368657 +0000 UTC m=+0.668606800 container remove 4865d550640a26a881e77783f46ccff73af2e062854d9c9bf764afa1c149169c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_rubin, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 30 23:22:17 np0005603435 systemd[1]: libpod-conmon-4865d550640a26a881e77783f46ccff73af2e062854d9c9bf764afa1c149169c.scope: Deactivated successfully.
Jan 30 23:22:17 np0005603435 podman[102428]: 2026-01-31 04:22:17.667308007 +0000 UTC m=+0.039809487 container create dc8214f95bd601a25795be4942b766451b23d14d1885b8699a56c636d84a17d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:22:17 np0005603435 systemd[1]: Started libpod-conmon-dc8214f95bd601a25795be4942b766451b23d14d1885b8699a56c636d84a17d2.scope.
Jan 30 23:22:17 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:22:17 np0005603435 podman[102428]: 2026-01-31 04:22:17.725557946 +0000 UTC m=+0.098059436 container init dc8214f95bd601a25795be4942b766451b23d14d1885b8699a56c636d84a17d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_napier, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 30 23:22:17 np0005603435 podman[102428]: 2026-01-31 04:22:17.732400654 +0000 UTC m=+0.104902134 container start dc8214f95bd601a25795be4942b766451b23d14d1885b8699a56c636d84a17d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 30 23:22:17 np0005603435 podman[102428]: 2026-01-31 04:22:17.736555186 +0000 UTC m=+0.109056706 container attach dc8214f95bd601a25795be4942b766451b23d14d1885b8699a56c636d84a17d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_napier, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 30 23:22:17 np0005603435 podman[102428]: 2026-01-31 04:22:17.64785605 +0000 UTC m=+0.020357560 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:22:17 np0005603435 vigilant_napier[102445]: 167 167
Jan 30 23:22:17 np0005603435 systemd[1]: libpod-dc8214f95bd601a25795be4942b766451b23d14d1885b8699a56c636d84a17d2.scope: Deactivated successfully.
Jan 30 23:22:17 np0005603435 podman[102428]: 2026-01-31 04:22:17.752178009 +0000 UTC m=+0.124679489 container died dc8214f95bd601a25795be4942b766451b23d14d1885b8699a56c636d84a17d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_napier, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:22:17 np0005603435 systemd[1]: var-lib-containers-storage-overlay-347b2ceb2cadfa43ec00e04f7c31c5627efc161add68cf237de4579cdc560a68-merged.mount: Deactivated successfully.
Jan 30 23:22:17 np0005603435 podman[102428]: 2026-01-31 04:22:17.789346911 +0000 UTC m=+0.161848391 container remove dc8214f95bd601a25795be4942b766451b23d14d1885b8699a56c636d84a17d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 30 23:22:17 np0005603435 systemd[1]: libpod-conmon-dc8214f95bd601a25795be4942b766451b23d14d1885b8699a56c636d84a17d2.scope: Deactivated successfully.
Jan 30 23:22:17 np0005603435 podman[102469]: 2026-01-31 04:22:17.98504013 +0000 UTC m=+0.053264727 container create b421875d835304ae5ecf6b0c57b09239ea2d994c3f1985b23a5cb77fabb47fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_clarke, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:22:18 np0005603435 systemd[1]: Started libpod-conmon-b421875d835304ae5ecf6b0c57b09239ea2d994c3f1985b23a5cb77fabb47fa2.scope.
Jan 30 23:22:18 np0005603435 podman[102469]: 2026-01-31 04:22:17.956172512 +0000 UTC m=+0.024397159 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:22:18 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:22:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8ec571295e2ff87a882ae2de5953d593fb34b9c2858f1f4c91adbf8dc7b574/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:22:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8ec571295e2ff87a882ae2de5953d593fb34b9c2858f1f4c91adbf8dc7b574/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:22:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8ec571295e2ff87a882ae2de5953d593fb34b9c2858f1f4c91adbf8dc7b574/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:22:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8ec571295e2ff87a882ae2de5953d593fb34b9c2858f1f4c91adbf8dc7b574/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:22:18 np0005603435 podman[102469]: 2026-01-31 04:22:18.100911012 +0000 UTC m=+0.169135639 container init b421875d835304ae5ecf6b0c57b09239ea2d994c3f1985b23a5cb77fabb47fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_clarke, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:22:18 np0005603435 podman[102469]: 2026-01-31 04:22:18.109696338 +0000 UTC m=+0.177920945 container start b421875d835304ae5ecf6b0c57b09239ea2d994c3f1985b23a5cb77fabb47fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 30 23:22:18 np0005603435 podman[102469]: 2026-01-31 04:22:18.114127637 +0000 UTC m=+0.182352294 container attach b421875d835304ae5ecf6b0c57b09239ea2d994c3f1985b23a5cb77fabb47fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_clarke, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 30 23:22:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]: {
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:    "0": [
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:        {
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "devices": [
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "/dev/loop3"
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            ],
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "lv_name": "ceph_lv0",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "lv_size": "21470642176",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "name": "ceph_lv0",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "tags": {
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.cluster_name": "ceph",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.crush_device_class": "",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.encrypted": "0",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.objectstore": "bluestore",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.osd_id": "0",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.type": "block",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.vdo": "0",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.with_tpm": "0"
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            },
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "type": "block",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "vg_name": "ceph_vg0"
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:        }
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:    ],
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:    "1": [
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:        {
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "devices": [
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "/dev/loop4"
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            ],
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "lv_name": "ceph_lv1",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "lv_size": "21470642176",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "name": "ceph_lv1",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "tags": {
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.cluster_name": "ceph",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.crush_device_class": "",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.encrypted": "0",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.objectstore": "bluestore",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.osd_id": "1",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.type": "block",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.vdo": "0",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.with_tpm": "0"
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            },
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "type": "block",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "vg_name": "ceph_vg1"
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:        }
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:    ],
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:    "2": [
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:        {
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "devices": [
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "/dev/loop5"
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            ],
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "lv_name": "ceph_lv2",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "lv_size": "21470642176",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "name": "ceph_lv2",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "tags": {
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.cluster_name": "ceph",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.crush_device_class": "",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.encrypted": "0",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.objectstore": "bluestore",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.osd_id": "2",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.type": "block",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.vdo": "0",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:                "ceph.with_tpm": "0"
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            },
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "type": "block",
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:            "vg_name": "ceph_vg2"
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:        }
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]:    ]
Jan 30 23:22:18 np0005603435 gallant_clarke[102486]: }
Jan 30 23:22:18 np0005603435 systemd[1]: libpod-b421875d835304ae5ecf6b0c57b09239ea2d994c3f1985b23a5cb77fabb47fa2.scope: Deactivated successfully.
Jan 30 23:22:18 np0005603435 podman[102469]: 2026-01-31 04:22:18.456848173 +0000 UTC m=+0.525072790 container died b421875d835304ae5ecf6b0c57b09239ea2d994c3f1985b23a5cb77fabb47fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 30 23:22:18 np0005603435 systemd[1]: var-lib-containers-storage-overlay-ac8ec571295e2ff87a882ae2de5953d593fb34b9c2858f1f4c91adbf8dc7b574-merged.mount: Deactivated successfully.
Jan 30 23:22:18 np0005603435 podman[102469]: 2026-01-31 04:22:18.515471361 +0000 UTC m=+0.583695978 container remove b421875d835304ae5ecf6b0c57b09239ea2d994c3f1985b23a5cb77fabb47fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:22:18 np0005603435 systemd[1]: libpod-conmon-b421875d835304ae5ecf6b0c57b09239ea2d994c3f1985b23a5cb77fabb47fa2.scope: Deactivated successfully.
Jan 30 23:22:19 np0005603435 podman[102571]: 2026-01-31 04:22:19.055782933 +0000 UTC m=+0.056092727 container create 5d2fbc58596a09cc85083f3e80ab967167674237bf6e5828af29f4ffea34e22f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:22:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:22:19 np0005603435 systemd[1]: Started libpod-conmon-5d2fbc58596a09cc85083f3e80ab967167674237bf6e5828af29f4ffea34e22f.scope.
Jan 30 23:22:19 np0005603435 podman[102571]: 2026-01-31 04:22:19.025814928 +0000 UTC m=+0.026124782 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:22:19 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:22:19 np0005603435 podman[102571]: 2026-01-31 04:22:19.145118584 +0000 UTC m=+0.145428438 container init 5d2fbc58596a09cc85083f3e80ab967167674237bf6e5828af29f4ffea34e22f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_banzai, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 30 23:22:19 np0005603435 podman[102571]: 2026-01-31 04:22:19.151260515 +0000 UTC m=+0.151570299 container start 5d2fbc58596a09cc85083f3e80ab967167674237bf6e5828af29f4ffea34e22f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 30 23:22:19 np0005603435 laughing_banzai[102589]: 167 167
Jan 30 23:22:19 np0005603435 podman[102571]: 2026-01-31 04:22:19.155342605 +0000 UTC m=+0.155652439 container attach 5d2fbc58596a09cc85083f3e80ab967167674237bf6e5828af29f4ffea34e22f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_banzai, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:22:19 np0005603435 systemd[1]: libpod-5d2fbc58596a09cc85083f3e80ab967167674237bf6e5828af29f4ffea34e22f.scope: Deactivated successfully.
Jan 30 23:22:19 np0005603435 podman[102571]: 2026-01-31 04:22:19.156928734 +0000 UTC m=+0.157238528 container died 5d2fbc58596a09cc85083f3e80ab967167674237bf6e5828af29f4ffea34e22f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:22:19 np0005603435 systemd[1]: var-lib-containers-storage-overlay-cc497586e0c399033b0c4400844d5adf69aba465be41ea1e13aa40b1f3a41c87-merged.mount: Deactivated successfully.
Jan 30 23:22:19 np0005603435 podman[102571]: 2026-01-31 04:22:19.200514503 +0000 UTC m=+0.200824257 container remove 5d2fbc58596a09cc85083f3e80ab967167674237bf6e5828af29f4ffea34e22f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_banzai, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 30 23:22:19 np0005603435 systemd[1]: libpod-conmon-5d2fbc58596a09cc85083f3e80ab967167674237bf6e5828af29f4ffea34e22f.scope: Deactivated successfully.
Jan 30 23:22:19 np0005603435 podman[102613]: 2026-01-31 04:22:19.343730946 +0000 UTC m=+0.042725659 container create 4ae473524be4bc23711ebe7ab8d1e9f2095901f8ec7bc9b1b6168f64bbb42c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 30 23:22:19 np0005603435 systemd[1]: Started libpod-conmon-4ae473524be4bc23711ebe7ab8d1e9f2095901f8ec7bc9b1b6168f64bbb42c30.scope.
Jan 30 23:22:19 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:22:19 np0005603435 podman[102613]: 2026-01-31 04:22:19.322974447 +0000 UTC m=+0.021969180 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:22:19 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7775ca6d5de2da65d0362e7cfbc568763bca3a791f6b8bbffa5f9a8f0e315dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:22:19 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7775ca6d5de2da65d0362e7cfbc568763bca3a791f6b8bbffa5f9a8f0e315dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:22:19 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7775ca6d5de2da65d0362e7cfbc568763bca3a791f6b8bbffa5f9a8f0e315dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:22:19 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7775ca6d5de2da65d0362e7cfbc568763bca3a791f6b8bbffa5f9a8f0e315dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:22:19 np0005603435 podman[102613]: 2026-01-31 04:22:19.44258074 +0000 UTC m=+0.141575513 container init 4ae473524be4bc23711ebe7ab8d1e9f2095901f8ec7bc9b1b6168f64bbb42c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 30 23:22:19 np0005603435 podman[102613]: 2026-01-31 04:22:19.451771946 +0000 UTC m=+0.150766689 container start 4ae473524be4bc23711ebe7ab8d1e9f2095901f8ec7bc9b1b6168f64bbb42c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 30 23:22:19 np0005603435 podman[102613]: 2026-01-31 04:22:19.459850994 +0000 UTC m=+0.158845787 container attach 4ae473524be4bc23711ebe7ab8d1e9f2095901f8ec7bc9b1b6168f64bbb42c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 30 23:22:20 np0005603435 lvm[102706]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:22:20 np0005603435 lvm[102706]: VG ceph_vg0 finished
Jan 30 23:22:20 np0005603435 lvm[102709]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:22:20 np0005603435 lvm[102709]: VG ceph_vg1 finished
Jan 30 23:22:20 np0005603435 lvm[102711]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:22:20 np0005603435 lvm[102711]: VG ceph_vg2 finished
Jan 30 23:22:20 np0005603435 lvm[102712]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:22:20 np0005603435 lvm[102712]: VG ceph_vg0 finished
Jan 30 23:22:20 np0005603435 youthful_hawking[102630]: {}
Jan 30 23:22:20 np0005603435 systemd[1]: libpod-4ae473524be4bc23711ebe7ab8d1e9f2095901f8ec7bc9b1b6168f64bbb42c30.scope: Deactivated successfully.
Jan 30 23:22:20 np0005603435 systemd[1]: libpod-4ae473524be4bc23711ebe7ab8d1e9f2095901f8ec7bc9b1b6168f64bbb42c30.scope: Consumed 1.292s CPU time.
Jan 30 23:22:20 np0005603435 podman[102613]: 2026-01-31 04:22:20.300630485 +0000 UTC m=+0.999625178 container died 4ae473524be4bc23711ebe7ab8d1e9f2095901f8ec7bc9b1b6168f64bbb42c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:22:20 np0005603435 systemd[1]: var-lib-containers-storage-overlay-f7775ca6d5de2da65d0362e7cfbc568763bca3a791f6b8bbffa5f9a8f0e315dc-merged.mount: Deactivated successfully.
Jan 30 23:22:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:20 np0005603435 podman[102613]: 2026-01-31 04:22:20.343096277 +0000 UTC m=+1.042091010 container remove 4ae473524be4bc23711ebe7ab8d1e9f2095901f8ec7bc9b1b6168f64bbb42c30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:22:20 np0005603435 systemd[1]: libpod-conmon-4ae473524be4bc23711ebe7ab8d1e9f2095901f8ec7bc9b1b6168f64bbb42c30.scope: Deactivated successfully.
Jan 30 23:22:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:22:20 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:22:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:22:20 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:22:21 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:22:21 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:22:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:22 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 30 23:22:22 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 30 23:22:23 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 30 23:22:23 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 30 23:22:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:22:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:24 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 30 23:22:24 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 30 23:22:24 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 30 23:22:24 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 30 23:22:25 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 30 23:22:25 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 30 23:22:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:26 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 30 23:22:26 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 30 23:22:27 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 30 23:22:27 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 30 23:22:27 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 30 23:22:27 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 30 23:22:28 np0005603435 python3.9[102902]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:22:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:28 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 30 23:22:28 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 30 23:22:28 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 30 23:22:28 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 30 23:22:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:22:29 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 30 23:22:29 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 30 23:22:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:30 np0005603435 python3.9[103189]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 30 23:22:30 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 30 23:22:30 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 30 23:22:31 np0005603435 python3.9[103341]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 30 23:22:31 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 30 23:22:31 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 30 23:22:31 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 30 23:22:31 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 30 23:22:31 np0005603435 python3.9[103493]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:22:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:32 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 30 23:22:32 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 30 23:22:32 np0005603435 python3.9[103645]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 30 23:22:33 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 30 23:22:33 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 30 23:22:33 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 30 23:22:33 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 30 23:22:34 np0005603435 python3.9[103797]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:22:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:22:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:34 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 30 23:22:34 np0005603435 python3.9[103949]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:22:34 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 30 23:22:35 np0005603435 python3.9[104027]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:22:36 np0005603435 python3.9[104179]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:22:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:22:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:22:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:22:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:22:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:22:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:22:37 np0005603435 python3.9[104333]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 30 23:22:37 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 30 23:22:37 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 30 23:22:37 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 30 23:22:37 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 30 23:22:37 np0005603435 python3.9[104486]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 30 23:22:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:38 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 30 23:22:38 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 30 23:22:39 np0005603435 python3.9[104639]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 30 23:22:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:22:39 np0005603435 python3.9[104791]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 30 23:22:39 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 30 23:22:39 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 30 23:22:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:40 np0005603435 python3.9[104943]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:22:40 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 30 23:22:40 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 30 23:22:41 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 30 23:22:41 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 30 23:22:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:42 np0005603435 python3.9[105096]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:22:42 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 30 23:22:42 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 30 23:22:43 np0005603435 python3.9[105248]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:22:43 np0005603435 python3.9[105326]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:22:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:22:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:44 np0005603435 python3.9[105478]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:22:44 np0005603435 python3.9[105556]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:22:45 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 30 23:22:45 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 30 23:22:45 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 30 23:22:45 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 30 23:22:45 np0005603435 python3.9[105709]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:22:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:46 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 30 23:22:46 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 30 23:22:47 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 30 23:22:47 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 30 23:22:47 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 30 23:22:47 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 30 23:22:47 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 30 23:22:47 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 30 23:22:47 np0005603435 python3.9[105860]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:22:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:48 np0005603435 python3.9[106012]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 30 23:22:48 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 30 23:22:48 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 30 23:22:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:22:49 np0005603435 python3.9[106162]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:22:49 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 30 23:22:49 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 30 23:22:49 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 30 23:22:49 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 30 23:22:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:50 np0005603435 python3.9[106314]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:22:50 np0005603435 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 30 23:22:50 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 30 23:22:50 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 30 23:22:50 np0005603435 systemd[1]: tuned.service: Deactivated successfully.
Jan 30 23:22:50 np0005603435 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 30 23:22:50 np0005603435 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 30 23:22:51 np0005603435 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 30 23:22:51 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 30 23:22:51 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 30 23:22:51 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 30 23:22:51 np0005603435 python3.9[106476]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 30 23:22:51 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 30 23:22:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:52 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 30 23:22:52 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 30 23:22:52 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 30 23:22:52 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 30 23:22:53 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 30 23:22:53 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 30 23:22:54 np0005603435 python3.9[106628]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:22:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:22:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:54 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 30 23:22:54 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 30 23:22:54 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 30 23:22:54 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 30 23:22:54 np0005603435 python3.9[106782]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:22:55 np0005603435 systemd[1]: session-35.scope: Deactivated successfully.
Jan 30 23:22:55 np0005603435 systemd[1]: session-35.scope: Consumed 1min 3.895s CPU time.
Jan 30 23:22:55 np0005603435 systemd-logind[816]: Session 35 logged out. Waiting for processes to exit.
Jan 30 23:22:55 np0005603435 systemd-logind[816]: Removed session 35.
Jan 30 23:22:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:56 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 30 23:22:56 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 30 23:22:57 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 30 23:22:57 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 30 23:22:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:22:58 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 30 23:22:58 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 30 23:22:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:22:59 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 30 23:22:59 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 30 23:23:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:00 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 30 23:23:00 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 30 23:23:00 np0005603435 systemd-logind[816]: New session 36 of user zuul.
Jan 30 23:23:00 np0005603435 systemd[1]: Started Session 36 of User zuul.
Jan 30 23:23:01 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 30 23:23:01 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 30 23:23:01 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 30 23:23:01 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 30 23:23:01 np0005603435 python3.9[106962]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:23:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:02 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 30 23:23:02 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 30 23:23:02 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 30 23:23:02 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 30 23:23:03 np0005603435 python3.9[107118]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 30 23:23:03 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 30 23:23:03 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 30 23:23:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:23:04 np0005603435 python3.9[107271]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:23:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:05 np0005603435 python3.9[107355]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 30 23:23:05 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 30 23:23:05 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 30 23:23:05 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 30 23:23:05 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:23:06
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.meta']
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:06 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 30 23:23:06 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 30 23:23:06 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 30 23:23:06 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:23:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:23:07 np0005603435 python3.9[107508]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:23:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:23:09 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 30 23:23:09 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 30 23:23:09 np0005603435 python3.9[107661]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 30 23:23:10 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 30 23:23:10 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 30 23:23:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:10 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 30 23:23:10 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 30 23:23:11 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 30 23:23:11 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 30 23:23:11 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 30 23:23:11 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 30 23:23:11 np0005603435 python3.9[107814]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:23:12 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 30 23:23:12 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 30 23:23:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:12 np0005603435 python3.9[107966]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 30 23:23:13 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 30 23:23:13 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 30 23:23:13 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 30 23:23:13 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 30 23:23:13 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 30 23:23:13 np0005603435 python3.9[108116]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:23:13 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 30 23:23:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:23:14 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 30 23:23:14 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 30 23:23:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:14 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 30 23:23:14 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 30 23:23:14 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 30 23:23:14 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 30 23:23:15 np0005603435 python3.9[108274]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:23:16 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 30 23:23:16 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:23:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:23:17 np0005603435 python3.9[108427]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:23:17 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 30 23:23:17 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 30 23:23:18 np0005603435 systemd[76696]: Created slice User Background Tasks Slice.
Jan 30 23:23:18 np0005603435 systemd[76696]: Starting Cleanup of User's Temporary Files and Directories...
Jan 30 23:23:18 np0005603435 systemd[76696]: Finished Cleanup of User's Temporary Files and Directories.
Jan 30 23:23:18 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 30 23:23:18 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 30 23:23:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:18 np0005603435 python3.9[108715]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 30 23:23:27 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:23:27 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:23:27 np0005603435 rsyslogd[1007]: imjournal: 323 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 30 23:23:27 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 30 23:23:27 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 30 23:23:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:28 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 30 23:23:28 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 30 23:23:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:23:29 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 30 23:23:29 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 30 23:23:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:31 np0005603435 systemd-logind[816]: New session 37 of user zuul.
Jan 30 23:23:31 np0005603435 systemd[1]: Started Session 37 of User zuul.
Jan 30 23:23:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:32 np0005603435 python3.9[110278]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:23:34 np0005603435 python3.9[110432]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:23:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:23:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:34 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 30 23:23:34 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 30 23:23:34 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 30 23:23:34 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 30 23:23:35 np0005603435 python3.9[110625]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:23:35 np0005603435 systemd[1]: session-37.scope: Deactivated successfully.
Jan 30 23:23:35 np0005603435 systemd[1]: session-37.scope: Consumed 2.560s CPU time.
Jan 30 23:23:35 np0005603435 systemd-logind[816]: Session 37 logged out. Waiting for processes to exit.
Jan 30 23:23:35 np0005603435 systemd-logind[816]: Removed session 37.
Jan 30 23:23:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:36 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 30 23:23:36 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 30 23:23:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:23:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:23:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:23:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:23:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:23:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:23:37 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 30 23:23:37 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 30 23:23:37 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 30 23:23:37 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 30 23:23:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:38 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 30 23:23:38 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 30 23:23:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:23:39 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 30 23:23:39 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 30 23:23:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:40 np0005603435 systemd-logind[816]: New session 38 of user zuul.
Jan 30 23:23:40 np0005603435 systemd[1]: Started Session 38 of User zuul.
Jan 30 23:23:41 np0005603435 python3.9[110804]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:23:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:42 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 30 23:23:42 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 30 23:23:42 np0005603435 python3.9[110958]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:23:43 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 30 23:23:43 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 30 23:23:43 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 30 23:23:43 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 30 23:23:43 np0005603435 python3.9[111114]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:23:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:23:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:45 np0005603435 python3.9[111198]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:23:45 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 30 23:23:45 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 30 23:23:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:46 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 30 23:23:46 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 30 23:23:47 np0005603435 python3.9[111351]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:23:47 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 30 23:23:47 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 30 23:23:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:48 np0005603435 python3.9[111546]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:23:48 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 30 23:23:48 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 30 23:23:48 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 30 23:23:48 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 30 23:23:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:23:49 np0005603435 python3.9[111698]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:23:50 np0005603435 python3.9[111863]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:23:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:50 np0005603435 python3.9[111941]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:23:51 np0005603435 python3.9[112093]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:23:51 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 30 23:23:51 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 30 23:23:51 np0005603435 python3.9[112171]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:23:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:52 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 30 23:23:52 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 30 23:23:52 np0005603435 python3.9[112323]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:23:53 np0005603435 python3.9[112475]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:23:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:23:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:54 np0005603435 python3.9[112627]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:23:54 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Jan 30 23:23:54 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Jan 30 23:23:55 np0005603435 python3.9[112779]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:23:55 np0005603435 python3.9[112931]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:23:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:57 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 30 23:23:57 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 30 23:23:58 np0005603435 python3.9[113084]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:23:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:23:58 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 30 23:23:58 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 30 23:23:59 np0005603435 python3.9[113238]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:23:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:23:59 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 30 23:23:59 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 30 23:23:59 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 30 23:23:59 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 30 23:24:00 np0005603435 python3.9[113390]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:24:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:00 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 30 23:24:00 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 30 23:24:00 np0005603435 python3.9[113542]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:24:01 np0005603435 python3.9[113695]: ansible-service_facts Invoked
Jan 30 23:24:01 np0005603435 network[113712]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 30 23:24:01 np0005603435 network[113713]: 'network-scripts' will be removed from distribution in near future.
Jan 30 23:24:01 np0005603435 network[113714]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 30 23:24:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:02 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 30 23:24:02 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 30 23:24:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:24:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:04 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 30 23:24:04 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 30 23:24:04 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 30 23:24:04 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 30 23:24:04 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 30 23:24:04 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 30 23:24:05 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Jan 30 23:24:05 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:24:06
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log']
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:06 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 30 23:24:06 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 30 23:24:06 np0005603435 python3.9[114166]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:24:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:24:07 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 30 23:24:07 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 30 23:24:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:08 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 30 23:24:08 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 30 23:24:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:24:09 np0005603435 python3.9[114319]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 30 23:24:09 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 30 23:24:09 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 30 23:24:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:10 np0005603435 python3.9[114471]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:24:11 np0005603435 python3.9[114549]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:11 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 30 23:24:11 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 30 23:24:11 np0005603435 python3.9[114701]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:24:12 np0005603435 python3.9[114779]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:13 np0005603435 python3.9[114931]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:24:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:14 np0005603435 python3.9[115083]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:24:16 np0005603435 python3.9[115167]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:24:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:24:16 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 30 23:24:16 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 30 23:24:16 np0005603435 systemd[1]: session-38.scope: Deactivated successfully.
Jan 30 23:24:16 np0005603435 systemd[1]: session-38.scope: Consumed 24.945s CPU time.
Jan 30 23:24:16 np0005603435 systemd-logind[816]: Session 38 logged out. Waiting for processes to exit.
Jan 30 23:24:16 np0005603435 systemd-logind[816]: Removed session 38.
Jan 30 23:24:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:24:19 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 30 23:24:19 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 30 23:24:19 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 30 23:24:19 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 30 23:24:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:20 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 30 23:24:20 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 30 23:24:22 np0005603435 systemd-logind[816]: New session 39 of user zuul.
Jan 30 23:24:22 np0005603435 systemd[1]: Started Session 39 of User zuul.
Jan 30 23:24:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:22 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 30 23:24:22 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 30 23:24:23 np0005603435 python3.9[115350]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:24:24.003587) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833464003700, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7250, "num_deletes": 251, "total_data_size": 9728132, "memory_usage": 9898736, "flush_reason": "Manual Compaction"}
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833464048670, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7679778, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7393, "table_properties": {"data_size": 7652768, "index_size": 17743, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 75868, "raw_average_key_size": 23, "raw_value_size": 7589618, "raw_average_value_size": 2320, "num_data_blocks": 779, "num_entries": 3270, "num_filter_entries": 3270, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833068, "oldest_key_time": 1769833068, "file_creation_time": 1769833464, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 45244 microseconds, and 17495 cpu microseconds.
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:24:24.048837) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7679778 bytes OK
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:24:24.048889) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:24:24.050450) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:24:24.050466) EVENT_LOG_v1 {"time_micros": 1769833464050461, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:24:24.050499) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9696571, prev total WAL file size 9696571, number of live WAL files 2.
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:24:24.051953) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7499KB) 13(58KB) 8(1944B)]
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833464052083, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7741682, "oldest_snapshot_seqno": -1}
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3096 keys, 7694630 bytes, temperature: kUnknown
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833464095915, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7694630, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7667991, "index_size": 17825, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7749, "raw_key_size": 74308, "raw_average_key_size": 24, "raw_value_size": 7606138, "raw_average_value_size": 2456, "num_data_blocks": 783, "num_entries": 3096, "num_filter_entries": 3096, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769833464, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:24:24.096445) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7694630 bytes
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:24:24.098166) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.8 rd, 174.7 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.4, 0.0 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3385, records dropped: 289 output_compression: NoCompression
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:24:24.098200) EVENT_LOG_v1 {"time_micros": 1769833464098183, "job": 4, "event": "compaction_finished", "compaction_time_micros": 44047, "compaction_time_cpu_micros": 16171, "output_level": 6, "num_output_files": 1, "total_output_size": 7694630, "num_input_records": 3385, "num_output_records": 3096, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833464099393, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833464099453, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833464099497, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:24:24.051808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:24:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:24:24 np0005603435 python3.9[115502]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:24:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:24 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 30 23:24:24 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 30 23:24:24 np0005603435 python3.9[115581]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:24 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 30 23:24:24 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 30 23:24:25 np0005603435 systemd[1]: session-39.scope: Deactivated successfully.
Jan 30 23:24:25 np0005603435 systemd[1]: session-39.scope: Consumed 1.752s CPU time.
Jan 30 23:24:25 np0005603435 systemd-logind[816]: Session 39 logged out. Waiting for processes to exit.
Jan 30 23:24:25 np0005603435 systemd-logind[816]: Removed session 39.
Jan 30 23:24:25 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 30 23:24:25 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 30 23:24:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:26 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 30 23:24:26 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 30 23:24:26 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 30 23:24:26 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 30 23:24:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:24:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:24:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:24:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:24:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:24:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:24:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:24:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:24:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:24:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:24:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:24:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:24:27 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:24:27 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:24:27 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:24:27 np0005603435 podman[115751]: 2026-01-31 04:24:27.313164497 +0000 UTC m=+0.065062270 container create 9e6ca466d807a44703b95c011324289a25a7af8f4ff4649a249e54f86ef5b1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_elion, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 30 23:24:27 np0005603435 systemd[1]: Started libpod-conmon-9e6ca466d807a44703b95c011324289a25a7af8f4ff4649a249e54f86ef5b1d7.scope.
Jan 30 23:24:27 np0005603435 podman[115751]: 2026-01-31 04:24:27.284745572 +0000 UTC m=+0.036643385 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:24:27 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:24:27 np0005603435 podman[115751]: 2026-01-31 04:24:27.414659405 +0000 UTC m=+0.166557228 container init 9e6ca466d807a44703b95c011324289a25a7af8f4ff4649a249e54f86ef5b1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_elion, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:24:27 np0005603435 podman[115751]: 2026-01-31 04:24:27.424899731 +0000 UTC m=+0.176797504 container start 9e6ca466d807a44703b95c011324289a25a7af8f4ff4649a249e54f86ef5b1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:24:27 np0005603435 podman[115751]: 2026-01-31 04:24:27.430427365 +0000 UTC m=+0.182325178 container attach 9e6ca466d807a44703b95c011324289a25a7af8f4ff4649a249e54f86ef5b1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:24:27 np0005603435 nostalgic_elion[115768]: 167 167
Jan 30 23:24:27 np0005603435 systemd[1]: libpod-9e6ca466d807a44703b95c011324289a25a7af8f4ff4649a249e54f86ef5b1d7.scope: Deactivated successfully.
Jan 30 23:24:27 np0005603435 conmon[115768]: conmon 9e6ca466d807a44703b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e6ca466d807a44703b95c011324289a25a7af8f4ff4649a249e54f86ef5b1d7.scope/container/memory.events
Jan 30 23:24:27 np0005603435 podman[115751]: 2026-01-31 04:24:27.434498239 +0000 UTC m=+0.186396022 container died 9e6ca466d807a44703b95c011324289a25a7af8f4ff4649a249e54f86ef5b1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_elion, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 30 23:24:27 np0005603435 systemd[1]: var-lib-containers-storage-overlay-4ec26299530eb3084206a8e6a7c37923bef03f73262ac252d55fee39b9b726fe-merged.mount: Deactivated successfully.
Jan 30 23:24:27 np0005603435 podman[115751]: 2026-01-31 04:24:27.485954598 +0000 UTC m=+0.237852371 container remove 9e6ca466d807a44703b95c011324289a25a7af8f4ff4649a249e54f86ef5b1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_elion, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 30 23:24:27 np0005603435 systemd[1]: libpod-conmon-9e6ca466d807a44703b95c011324289a25a7af8f4ff4649a249e54f86ef5b1d7.scope: Deactivated successfully.
Jan 30 23:24:27 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 30 23:24:27 np0005603435 podman[115792]: 2026-01-31 04:24:27.691158345 +0000 UTC m=+0.066608073 container create 852c98c84528b350aec32bc3b46167941ea70ee5faa8a505179d22ab935a4c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shannon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:24:27 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 30 23:24:27 np0005603435 systemd[1]: Started libpod-conmon-852c98c84528b350aec32bc3b46167941ea70ee5faa8a505179d22ab935a4c01.scope.
Jan 30 23:24:27 np0005603435 podman[115792]: 2026-01-31 04:24:27.66237705 +0000 UTC m=+0.037826818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:24:27 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:24:27 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed1b67cf7fa132192e64cf33094e8428d8502ee52fb459ecc1c94ff6bbdf3d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:24:27 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed1b67cf7fa132192e64cf33094e8428d8502ee52fb459ecc1c94ff6bbdf3d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:24:27 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed1b67cf7fa132192e64cf33094e8428d8502ee52fb459ecc1c94ff6bbdf3d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:24:27 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed1b67cf7fa132192e64cf33094e8428d8502ee52fb459ecc1c94ff6bbdf3d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:24:27 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed1b67cf7fa132192e64cf33094e8428d8502ee52fb459ecc1c94ff6bbdf3d5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:24:27 np0005603435 podman[115792]: 2026-01-31 04:24:27.793548938 +0000 UTC m=+0.168998676 container init 852c98c84528b350aec32bc3b46167941ea70ee5faa8a505179d22ab935a4c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shannon, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:24:27 np0005603435 podman[115792]: 2026-01-31 04:24:27.806497389 +0000 UTC m=+0.181947117 container start 852c98c84528b350aec32bc3b46167941ea70ee5faa8a505179d22ab935a4c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shannon, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 30 23:24:27 np0005603435 podman[115792]: 2026-01-31 04:24:27.809864814 +0000 UTC m=+0.185314542 container attach 852c98c84528b350aec32bc3b46167941ea70ee5faa8a505179d22ab935a4c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:24:28 np0005603435 jovial_shannon[115809]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:24:28 np0005603435 jovial_shannon[115809]: --> All data devices are unavailable
Jan 30 23:24:28 np0005603435 systemd[1]: libpod-852c98c84528b350aec32bc3b46167941ea70ee5faa8a505179d22ab935a4c01.scope: Deactivated successfully.
Jan 30 23:24:28 np0005603435 podman[115792]: 2026-01-31 04:24:28.377881614 +0000 UTC m=+0.753331342 container died 852c98c84528b350aec32bc3b46167941ea70ee5faa8a505179d22ab935a4c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shannon, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:24:28 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5ed1b67cf7fa132192e64cf33094e8428d8502ee52fb459ecc1c94ff6bbdf3d5-merged.mount: Deactivated successfully.
Jan 30 23:24:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:28 np0005603435 podman[115792]: 2026-01-31 04:24:28.428349195 +0000 UTC m=+0.803798913 container remove 852c98c84528b350aec32bc3b46167941ea70ee5faa8a505179d22ab935a4c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shannon, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:24:28 np0005603435 systemd[1]: libpod-conmon-852c98c84528b350aec32bc3b46167941ea70ee5faa8a505179d22ab935a4c01.scope: Deactivated successfully.
Jan 30 23:24:28 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 30 23:24:28 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 30 23:24:28 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 30 23:24:28 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 30 23:24:28 np0005603435 podman[115906]: 2026-01-31 04:24:28.976264363 +0000 UTC m=+0.058226469 container create 6751de5e535e3252aef71d401bc1fa698d13b81fdb79331867ae5996cdaad8e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:24:29 np0005603435 systemd[1]: Started libpod-conmon-6751de5e535e3252aef71d401bc1fa698d13b81fdb79331867ae5996cdaad8e9.scope.
Jan 30 23:24:29 np0005603435 podman[115906]: 2026-01-31 04:24:28.951957243 +0000 UTC m=+0.033919359 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:24:29 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:24:29 np0005603435 podman[115906]: 2026-01-31 04:24:29.079032006 +0000 UTC m=+0.160994172 container init 6751de5e535e3252aef71d401bc1fa698d13b81fdb79331867ae5996cdaad8e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ritchie, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:24:29 np0005603435 podman[115906]: 2026-01-31 04:24:29.087527034 +0000 UTC m=+0.169489140 container start 6751de5e535e3252aef71d401bc1fa698d13b81fdb79331867ae5996cdaad8e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ritchie, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:24:29 np0005603435 podman[115906]: 2026-01-31 04:24:29.091951827 +0000 UTC m=+0.173913923 container attach 6751de5e535e3252aef71d401bc1fa698d13b81fdb79331867ae5996cdaad8e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ritchie, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 30 23:24:29 np0005603435 xenodochial_ritchie[115922]: 167 167
Jan 30 23:24:29 np0005603435 systemd[1]: libpod-6751de5e535e3252aef71d401bc1fa698d13b81fdb79331867ae5996cdaad8e9.scope: Deactivated successfully.
Jan 30 23:24:29 np0005603435 podman[115906]: 2026-01-31 04:24:29.095041034 +0000 UTC m=+0.177003170 container died 6751de5e535e3252aef71d401bc1fa698d13b81fdb79331867ae5996cdaad8e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ritchie, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:24:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:24:29 np0005603435 systemd[1]: var-lib-containers-storage-overlay-a63f5d2b232e5e7aeeaa2201f68006fdab5a8d6a2f37677dcec2c742bc721492-merged.mount: Deactivated successfully.
Jan 30 23:24:29 np0005603435 podman[115906]: 2026-01-31 04:24:29.145426202 +0000 UTC m=+0.227388308 container remove 6751de5e535e3252aef71d401bc1fa698d13b81fdb79331867ae5996cdaad8e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 30 23:24:29 np0005603435 systemd[1]: libpod-conmon-6751de5e535e3252aef71d401bc1fa698d13b81fdb79331867ae5996cdaad8e9.scope: Deactivated successfully.
Jan 30 23:24:29 np0005603435 podman[115944]: 2026-01-31 04:24:29.331966998 +0000 UTC m=+0.058786385 container create 2e1e3cf8821c8ac8473478c2d58e84be7e7590f02e393f11c5dfa1831af28c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_leakey, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 30 23:24:29 np0005603435 systemd[1]: Started libpod-conmon-2e1e3cf8821c8ac8473478c2d58e84be7e7590f02e393f11c5dfa1831af28c12.scope.
Jan 30 23:24:29 np0005603435 podman[115944]: 2026-01-31 04:24:29.307075182 +0000 UTC m=+0.033894619 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:24:29 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:24:29 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ee4fb0c034f71be3ae82c8916f414442568ac9749e93c88caa1a005be4b23f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:24:29 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ee4fb0c034f71be3ae82c8916f414442568ac9749e93c88caa1a005be4b23f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:24:29 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ee4fb0c034f71be3ae82c8916f414442568ac9749e93c88caa1a005be4b23f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:24:29 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ee4fb0c034f71be3ae82c8916f414442568ac9749e93c88caa1a005be4b23f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:24:29 np0005603435 podman[115944]: 2026-01-31 04:24:29.446823979 +0000 UTC m=+0.173643426 container init 2e1e3cf8821c8ac8473478c2d58e84be7e7590f02e393f11c5dfa1831af28c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_leakey, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 30 23:24:29 np0005603435 podman[115944]: 2026-01-31 04:24:29.461586462 +0000 UTC m=+0.188405859 container start 2e1e3cf8821c8ac8473478c2d58e84be7e7590f02e393f11c5dfa1831af28c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_leakey, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:24:29 np0005603435 podman[115944]: 2026-01-31 04:24:29.46760804 +0000 UTC m=+0.194427487 container attach 2e1e3cf8821c8ac8473478c2d58e84be7e7590f02e393f11c5dfa1831af28c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_leakey, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 30 23:24:29 np0005603435 great_leakey[115961]: {
Jan 30 23:24:29 np0005603435 great_leakey[115961]:    "0": [
Jan 30 23:24:29 np0005603435 great_leakey[115961]:        {
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "devices": [
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "/dev/loop3"
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            ],
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "lv_name": "ceph_lv0",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "lv_size": "21470642176",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "name": "ceph_lv0",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "tags": {
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.cluster_name": "ceph",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.crush_device_class": "",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.encrypted": "0",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.objectstore": "bluestore",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.osd_id": "0",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.type": "block",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.vdo": "0",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.with_tpm": "0"
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            },
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "type": "block",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "vg_name": "ceph_vg0"
Jan 30 23:24:29 np0005603435 great_leakey[115961]:        }
Jan 30 23:24:29 np0005603435 great_leakey[115961]:    ],
Jan 30 23:24:29 np0005603435 great_leakey[115961]:    "1": [
Jan 30 23:24:29 np0005603435 great_leakey[115961]:        {
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "devices": [
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "/dev/loop4"
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            ],
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "lv_name": "ceph_lv1",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "lv_size": "21470642176",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "name": "ceph_lv1",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "tags": {
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.cluster_name": "ceph",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.crush_device_class": "",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.encrypted": "0",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.objectstore": "bluestore",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.osd_id": "1",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.type": "block",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.vdo": "0",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.with_tpm": "0"
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            },
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "type": "block",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "vg_name": "ceph_vg1"
Jan 30 23:24:29 np0005603435 great_leakey[115961]:        }
Jan 30 23:24:29 np0005603435 great_leakey[115961]:    ],
Jan 30 23:24:29 np0005603435 great_leakey[115961]:    "2": [
Jan 30 23:24:29 np0005603435 great_leakey[115961]:        {
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "devices": [
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "/dev/loop5"
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            ],
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "lv_name": "ceph_lv2",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "lv_size": "21470642176",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "name": "ceph_lv2",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "tags": {
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.cluster_name": "ceph",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.crush_device_class": "",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.encrypted": "0",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.objectstore": "bluestore",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.osd_id": "2",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.type": "block",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.vdo": "0",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:                "ceph.with_tpm": "0"
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            },
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "type": "block",
Jan 30 23:24:29 np0005603435 great_leakey[115961]:            "vg_name": "ceph_vg2"
Jan 30 23:24:29 np0005603435 great_leakey[115961]:        }
Jan 30 23:24:29 np0005603435 great_leakey[115961]:    ]
Jan 30 23:24:29 np0005603435 great_leakey[115961]: }
Jan 30 23:24:29 np0005603435 systemd[1]: libpod-2e1e3cf8821c8ac8473478c2d58e84be7e7590f02e393f11c5dfa1831af28c12.scope: Deactivated successfully.
Jan 30 23:24:29 np0005603435 podman[115944]: 2026-01-31 04:24:29.810559208 +0000 UTC m=+0.537378605 container died 2e1e3cf8821c8ac8473478c2d58e84be7e7590f02e393f11c5dfa1831af28c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_leakey, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 30 23:24:29 np0005603435 systemd[1]: var-lib-containers-storage-overlay-a8ee4fb0c034f71be3ae82c8916f414442568ac9749e93c88caa1a005be4b23f-merged.mount: Deactivated successfully.
Jan 30 23:24:29 np0005603435 podman[115944]: 2026-01-31 04:24:29.856250166 +0000 UTC m=+0.583069533 container remove 2e1e3cf8821c8ac8473478c2d58e84be7e7590f02e393f11c5dfa1831af28c12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_leakey, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 30 23:24:29 np0005603435 systemd[1]: libpod-conmon-2e1e3cf8821c8ac8473478c2d58e84be7e7590f02e393f11c5dfa1831af28c12.scope: Deactivated successfully.
Jan 30 23:24:29 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 30 23:24:29 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 30 23:24:30 np0005603435 systemd-logind[816]: New session 40 of user zuul.
Jan 30 23:24:30 np0005603435 systemd[1]: Started Session 40 of User zuul.
Jan 30 23:24:30 np0005603435 podman[116047]: 2026-01-31 04:24:30.396290274 +0000 UTC m=+0.054656339 container create 307653a816af4547f6cfde9f12258bb83810797096429f93f88668b923df0e5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 30 23:24:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:30 np0005603435 systemd[1]: Started libpod-conmon-307653a816af4547f6cfde9f12258bb83810797096429f93f88668b923df0e5f.scope.
Jan 30 23:24:30 np0005603435 podman[116047]: 2026-01-31 04:24:30.377945871 +0000 UTC m=+0.036311926 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:24:30 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:24:30 np0005603435 podman[116047]: 2026-01-31 04:24:30.491952108 +0000 UTC m=+0.150318163 container init 307653a816af4547f6cfde9f12258bb83810797096429f93f88668b923df0e5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_rosalind, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:24:30 np0005603435 podman[116047]: 2026-01-31 04:24:30.498883512 +0000 UTC m=+0.157249547 container start 307653a816af4547f6cfde9f12258bb83810797096429f93f88668b923df0e5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:24:30 np0005603435 podman[116047]: 2026-01-31 04:24:30.50203696 +0000 UTC m=+0.160403025 container attach 307653a816af4547f6cfde9f12258bb83810797096429f93f88668b923df0e5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 30 23:24:30 np0005603435 distracted_rosalind[116082]: 167 167
Jan 30 23:24:30 np0005603435 systemd[1]: libpod-307653a816af4547f6cfde9f12258bb83810797096429f93f88668b923df0e5f.scope: Deactivated successfully.
Jan 30 23:24:30 np0005603435 podman[116047]: 2026-01-31 04:24:30.505720153 +0000 UTC m=+0.164086228 container died 307653a816af4547f6cfde9f12258bb83810797096429f93f88668b923df0e5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_rosalind, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 30 23:24:30 np0005603435 systemd[1]: var-lib-containers-storage-overlay-7c20a606edd418130a1ebd39f5f5b065e4003e76c28cdc564df0199c938d2bdd-merged.mount: Deactivated successfully.
Jan 30 23:24:30 np0005603435 podman[116047]: 2026-01-31 04:24:30.554039974 +0000 UTC m=+0.212406039 container remove 307653a816af4547f6cfde9f12258bb83810797096429f93f88668b923df0e5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:24:30 np0005603435 systemd[1]: libpod-conmon-307653a816af4547f6cfde9f12258bb83810797096429f93f88668b923df0e5f.scope: Deactivated successfully.
Jan 30 23:24:30 np0005603435 podman[116140]: 2026-01-31 04:24:30.689712107 +0000 UTC m=+0.048275140 container create d04f1d6c48f974cedcde59ec803f3fc1b91a22387c25f01c31e7749644b6b0e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 30 23:24:30 np0005603435 systemd[1]: Started libpod-conmon-d04f1d6c48f974cedcde59ec803f3fc1b91a22387c25f01c31e7749644b6b0e1.scope.
Jan 30 23:24:30 np0005603435 podman[116140]: 2026-01-31 04:24:30.666778506 +0000 UTC m=+0.025341569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:24:30 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:24:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f0347eb0511b96e9be9a74c1a0126b7281376bc37b79c6c9750f3f60c772a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:24:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f0347eb0511b96e9be9a74c1a0126b7281376bc37b79c6c9750f3f60c772a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:24:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f0347eb0511b96e9be9a74c1a0126b7281376bc37b79c6c9750f3f60c772a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:24:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f0347eb0511b96e9be9a74c1a0126b7281376bc37b79c6c9750f3f60c772a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:24:30 np0005603435 podman[116140]: 2026-01-31 04:24:30.804009333 +0000 UTC m=+0.162572446 container init d04f1d6c48f974cedcde59ec803f3fc1b91a22387c25f01c31e7749644b6b0e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:24:30 np0005603435 podman[116140]: 2026-01-31 04:24:30.8114185 +0000 UTC m=+0.169981563 container start d04f1d6c48f974cedcde59ec803f3fc1b91a22387c25f01c31e7749644b6b0e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 30 23:24:30 np0005603435 podman[116140]: 2026-01-31 04:24:30.816644936 +0000 UTC m=+0.175208039 container attach d04f1d6c48f974cedcde59ec803f3fc1b91a22387c25f01c31e7749644b6b0e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:24:31 np0005603435 python3.9[116277]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:24:31 np0005603435 lvm[116338]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:24:31 np0005603435 lvm[116338]: VG ceph_vg1 finished
Jan 30 23:24:31 np0005603435 lvm[116337]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:24:31 np0005603435 lvm[116337]: VG ceph_vg0 finished
Jan 30 23:24:31 np0005603435 lvm[116340]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:24:31 np0005603435 lvm[116340]: VG ceph_vg2 finished
Jan 30 23:24:31 np0005603435 busy_mccarthy[116157]: {}
Jan 30 23:24:31 np0005603435 systemd[1]: libpod-d04f1d6c48f974cedcde59ec803f3fc1b91a22387c25f01c31e7749644b6b0e1.scope: Deactivated successfully.
Jan 30 23:24:31 np0005603435 systemd[1]: libpod-d04f1d6c48f974cedcde59ec803f3fc1b91a22387c25f01c31e7749644b6b0e1.scope: Consumed 1.239s CPU time.
Jan 30 23:24:31 np0005603435 podman[116140]: 2026-01-31 04:24:31.679633994 +0000 UTC m=+1.038197057 container died d04f1d6c48f974cedcde59ec803f3fc1b91a22387c25f01c31e7749644b6b0e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 30 23:24:31 np0005603435 systemd[1]: var-lib-containers-storage-overlay-98f0347eb0511b96e9be9a74c1a0126b7281376bc37b79c6c9750f3f60c772a8-merged.mount: Deactivated successfully.
Jan 30 23:24:31 np0005603435 podman[116140]: 2026-01-31 04:24:31.743098008 +0000 UTC m=+1.101661071 container remove d04f1d6c48f974cedcde59ec803f3fc1b91a22387c25f01c31e7749644b6b0e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:24:31 np0005603435 systemd[1]: libpod-conmon-d04f1d6c48f974cedcde59ec803f3fc1b91a22387c25f01c31e7749644b6b0e1.scope: Deactivated successfully.
Jan 30 23:24:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:24:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:24:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:24:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:24:31 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 30 23:24:31 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 30 23:24:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:32 np0005603435 python3.9[116533]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:32 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:24:32 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:24:33 np0005603435 python3.9[116708]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:24:33 np0005603435 python3.9[116786]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.m8753gez recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:24:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:34 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 30 23:24:34 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 30 23:24:34 np0005603435 python3.9[116938]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:24:35 np0005603435 python3.9[117016]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.f958d8nq recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:36 np0005603435 python3.9[117168]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:24:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:36 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 30 23:24:36 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 30 23:24:36 np0005603435 python3.9[117320]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:24:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:24:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:24:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:24:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:24:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:24:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:24:36 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 30 23:24:36 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 30 23:24:37 np0005603435 python3.9[117398]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:24:37 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 30 23:24:37 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 30 23:24:37 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 30 23:24:37 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 30 23:24:38 np0005603435 python3.9[117552]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:24:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:38 np0005603435 python3.9[117630]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:24:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:24:39 np0005603435 python3.9[117782]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:39 np0005603435 python3.9[117934]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:24:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:40 np0005603435 python3.9[118013]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:41 np0005603435 python3.9[118165]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:24:41 np0005603435 python3.9[118243]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:42 np0005603435 python3.9[118395]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:24:42 np0005603435 systemd[1]: Reloading.
Jan 30 23:24:42 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:24:42 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:24:43 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 30 23:24:43 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 30 23:24:43 np0005603435 python3.9[118584]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:24:44 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 30 23:24:44 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 30 23:24:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:24:44 np0005603435 python3.9[118662]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:44 np0005603435 python3.9[118814]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:24:45 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 30 23:24:45 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 30 23:24:45 np0005603435 python3.9[118892]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:45 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 30 23:24:45 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 30 23:24:46 np0005603435 python3.9[119044]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:24:46 np0005603435 systemd[1]: Reloading.
Jan 30 23:24:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:46 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:24:46 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:24:46 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 30 23:24:46 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 30 23:24:46 np0005603435 systemd[1]: Starting Create netns directory...
Jan 30 23:24:46 np0005603435 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 30 23:24:46 np0005603435 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 30 23:24:46 np0005603435 systemd[1]: Finished Create netns directory.
Jan 30 23:24:46 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 30 23:24:46 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 30 23:24:47 np0005603435 python3.9[119235]: ansible-ansible.builtin.service_facts Invoked
Jan 30 23:24:47 np0005603435 network[119252]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 30 23:24:47 np0005603435 network[119253]: 'network-scripts' will be removed from distribution in near future.
Jan 30 23:24:47 np0005603435 network[119254]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 30 23:24:47 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 30 23:24:47 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 30 23:24:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:48 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 30 23:24:48 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 30 23:24:48 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 30 23:24:48 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 30 23:24:48 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 30 23:24:48 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 30 23:24:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:24:49 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 30 23:24:49 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 30 23:24:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:50 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 30 23:24:50 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 30 23:24:50 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 30 23:24:50 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 30 23:24:51 np0005603435 python3.9[119516]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:24:51 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 30 23:24:51 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 30 23:24:51 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 30 23:24:51 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 30 23:24:51 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 30 23:24:51 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 30 23:24:52 np0005603435 python3.9[119594]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:52 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 30 23:24:52 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 30 23:24:53 np0005603435 python3.9[119746]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:53 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 30 23:24:53 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 30 23:24:53 np0005603435 ceph-osd[87920]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 30 23:24:53 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 30 23:24:53 np0005603435 python3.9[119898]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:24:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:24:54 np0005603435 python3.9[119976]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:54 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 30 23:24:54 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 30 23:24:55 np0005603435 python3.9[120128]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 30 23:24:55 np0005603435 systemd[1]: Starting Time & Date Service...
Jan 30 23:24:55 np0005603435 systemd[1]: Started Time & Date Service.
Jan 30 23:24:56 np0005603435 python3.9[120284]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:56 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 30 23:24:56 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 30 23:24:57 np0005603435 python3.9[120436]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:24:57 np0005603435 python3.9[120514]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:57 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 30 23:24:57 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 30 23:24:58 np0005603435 python3.9[120666]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:24:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:24:58 np0005603435 python3.9[120744]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.knrr86lr recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:24:58 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 30 23:24:58 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 30 23:24:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:24:59 np0005603435 python3.9[120896]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:24:59 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 30 23:24:59 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 30 23:24:59 np0005603435 python3.9[120974]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:25:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:00 np0005603435 python3.9[121126]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:25:01 np0005603435 python3[121279]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 30 23:25:01 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 30 23:25:01 np0005603435 ceph-osd[85822]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 30 23:25:02 np0005603435 python3.9[121431]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:25:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:02 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 30 23:25:02 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 30 23:25:02 np0005603435 python3.9[121509]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:25:03 np0005603435 python3.9[121661]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:25:03 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 30 23:25:03 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 30 23:25:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:25:04 np0005603435 python3.9[121786]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833503.0577044-308-196794424086204/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:25:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:04 np0005603435 python3.9[121938]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:25:05 np0005603435 python3.9[122016]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:25:06 np0005603435 python3.9[122168]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:25:06
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'vms']
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:06 np0005603435 python3.9[122246]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:25:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:25:07 np0005603435 python3.9[122398]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:25:07 np0005603435 python3.9[122476]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:25:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:08 np0005603435 python3.9[122628]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:25:08 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 30 23:25:08 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 30 23:25:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:25:09 np0005603435 python3.9[122783]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:25:09 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 30 23:25:09 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 30 23:25:10 np0005603435 python3.9[122935]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:25:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:10 np0005603435 python3.9[123087]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:25:11 np0005603435 python3.9[123239]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 30 23:25:11 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 30 23:25:11 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 30 23:25:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:12 np0005603435 python3.9[123391]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 30 23:25:12 np0005603435 systemd[1]: session-40.scope: Deactivated successfully.
Jan 30 23:25:12 np0005603435 systemd[1]: session-40.scope: Consumed 31.184s CPU time.
Jan 30 23:25:12 np0005603435 systemd-logind[816]: Session 40 logged out. Waiting for processes to exit.
Jan 30 23:25:12 np0005603435 systemd-logind[816]: Removed session 40.
Jan 30 23:25:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:25:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:14 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 30 23:25:14 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:25:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:25:17 np0005603435 systemd-logind[816]: New session 41 of user zuul.
Jan 30 23:25:17 np0005603435 systemd[1]: Started Session 41 of User zuul.
Jan 30 23:25:17 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 30 23:25:17 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 30 23:25:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:18 np0005603435 python3.9[123572]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 30 23:25:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:25:19 np0005603435 python3.9[123724]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:25:20 np0005603435 python3.9[123878]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 30 23:25:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:20 np0005603435 python3.9[124030]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.8062cn_g follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:25:20 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 30 23:25:20 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 30 23:25:21 np0005603435 python3.9[124155]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.8062cn_g mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769833520.387742-44-273582129896301/.source.8062cn_g _original_basename=.0z4dcd52 follow=False checksum=9e5e4c33d94c93f3207c1ab068299dbd70d30ddf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:25:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:22 np0005603435 python3.9[124307]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:25:22 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 30 23:25:22 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 30 23:25:23 np0005603435 python3.9[124460]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGQxtpB4QkMB44gxnODjQJf9hqMcT11PupnYsKJqkL8KIwvW19mR2t3GusmAh3ls8s+Uvrf90eL7UCOkPryyfFZVoca6HEM751NZGlOPXAbYwd9N7xdXlNQcNKL6/NhkELWoQEY6FbeJtIGFuztlL8BBujH35ykR+nU2f8LJ6n4H9iFBiUKmR3cL27BiShT4M5XoWXWk6WQUKtfLJyHDlO22e3wM2s46EdwlHCjO9G31+ZC5Syyo+J9j5kKEF/Ni6bf85LP9LNXQA/fF0L4pParenf2GP5UbqidnkBelmmZTKPHmP/7gqCiVeDUd9TSxDHaRzCBlpZMVF5Q+Ymd7yJm0762FpwIxJmXKLNn6d/feS78rtrJ6ddNsUiNL81zuzG+vG+2rXKBk1iBhgqH3emnKhu6K3zNjHI37M45ZRECiP3d+MScncE7gVh4yH/DuENZqBQbnyg5659pVzK7cmo2PzlorptrAUOTerzpH/1GouAvw2KU7VUwYZ1eM17k+U=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMpBzDyT3QqEMyHu/pbcKb4cYXF9Jqh9RqwzOHUt0qjr#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNGxfOuKWJoWAkU0LFOcNkfeFOZ36yy4OzL9FzbJ3Q0W0SWhgpdh4a7FHRJ8jpW4ccTddKCeMEgfFAyomIrJU4Q=#012 create=True mode=0644 path=/tmp/ansible.8062cn_g state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:25:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:25:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:24 np0005603435 python3.9[124613]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.8062cn_g' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:25:24 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 30 23:25:24 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 30 23:25:25 np0005603435 python3.9[124767]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.8062cn_g state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:25:25 np0005603435 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 30 23:25:25 np0005603435 systemd[1]: session-41.scope: Deactivated successfully.
Jan 30 23:25:25 np0005603435 systemd[1]: session-41.scope: Consumed 5.046s CPU time.
Jan 30 23:25:25 np0005603435 systemd-logind[816]: Session 41 logged out. Waiting for processes to exit.
Jan 30 23:25:25 np0005603435 systemd-logind[816]: Removed session 41.
Jan 30 23:25:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:26 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 30 23:25:26 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 30 23:25:27 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 30 23:25:27 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 30 23:25:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:25:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:30 np0005603435 systemd-logind[816]: New session 42 of user zuul.
Jan 30 23:25:30 np0005603435 systemd[1]: Started Session 42 of User zuul.
Jan 30 23:25:31 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 30 23:25:31 np0005603435 python3.9[124948]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:25:31 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 30 23:25:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:25:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:25:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:25:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:25:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:25:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:25:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:25:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:25:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:25:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:25:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:25:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:25:33 np0005603435 podman[125247]: 2026-01-31 04:25:33.041283538 +0000 UTC m=+0.038484904 container create 26a7341e2fa337c002a7fa3b7b52451873c305e945297acbad589bf55802d00e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:25:33 np0005603435 systemd[1]: Started libpod-conmon-26a7341e2fa337c002a7fa3b7b52451873c305e945297acbad589bf55802d00e.scope.
Jan 30 23:25:33 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:25:33 np0005603435 podman[125247]: 2026-01-31 04:25:33.113658722 +0000 UTC m=+0.110860128 container init 26a7341e2fa337c002a7fa3b7b52451873c305e945297acbad589bf55802d00e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_kalam, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 30 23:25:33 np0005603435 podman[125247]: 2026-01-31 04:25:33.022827756 +0000 UTC m=+0.020029172 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:25:33 np0005603435 podman[125247]: 2026-01-31 04:25:33.118783748 +0000 UTC m=+0.115985134 container start 26a7341e2fa337c002a7fa3b7b52451873c305e945297acbad589bf55802d00e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:25:33 np0005603435 podman[125247]: 2026-01-31 04:25:33.121682019 +0000 UTC m=+0.118883405 container attach 26a7341e2fa337c002a7fa3b7b52451873c305e945297acbad589bf55802d00e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:25:33 np0005603435 mystifying_kalam[125264]: 167 167
Jan 30 23:25:33 np0005603435 systemd[1]: libpod-26a7341e2fa337c002a7fa3b7b52451873c305e945297acbad589bf55802d00e.scope: Deactivated successfully.
Jan 30 23:25:33 np0005603435 conmon[125264]: conmon 26a7341e2fa337c002a7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26a7341e2fa337c002a7fa3b7b52451873c305e945297acbad589bf55802d00e.scope/container/memory.events
Jan 30 23:25:33 np0005603435 podman[125247]: 2026-01-31 04:25:33.123516634 +0000 UTC m=+0.120718000 container died 26a7341e2fa337c002a7fa3b7b52451873c305e945297acbad589bf55802d00e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 30 23:25:33 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b8bcb46476ae67cf4281dcc4654f3e0f384c72db292a43768c02b871dcabd674-merged.mount: Deactivated successfully.
Jan 30 23:25:33 np0005603435 python3.9[125234]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 30 23:25:33 np0005603435 podman[125247]: 2026-01-31 04:25:33.152760031 +0000 UTC m=+0.149961397 container remove 26a7341e2fa337c002a7fa3b7b52451873c305e945297acbad589bf55802d00e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_kalam, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:25:33 np0005603435 systemd[1]: libpod-conmon-26a7341e2fa337c002a7fa3b7b52451873c305e945297acbad589bf55802d00e.scope: Deactivated successfully.
Jan 30 23:25:33 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:25:33 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:25:33 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:25:33 np0005603435 podman[125290]: 2026-01-31 04:25:33.26452783 +0000 UTC m=+0.038734390 container create 2c8424e4fcf4fe6e3f1d963021e1c36d853099eb10ead94d3b383769ebaef624 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 30 23:25:33 np0005603435 systemd[1]: Started libpod-conmon-2c8424e4fcf4fe6e3f1d963021e1c36d853099eb10ead94d3b383769ebaef624.scope.
Jan 30 23:25:33 np0005603435 podman[125290]: 2026-01-31 04:25:33.2477705 +0000 UTC m=+0.021977090 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:25:33 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:25:33 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c795a7963b21a20b2648b434bbd49e8c44b95a9dfe6d9f240cf5664797f0cdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:25:33 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c795a7963b21a20b2648b434bbd49e8c44b95a9dfe6d9f240cf5664797f0cdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:25:33 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c795a7963b21a20b2648b434bbd49e8c44b95a9dfe6d9f240cf5664797f0cdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:25:33 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c795a7963b21a20b2648b434bbd49e8c44b95a9dfe6d9f240cf5664797f0cdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:25:33 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c795a7963b21a20b2648b434bbd49e8c44b95a9dfe6d9f240cf5664797f0cdc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:25:33 np0005603435 podman[125290]: 2026-01-31 04:25:33.370609491 +0000 UTC m=+0.144816071 container init 2c8424e4fcf4fe6e3f1d963021e1c36d853099eb10ead94d3b383769ebaef624 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 30 23:25:33 np0005603435 podman[125290]: 2026-01-31 04:25:33.382450871 +0000 UTC m=+0.156657451 container start 2c8424e4fcf4fe6e3f1d963021e1c36d853099eb10ead94d3b383769ebaef624 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:25:33 np0005603435 podman[125290]: 2026-01-31 04:25:33.385837254 +0000 UTC m=+0.160043824 container attach 2c8424e4fcf4fe6e3f1d963021e1c36d853099eb10ead94d3b383769ebaef624 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:25:33 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 30 23:25:33 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 30 23:25:33 np0005603435 recursing_cray[125330]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:25:33 np0005603435 recursing_cray[125330]: --> All data devices are unavailable
Jan 30 23:25:33 np0005603435 systemd[1]: libpod-2c8424e4fcf4fe6e3f1d963021e1c36d853099eb10ead94d3b383769ebaef624.scope: Deactivated successfully.
Jan 30 23:25:33 np0005603435 podman[125290]: 2026-01-31 04:25:33.916365058 +0000 UTC m=+0.690571658 container died 2c8424e4fcf4fe6e3f1d963021e1c36d853099eb10ead94d3b383769ebaef624 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:25:33 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5c795a7963b21a20b2648b434bbd49e8c44b95a9dfe6d9f240cf5664797f0cdc-merged.mount: Deactivated successfully.
Jan 30 23:25:33 np0005603435 podman[125290]: 2026-01-31 04:25:33.972967806 +0000 UTC m=+0.747174396 container remove 2c8424e4fcf4fe6e3f1d963021e1c36d853099eb10ead94d3b383769ebaef624 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 30 23:25:33 np0005603435 systemd[1]: libpod-conmon-2c8424e4fcf4fe6e3f1d963021e1c36d853099eb10ead94d3b383769ebaef624.scope: Deactivated successfully.
Jan 30 23:25:34 np0005603435 python3.9[125469]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:25:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:25:34 np0005603435 podman[125630]: 2026-01-31 04:25:34.45288786 +0000 UTC m=+0.066390299 container create 7e535370add88618cc1255566f103d1bc6eb272af6d3273e6f89bdaf330a994e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_darwin, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:25:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:34 np0005603435 systemd[1]: Started libpod-conmon-7e535370add88618cc1255566f103d1bc6eb272af6d3273e6f89bdaf330a994e.scope.
Jan 30 23:25:34 np0005603435 podman[125630]: 2026-01-31 04:25:34.423341775 +0000 UTC m=+0.036844284 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:25:34 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:25:34 np0005603435 podman[125630]: 2026-01-31 04:25:34.532717456 +0000 UTC m=+0.146219905 container init 7e535370add88618cc1255566f103d1bc6eb272af6d3273e6f89bdaf330a994e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_darwin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 30 23:25:34 np0005603435 podman[125630]: 2026-01-31 04:25:34.541640985 +0000 UTC m=+0.155143404 container start 7e535370add88618cc1255566f103d1bc6eb272af6d3273e6f89bdaf330a994e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_darwin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 30 23:25:34 np0005603435 podman[125630]: 2026-01-31 04:25:34.544816663 +0000 UTC m=+0.158319152 container attach 7e535370add88618cc1255566f103d1bc6eb272af6d3273e6f89bdaf330a994e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_darwin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:25:34 np0005603435 kind_darwin[125669]: 167 167
Jan 30 23:25:34 np0005603435 systemd[1]: libpod-7e535370add88618cc1255566f103d1bc6eb272af6d3273e6f89bdaf330a994e.scope: Deactivated successfully.
Jan 30 23:25:34 np0005603435 podman[125630]: 2026-01-31 04:25:34.548528074 +0000 UTC m=+0.162030533 container died 7e535370add88618cc1255566f103d1bc6eb272af6d3273e6f89bdaf330a994e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_darwin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 30 23:25:34 np0005603435 systemd[1]: var-lib-containers-storage-overlay-67993384162a4e97d894c5b3449666d0adbe84a338817b555194a7e60604017c-merged.mount: Deactivated successfully.
Jan 30 23:25:34 np0005603435 podman[125630]: 2026-01-31 04:25:34.588863373 +0000 UTC m=+0.202365802 container remove 7e535370add88618cc1255566f103d1bc6eb272af6d3273e6f89bdaf330a994e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_darwin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:25:34 np0005603435 systemd[1]: libpod-conmon-7e535370add88618cc1255566f103d1bc6eb272af6d3273e6f89bdaf330a994e.scope: Deactivated successfully.
Jan 30 23:25:34 np0005603435 podman[125746]: 2026-01-31 04:25:34.755719663 +0000 UTC m=+0.063049977 container create 5467a59f2697450b9ad92f8149af3ba7a75de4256dfd5a4bf758a3ea24ae95b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:25:34 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 30 23:25:34 np0005603435 systemd[1]: Started libpod-conmon-5467a59f2697450b9ad92f8149af3ba7a75de4256dfd5a4bf758a3ea24ae95b6.scope.
Jan 30 23:25:34 np0005603435 podman[125746]: 2026-01-31 04:25:34.728748092 +0000 UTC m=+0.036078486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:25:34 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:25:34 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e028ef1d4e6fc27a01476112d50b7cf96715d1f21445a3620af78ff6167096/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:25:34 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e028ef1d4e6fc27a01476112d50b7cf96715d1f21445a3620af78ff6167096/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:25:34 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e028ef1d4e6fc27a01476112d50b7cf96715d1f21445a3620af78ff6167096/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:25:34 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e028ef1d4e6fc27a01476112d50b7cf96715d1f21445a3620af78ff6167096/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:25:34 np0005603435 python3.9[125740]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:25:34 np0005603435 podman[125746]: 2026-01-31 04:25:34.84861284 +0000 UTC m=+0.155943154 container init 5467a59f2697450b9ad92f8149af3ba7a75de4256dfd5a4bf758a3ea24ae95b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mahavira, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 30 23:25:34 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 30 23:25:34 np0005603435 podman[125746]: 2026-01-31 04:25:34.857565649 +0000 UTC m=+0.164895973 container start 5467a59f2697450b9ad92f8149af3ba7a75de4256dfd5a4bf758a3ea24ae95b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:25:34 np0005603435 podman[125746]: 2026-01-31 04:25:34.861581888 +0000 UTC m=+0.168912192 container attach 5467a59f2697450b9ad92f8149af3ba7a75de4256dfd5a4bf758a3ea24ae95b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mahavira, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]: {
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:    "0": [
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:        {
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "devices": [
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "/dev/loop3"
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            ],
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "lv_name": "ceph_lv0",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "lv_size": "21470642176",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "name": "ceph_lv0",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "tags": {
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.cluster_name": "ceph",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.crush_device_class": "",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.encrypted": "0",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.objectstore": "bluestore",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.osd_id": "0",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.type": "block",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.vdo": "0",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.with_tpm": "0"
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            },
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "type": "block",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "vg_name": "ceph_vg0"
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:        }
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:    ],
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:    "1": [
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:        {
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "devices": [
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "/dev/loop4"
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            ],
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "lv_name": "ceph_lv1",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "lv_size": "21470642176",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "name": "ceph_lv1",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "tags": {
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.cluster_name": "ceph",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.crush_device_class": "",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.encrypted": "0",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.objectstore": "bluestore",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.osd_id": "1",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.type": "block",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.vdo": "0",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.with_tpm": "0"
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            },
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "type": "block",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "vg_name": "ceph_vg1"
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:        }
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:    ],
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:    "2": [
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:        {
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "devices": [
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "/dev/loop5"
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            ],
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "lv_name": "ceph_lv2",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "lv_size": "21470642176",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "name": "ceph_lv2",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "tags": {
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.cluster_name": "ceph",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.crush_device_class": "",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.encrypted": "0",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.objectstore": "bluestore",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.osd_id": "2",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.type": "block",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.vdo": "0",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:                "ceph.with_tpm": "0"
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            },
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "type": "block",
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:            "vg_name": "ceph_vg2"
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:        }
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]:    ]
Jan 30 23:25:35 np0005603435 busy_mahavira[125763]: }
Jan 30 23:25:35 np0005603435 systemd[1]: libpod-5467a59f2697450b9ad92f8149af3ba7a75de4256dfd5a4bf758a3ea24ae95b6.scope: Deactivated successfully.
Jan 30 23:25:35 np0005603435 conmon[125763]: conmon 5467a59f2697450b9ad9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5467a59f2697450b9ad92f8149af3ba7a75de4256dfd5a4bf758a3ea24ae95b6.scope/container/memory.events
Jan 30 23:25:35 np0005603435 podman[125746]: 2026-01-31 04:25:35.19986761 +0000 UTC m=+0.507197934 container died 5467a59f2697450b9ad92f8149af3ba7a75de4256dfd5a4bf758a3ea24ae95b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mahavira, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:25:35 np0005603435 systemd[1]: var-lib-containers-storage-overlay-24e028ef1d4e6fc27a01476112d50b7cf96715d1f21445a3620af78ff6167096-merged.mount: Deactivated successfully.
Jan 30 23:25:35 np0005603435 podman[125746]: 2026-01-31 04:25:35.256433646 +0000 UTC m=+0.563763950 container remove 5467a59f2697450b9ad92f8149af3ba7a75de4256dfd5a4bf758a3ea24ae95b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mahavira, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:25:35 np0005603435 systemd[1]: libpod-conmon-5467a59f2697450b9ad92f8149af3ba7a75de4256dfd5a4bf758a3ea24ae95b6.scope: Deactivated successfully.
Jan 30 23:25:35 np0005603435 python3.9[125980]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:25:35 np0005603435 podman[126025]: 2026-01-31 04:25:35.752919496 +0000 UTC m=+0.042494362 container create 176db1067072d975914167e98ce62e8161370cc099929cd4ad56dd4642eb5b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_albattani, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:25:35 np0005603435 systemd[1]: Started libpod-conmon-176db1067072d975914167e98ce62e8161370cc099929cd4ad56dd4642eb5b9e.scope.
Jan 30 23:25:35 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:25:35 np0005603435 podman[126025]: 2026-01-31 04:25:35.808172601 +0000 UTC m=+0.097747517 container init 176db1067072d975914167e98ce62e8161370cc099929cd4ad56dd4642eb5b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 30 23:25:35 np0005603435 podman[126025]: 2026-01-31 04:25:35.812244381 +0000 UTC m=+0.101819257 container start 176db1067072d975914167e98ce62e8161370cc099929cd4ad56dd4642eb5b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:25:35 np0005603435 podman[126025]: 2026-01-31 04:25:35.81627539 +0000 UTC m=+0.105850296 container attach 176db1067072d975914167e98ce62e8161370cc099929cd4ad56dd4642eb5b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 30 23:25:35 np0005603435 angry_albattani[126041]: 167 167
Jan 30 23:25:35 np0005603435 systemd[1]: libpod-176db1067072d975914167e98ce62e8161370cc099929cd4ad56dd4642eb5b9e.scope: Deactivated successfully.
Jan 30 23:25:35 np0005603435 podman[126025]: 2026-01-31 04:25:35.819294224 +0000 UTC m=+0.108869150 container died 176db1067072d975914167e98ce62e8161370cc099929cd4ad56dd4642eb5b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_albattani, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:25:35 np0005603435 podman[126025]: 2026-01-31 04:25:35.735413487 +0000 UTC m=+0.024988373 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:25:35 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5c5b0d1d3b1bdcbd5b68c5d50e429334607e2bc74f871c7253b850d37bddc2ba-merged.mount: Deactivated successfully.
Jan 30 23:25:35 np0005603435 podman[126025]: 2026-01-31 04:25:35.855751647 +0000 UTC m=+0.145326543 container remove 176db1067072d975914167e98ce62e8161370cc099929cd4ad56dd4642eb5b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:25:35 np0005603435 systemd[1]: libpod-conmon-176db1067072d975914167e98ce62e8161370cc099929cd4ad56dd4642eb5b9e.scope: Deactivated successfully.
Jan 30 23:25:36 np0005603435 podman[126117]: 2026-01-31 04:25:36.028478451 +0000 UTC m=+0.058787152 container create 137cf3734454408e2297dfd28ca36bc5f204da94db35a150540355a52d64f462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wu, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 30 23:25:36 np0005603435 systemd[1]: Started libpod-conmon-137cf3734454408e2297dfd28ca36bc5f204da94db35a150540355a52d64f462.scope.
Jan 30 23:25:36 np0005603435 podman[126117]: 2026-01-31 04:25:36.004604146 +0000 UTC m=+0.034912877 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:25:36 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:25:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d718ab9e3f0aa16df84d52841c65c0d2c8a924dc41b5afea06a7e2d3b10ce1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:25:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d718ab9e3f0aa16df84d52841c65c0d2c8a924dc41b5afea06a7e2d3b10ce1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:25:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d718ab9e3f0aa16df84d52841c65c0d2c8a924dc41b5afea06a7e2d3b10ce1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:25:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d718ab9e3f0aa16df84d52841c65c0d2c8a924dc41b5afea06a7e2d3b10ce1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:25:36 np0005603435 podman[126117]: 2026-01-31 04:25:36.34459771 +0000 UTC m=+0.374906471 container init 137cf3734454408e2297dfd28ca36bc5f204da94db35a150540355a52d64f462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 30 23:25:36 np0005603435 podman[126117]: 2026-01-31 04:25:36.356745238 +0000 UTC m=+0.387053939 container start 137cf3734454408e2297dfd28ca36bc5f204da94db35a150540355a52d64f462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wu, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 30 23:25:36 np0005603435 podman[126117]: 2026-01-31 04:25:36.361001592 +0000 UTC m=+0.391310323 container attach 137cf3734454408e2297dfd28ca36bc5f204da94db35a150540355a52d64f462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:25:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:36 np0005603435 python3.9[126214]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:25:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:25:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:25:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:25:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:25:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:25:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:25:37 np0005603435 lvm[126313]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:25:37 np0005603435 lvm[126313]: VG ceph_vg1 finished
Jan 30 23:25:37 np0005603435 lvm[126312]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:25:37 np0005603435 lvm[126312]: VG ceph_vg0 finished
Jan 30 23:25:37 np0005603435 lvm[126315]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:25:37 np0005603435 lvm[126315]: VG ceph_vg2 finished
Jan 30 23:25:37 np0005603435 systemd[1]: session-42.scope: Deactivated successfully.
Jan 30 23:25:37 np0005603435 systemd[1]: session-42.scope: Consumed 3.819s CPU time.
Jan 30 23:25:37 np0005603435 systemd-logind[816]: Session 42 logged out. Waiting for processes to exit.
Jan 30 23:25:37 np0005603435 systemd-logind[816]: Removed session 42.
Jan 30 23:25:37 np0005603435 objective_wu[126134]: {}
Jan 30 23:25:37 np0005603435 systemd[1]: libpod-137cf3734454408e2297dfd28ca36bc5f204da94db35a150540355a52d64f462.scope: Deactivated successfully.
Jan 30 23:25:37 np0005603435 systemd[1]: libpod-137cf3734454408e2297dfd28ca36bc5f204da94db35a150540355a52d64f462.scope: Consumed 1.176s CPU time.
Jan 30 23:25:37 np0005603435 podman[126117]: 2026-01-31 04:25:37.166460075 +0000 UTC m=+1.196768816 container died 137cf3734454408e2297dfd28ca36bc5f204da94db35a150540355a52d64f462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:25:37 np0005603435 systemd[1]: var-lib-containers-storage-overlay-8d718ab9e3f0aa16df84d52841c65c0d2c8a924dc41b5afea06a7e2d3b10ce1f-merged.mount: Deactivated successfully.
Jan 30 23:25:37 np0005603435 podman[126117]: 2026-01-31 04:25:37.220700454 +0000 UTC m=+1.251009185 container remove 137cf3734454408e2297dfd28ca36bc5f204da94db35a150540355a52d64f462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_wu, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:25:37 np0005603435 systemd[1]: libpod-conmon-137cf3734454408e2297dfd28ca36bc5f204da94db35a150540355a52d64f462.scope: Deactivated successfully.
Jan 30 23:25:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:25:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:25:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:25:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:25:38 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:25:38 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:25:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:25:39 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 30 23:25:39 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 30 23:25:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:41 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 30 23:25:41 np0005603435 ceph-osd[86873]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 30 23:25:41 np0005603435 systemd-logind[816]: New session 43 of user zuul.
Jan 30 23:25:42 np0005603435 systemd[1]: Started Session 43 of User zuul.
Jan 30 23:25:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:43 np0005603435 python3.9[126508]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:25:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:25:44 np0005603435 python3.9[126664]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:25:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:45 np0005603435 python3.9[126748]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 30 23:25:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:47 np0005603435 python3.9[126899]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:25:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:48 np0005603435 python3.9[127050]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 30 23:25:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:25:49 np0005603435 python3.9[127200]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:25:50 np0005603435 python3.9[127350]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:25:50 np0005603435 systemd[1]: session-43.scope: Deactivated successfully.
Jan 30 23:25:50 np0005603435 systemd[1]: session-43.scope: Consumed 5.965s CPU time.
Jan 30 23:25:50 np0005603435 systemd-logind[816]: Session 43 logged out. Waiting for processes to exit.
Jan 30 23:25:50 np0005603435 systemd-logind[816]: Removed session 43.
Jan 30 23:25:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:53 np0005603435 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 23:25:53 np0005603435 systemd[1]: session-18.scope: Consumed 1min 33.698s CPU time.
Jan 30 23:25:53 np0005603435 systemd-logind[816]: Session 18 logged out. Waiting for processes to exit.
Jan 30 23:25:53 np0005603435 systemd-logind[816]: Removed session 18.
Jan 30 23:25:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:25:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:56 np0005603435 systemd-logind[816]: New session 44 of user zuul.
Jan 30 23:25:56 np0005603435 systemd[1]: Started Session 44 of User zuul.
Jan 30 23:25:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:57 np0005603435 python3.9[127528]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:25:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:25:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:25:59 np0005603435 python3.9[127684]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:25:59 np0005603435 python3.9[127836]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:26:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:00 np0005603435 python3.9[127988]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:01 np0005603435 python3.9[128111]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833560.0194066-60-31172213227463/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e7beee3c32487f8b2840eec491eb6247bc010382 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:02 np0005603435 python3.9[128263]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:02 np0005603435 python3.9[128386]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833561.5781407-60-145833139434026/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=868129f79427edaf19ca72f8670ebfbfb687c6d6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:03 np0005603435 python3.9[128538]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:03 np0005603435 python3.9[128661]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833562.9140651-60-108935636561340/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=923c7a0f21c2cc4a790bd49a02565383abcfd6cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:26:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:04 np0005603435 python3.9[128813]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:26:05 np0005603435 python3.9[128965]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:26:05 np0005603435 python3.9[129117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:06 np0005603435 python3.9[129240]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833565.3683627-119-75122761422639/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=2f533b75e0e2d14009fd3126f80a01587a6dc0da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:26:06
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'backups', 'volumes', 'images', '.rgw.root', '.mgr', 'default.rgw.log', 'vms']
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:06 np0005603435 python3.9[129392]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:26:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:26:07 np0005603435 python3.9[129515]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833566.3990853-119-53724887984323/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d35a92561f6c5c92fffe8bac879ed95466756c17 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:07 np0005603435 python3.9[129667]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:08 np0005603435 python3.9[129790]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833567.4351242-119-192490560609005/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0c10cf6af512b019fb34caf15cbb1942e5ce8c35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:26:09 np0005603435 python3.9[129942]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:26:09 np0005603435 python3.9[130094]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:26:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:10 np0005603435 python3.9[130246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:11 np0005603435 python3.9[130369]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833570.0599413-178-248920894827055/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4f00355bc6aecd044ef366692750113361d594c6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:11 np0005603435 python3.9[130521]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:12 np0005603435 python3.9[130644]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833571.3100069-178-23350844428740/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d35a92561f6c5c92fffe8bac879ed95466756c17 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:12 np0005603435 python3.9[130796]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:13 np0005603435 python3.9[130919]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833572.4431431-178-70090198029315/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=75bb5537de7db2586704cf800064493b34e83c1a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:26:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:14 np0005603435 python3.9[131071]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:26:15 np0005603435 python3.9[131223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:15 np0005603435 python3.9[131346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833574.8368735-246-23818148760321/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0b7af9532dee36953ea3073b7d033057885ae476 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:16 np0005603435 python3.9[131498]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:26:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:26:17 np0005603435 python3.9[131650]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:18 np0005603435 python3.9[131773]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833576.8611593-270-274811479475704/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0b7af9532dee36953ea3073b7d033057885ae476 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:18 np0005603435 python3.9[131925]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:26:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:26:19 np0005603435 python3.9[132077]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:20 np0005603435 python3.9[132200]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833579.004313-294-90767762214951/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0b7af9532dee36953ea3073b7d033057885ae476 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:20 np0005603435 python3.9[132352]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:26:21 np0005603435 python3.9[132504]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:22 np0005603435 python3.9[132627]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833580.986059-318-245090320046837/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0b7af9532dee36953ea3073b7d033057885ae476 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:22 np0005603435 python3.9[132779]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:26:23 np0005603435 python3.9[132931]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:26:24 np0005603435 python3.9[133054]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833583.152239-342-198246712277982/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0b7af9532dee36953ea3073b7d033057885ae476 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:24 np0005603435 python3.9[133206]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:26:25 np0005603435 python3.9[133358]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:26 np0005603435 python3.9[133481]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833585.1646872-366-90269012096145/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0b7af9532dee36953ea3073b7d033057885ae476 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:26 np0005603435 systemd-logind[816]: Session 44 logged out. Waiting for processes to exit.
Jan 30 23:26:26 np0005603435 systemd[1]: session-44.scope: Deactivated successfully.
Jan 30 23:26:26 np0005603435 systemd[1]: session-44.scope: Consumed 22.989s CPU time.
Jan 30 23:26:26 np0005603435 systemd-logind[816]: Removed session 44.
Jan 30 23:26:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:26:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:31 np0005603435 systemd-logind[816]: New session 45 of user zuul.
Jan 30 23:26:31 np0005603435 systemd[1]: Started Session 45 of User zuul.
Jan 30 23:26:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:32 np0005603435 python3.9[133661]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:33 np0005603435 python3.9[133813]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:26:34 np0005603435 python3.9[133936]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769833592.8812733-29-240623736448680/.source.conf _original_basename=ceph.conf follow=False checksum=be20d2f1ab5dc0f1d4724ca5159ec46d752d233a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:34 np0005603435 python3.9[134088]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:35 np0005603435 python3.9[134211]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769833594.4708982-29-219386400257475/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=554c90c74907bf5b649f3d413acf0f1f5c4c4df0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:35 np0005603435 systemd[1]: session-45.scope: Deactivated successfully.
Jan 30 23:26:35 np0005603435 systemd[1]: session-45.scope: Consumed 2.617s CPU time.
Jan 30 23:26:35 np0005603435 systemd-logind[816]: Session 45 logged out. Waiting for processes to exit.
Jan 30 23:26:35 np0005603435 systemd-logind[816]: Removed session 45.
Jan 30 23:26:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:26:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:26:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:26:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:26:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:26:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:26:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:26:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:26:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:26:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:26:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:26:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:26:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:26:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:26:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:26:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:26:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:26:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:26:38 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:26:38 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:26:38 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:26:38 np0005603435 podman[134379]: 2026-01-31 04:26:38.484325361 +0000 UTC m=+0.060566572 container create 5e107a6b28491780cd1284a1d685a5e6334c5aeb22eede5811d4627f7cd4db15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_cray, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 30 23:26:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:38 np0005603435 systemd[1]: Started libpod-conmon-5e107a6b28491780cd1284a1d685a5e6334c5aeb22eede5811d4627f7cd4db15.scope.
Jan 30 23:26:38 np0005603435 podman[134379]: 2026-01-31 04:26:38.455979326 +0000 UTC m=+0.032220597 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:26:38 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:26:38 np0005603435 podman[134379]: 2026-01-31 04:26:38.576890843 +0000 UTC m=+0.153132124 container init 5e107a6b28491780cd1284a1d685a5e6334c5aeb22eede5811d4627f7cd4db15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 30 23:26:38 np0005603435 podman[134379]: 2026-01-31 04:26:38.585286733 +0000 UTC m=+0.161527944 container start 5e107a6b28491780cd1284a1d685a5e6334c5aeb22eede5811d4627f7cd4db15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 30 23:26:38 np0005603435 podman[134379]: 2026-01-31 04:26:38.589077803 +0000 UTC m=+0.165319074 container attach 5e107a6b28491780cd1284a1d685a5e6334c5aeb22eede5811d4627f7cd4db15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_cray, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:26:38 np0005603435 systemd[1]: libpod-5e107a6b28491780cd1284a1d685a5e6334c5aeb22eede5811d4627f7cd4db15.scope: Deactivated successfully.
Jan 30 23:26:38 np0005603435 trusting_cray[134396]: 167 167
Jan 30 23:26:38 np0005603435 conmon[134396]: conmon 5e107a6b28491780cd12 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e107a6b28491780cd1284a1d685a5e6334c5aeb22eede5811d4627f7cd4db15.scope/container/memory.events
Jan 30 23:26:38 np0005603435 podman[134379]: 2026-01-31 04:26:38.592991037 +0000 UTC m=+0.169232258 container died 5e107a6b28491780cd1284a1d685a5e6334c5aeb22eede5811d4627f7cd4db15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_cray, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 30 23:26:38 np0005603435 systemd[1]: var-lib-containers-storage-overlay-6ca9517f580f70088e26668663e045a633f51e121b07f5f1de3dbd967b0be70c-merged.mount: Deactivated successfully.
Jan 30 23:26:38 np0005603435 podman[134379]: 2026-01-31 04:26:38.643287443 +0000 UTC m=+0.219528654 container remove 5e107a6b28491780cd1284a1d685a5e6334c5aeb22eede5811d4627f7cd4db15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 30 23:26:38 np0005603435 systemd[1]: libpod-conmon-5e107a6b28491780cd1284a1d685a5e6334c5aeb22eede5811d4627f7cd4db15.scope: Deactivated successfully.
Jan 30 23:26:38 np0005603435 podman[134419]: 2026-01-31 04:26:38.825673293 +0000 UTC m=+0.059220230 container create c2f1b9cdf4911aa4d02681703e2397bdd69f3197057a3de662d09ef424f32267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:26:38 np0005603435 systemd[1]: Started libpod-conmon-c2f1b9cdf4911aa4d02681703e2397bdd69f3197057a3de662d09ef424f32267.scope.
Jan 30 23:26:38 np0005603435 podman[134419]: 2026-01-31 04:26:38.798543468 +0000 UTC m=+0.032090475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:26:38 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:26:38 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6fb9c7047e7550dfcc1fa285e6fc8279a060cfb66b59ef2405f7eced1bba1e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:26:38 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6fb9c7047e7550dfcc1fa285e6fc8279a060cfb66b59ef2405f7eced1bba1e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:26:38 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6fb9c7047e7550dfcc1fa285e6fc8279a060cfb66b59ef2405f7eced1bba1e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:26:38 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6fb9c7047e7550dfcc1fa285e6fc8279a060cfb66b59ef2405f7eced1bba1e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:26:38 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6fb9c7047e7550dfcc1fa285e6fc8279a060cfb66b59ef2405f7eced1bba1e8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:26:38 np0005603435 podman[134419]: 2026-01-31 04:26:38.928725975 +0000 UTC m=+0.162272892 container init c2f1b9cdf4911aa4d02681703e2397bdd69f3197057a3de662d09ef424f32267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 30 23:26:38 np0005603435 podman[134419]: 2026-01-31 04:26:38.943757863 +0000 UTC m=+0.177304800 container start c2f1b9cdf4911aa4d02681703e2397bdd69f3197057a3de662d09ef424f32267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khorana, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:26:38 np0005603435 podman[134419]: 2026-01-31 04:26:38.947623915 +0000 UTC m=+0.181170862 container attach c2f1b9cdf4911aa4d02681703e2397bdd69f3197057a3de662d09ef424f32267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:26:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:26:39 np0005603435 happy_khorana[134435]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:26:39 np0005603435 happy_khorana[134435]: --> All data devices are unavailable
Jan 30 23:26:39 np0005603435 systemd[1]: libpod-c2f1b9cdf4911aa4d02681703e2397bdd69f3197057a3de662d09ef424f32267.scope: Deactivated successfully.
Jan 30 23:26:39 np0005603435 podman[134419]: 2026-01-31 04:26:39.418558591 +0000 UTC m=+0.652105538 container died c2f1b9cdf4911aa4d02681703e2397bdd69f3197057a3de662d09ef424f32267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khorana, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 30 23:26:39 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b6fb9c7047e7550dfcc1fa285e6fc8279a060cfb66b59ef2405f7eced1bba1e8-merged.mount: Deactivated successfully.
Jan 30 23:26:39 np0005603435 podman[134419]: 2026-01-31 04:26:39.467076086 +0000 UTC m=+0.700623033 container remove c2f1b9cdf4911aa4d02681703e2397bdd69f3197057a3de662d09ef424f32267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khorana, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:26:39 np0005603435 systemd[1]: libpod-conmon-c2f1b9cdf4911aa4d02681703e2397bdd69f3197057a3de662d09ef424f32267.scope: Deactivated successfully.
Jan 30 23:26:39 np0005603435 podman[134531]: 2026-01-31 04:26:39.971335694 +0000 UTC m=+0.057942059 container create 30d469f04a76afd4f8da761b4504c60a0997cdc419ad246418f831b8c9019b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sanderson, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:26:40 np0005603435 systemd[1]: Started libpod-conmon-30d469f04a76afd4f8da761b4504c60a0997cdc419ad246418f831b8c9019b2d.scope.
Jan 30 23:26:40 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:26:40 np0005603435 podman[134531]: 2026-01-31 04:26:39.946196036 +0000 UTC m=+0.032802461 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:26:40 np0005603435 podman[134531]: 2026-01-31 04:26:40.054977725 +0000 UTC m=+0.141584150 container init 30d469f04a76afd4f8da761b4504c60a0997cdc419ad246418f831b8c9019b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sanderson, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 30 23:26:40 np0005603435 podman[134531]: 2026-01-31 04:26:40.06571681 +0000 UTC m=+0.152323185 container start 30d469f04a76afd4f8da761b4504c60a0997cdc419ad246418f831b8c9019b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 30 23:26:40 np0005603435 podman[134531]: 2026-01-31 04:26:40.06990913 +0000 UTC m=+0.156515555 container attach 30d469f04a76afd4f8da761b4504c60a0997cdc419ad246418f831b8c9019b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:26:40 np0005603435 relaxed_sanderson[134547]: 167 167
Jan 30 23:26:40 np0005603435 systemd[1]: libpod-30d469f04a76afd4f8da761b4504c60a0997cdc419ad246418f831b8c9019b2d.scope: Deactivated successfully.
Jan 30 23:26:40 np0005603435 conmon[134547]: conmon 30d469f04a76afd4f8da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30d469f04a76afd4f8da761b4504c60a0997cdc419ad246418f831b8c9019b2d.scope/container/memory.events
Jan 30 23:26:40 np0005603435 podman[134531]: 2026-01-31 04:26:40.073348422 +0000 UTC m=+0.159954797 container died 30d469f04a76afd4f8da761b4504c60a0997cdc419ad246418f831b8c9019b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 30 23:26:40 np0005603435 systemd[1]: var-lib-containers-storage-overlay-e21c9e902ed80c02d8482c48084cce9fc6e0924ca5dc61f200a146355f2c6948-merged.mount: Deactivated successfully.
Jan 30 23:26:40 np0005603435 podman[134531]: 2026-01-31 04:26:40.119721905 +0000 UTC m=+0.206328280 container remove 30d469f04a76afd4f8da761b4504c60a0997cdc419ad246418f831b8c9019b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_sanderson, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:26:40 np0005603435 systemd[1]: libpod-conmon-30d469f04a76afd4f8da761b4504c60a0997cdc419ad246418f831b8c9019b2d.scope: Deactivated successfully.
Jan 30 23:26:40 np0005603435 podman[134572]: 2026-01-31 04:26:40.305811683 +0000 UTC m=+0.052998982 container create 933c1defcb48286cf2e2ee7baaaa7e1387c815a41b4e1f9f0b470d4e597c62a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 30 23:26:40 np0005603435 systemd[1]: Started libpod-conmon-933c1defcb48286cf2e2ee7baaaa7e1387c815a41b4e1f9f0b470d4e597c62a9.scope.
Jan 30 23:26:40 np0005603435 podman[134572]: 2026-01-31 04:26:40.281411583 +0000 UTC m=+0.028598932 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:26:40 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:26:40 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc195c5725043c9681887a009e3e94f396e740aa02f50ba51c227e6845af926/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:26:40 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc195c5725043c9681887a009e3e94f396e740aa02f50ba51c227e6845af926/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:26:40 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc195c5725043c9681887a009e3e94f396e740aa02f50ba51c227e6845af926/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:26:40 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc195c5725043c9681887a009e3e94f396e740aa02f50ba51c227e6845af926/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:26:40 np0005603435 systemd-logind[816]: New session 46 of user zuul.
Jan 30 23:26:40 np0005603435 systemd[1]: Started Session 46 of User zuul.
Jan 30 23:26:40 np0005603435 podman[134572]: 2026-01-31 04:26:40.414034089 +0000 UTC m=+0.161221408 container init 933c1defcb48286cf2e2ee7baaaa7e1387c815a41b4e1f9f0b470d4e597c62a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 30 23:26:40 np0005603435 podman[134572]: 2026-01-31 04:26:40.42420206 +0000 UTC m=+0.171389359 container start 933c1defcb48286cf2e2ee7baaaa7e1387c815a41b4e1f9f0b470d4e597c62a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:26:40 np0005603435 podman[134572]: 2026-01-31 04:26:40.427805976 +0000 UTC m=+0.174993275 container attach 933c1defcb48286cf2e2ee7baaaa7e1387c815a41b4e1f9f0b470d4e597c62a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 30 23:26:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]: {
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:    "0": [
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:        {
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "devices": [
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "/dev/loop3"
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            ],
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "lv_name": "ceph_lv0",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "lv_size": "21470642176",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "name": "ceph_lv0",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "tags": {
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.cluster_name": "ceph",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.crush_device_class": "",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.encrypted": "0",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.objectstore": "bluestore",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.osd_id": "0",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.type": "block",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.vdo": "0",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.with_tpm": "0"
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            },
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "type": "block",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "vg_name": "ceph_vg0"
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:        }
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:    ],
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:    "1": [
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:        {
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "devices": [
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "/dev/loop4"
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            ],
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "lv_name": "ceph_lv1",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "lv_size": "21470642176",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "name": "ceph_lv1",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "tags": {
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.cluster_name": "ceph",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.crush_device_class": "",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.encrypted": "0",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.objectstore": "bluestore",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.osd_id": "1",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.type": "block",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.vdo": "0",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.with_tpm": "0"
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            },
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "type": "block",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "vg_name": "ceph_vg1"
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:        }
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:    ],
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:    "2": [
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:        {
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "devices": [
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "/dev/loop5"
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            ],
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "lv_name": "ceph_lv2",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "lv_size": "21470642176",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "name": "ceph_lv2",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "tags": {
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.cluster_name": "ceph",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.crush_device_class": "",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.encrypted": "0",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.objectstore": "bluestore",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.osd_id": "2",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.type": "block",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.vdo": "0",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:                "ceph.with_tpm": "0"
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            },
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "type": "block",
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:            "vg_name": "ceph_vg2"
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:        }
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]:    ]
Jan 30 23:26:40 np0005603435 fervent_jackson[134590]: }
Jan 30 23:26:40 np0005603435 systemd[1]: libpod-933c1defcb48286cf2e2ee7baaaa7e1387c815a41b4e1f9f0b470d4e597c62a9.scope: Deactivated successfully.
Jan 30 23:26:40 np0005603435 podman[134572]: 2026-01-31 04:26:40.758779092 +0000 UTC m=+0.505966381 container died 933c1defcb48286cf2e2ee7baaaa7e1387c815a41b4e1f9f0b470d4e597c62a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:26:40 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0fc195c5725043c9681887a009e3e94f396e740aa02f50ba51c227e6845af926-merged.mount: Deactivated successfully.
Jan 30 23:26:40 np0005603435 podman[134572]: 2026-01-31 04:26:40.811552378 +0000 UTC m=+0.558739667 container remove 933c1defcb48286cf2e2ee7baaaa7e1387c815a41b4e1f9f0b470d4e597c62a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:26:40 np0005603435 systemd[1]: libpod-conmon-933c1defcb48286cf2e2ee7baaaa7e1387c815a41b4e1f9f0b470d4e597c62a9.scope: Deactivated successfully.
Jan 30 23:26:41 np0005603435 podman[134826]: 2026-01-31 04:26:41.288409043 +0000 UTC m=+0.050754298 container create dabc40eec3c9c43bfdc1d86589d6745c7ae0b05117c71262ae9bbcf50e681ba5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Jan 30 23:26:41 np0005603435 systemd[1]: Started libpod-conmon-dabc40eec3c9c43bfdc1d86589d6745c7ae0b05117c71262ae9bbcf50e681ba5.scope.
Jan 30 23:26:41 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:26:41 np0005603435 podman[134826]: 2026-01-31 04:26:41.265009847 +0000 UTC m=+0.027355162 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:26:41 np0005603435 podman[134826]: 2026-01-31 04:26:41.374892091 +0000 UTC m=+0.137237396 container init dabc40eec3c9c43bfdc1d86589d6745c7ae0b05117c71262ae9bbcf50e681ba5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 30 23:26:41 np0005603435 podman[134826]: 2026-01-31 04:26:41.382380869 +0000 UTC m=+0.144726124 container start dabc40eec3c9c43bfdc1d86589d6745c7ae0b05117c71262ae9bbcf50e681ba5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:26:41 np0005603435 podman[134826]: 2026-01-31 04:26:41.38619612 +0000 UTC m=+0.148541385 container attach dabc40eec3c9c43bfdc1d86589d6745c7ae0b05117c71262ae9bbcf50e681ba5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:26:41 np0005603435 relaxed_lederberg[134842]: 167 167
Jan 30 23:26:41 np0005603435 systemd[1]: libpod-dabc40eec3c9c43bfdc1d86589d6745c7ae0b05117c71262ae9bbcf50e681ba5.scope: Deactivated successfully.
Jan 30 23:26:41 np0005603435 podman[134826]: 2026-01-31 04:26:41.388899175 +0000 UTC m=+0.151244440 container died dabc40eec3c9c43bfdc1d86589d6745c7ae0b05117c71262ae9bbcf50e681ba5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:26:41 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0ce53d9ddeecac950c9c569795ea780897847ec4b6dc520bf18c484f5dacae15-merged.mount: Deactivated successfully.
Jan 30 23:26:41 np0005603435 python3.9[134813]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:26:41 np0005603435 podman[134826]: 2026-01-31 04:26:41.425719901 +0000 UTC m=+0.188065166 container remove dabc40eec3c9c43bfdc1d86589d6745c7ae0b05117c71262ae9bbcf50e681ba5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:26:41 np0005603435 systemd[1]: libpod-conmon-dabc40eec3c9c43bfdc1d86589d6745c7ae0b05117c71262ae9bbcf50e681ba5.scope: Deactivated successfully.
Jan 30 23:26:41 np0005603435 podman[134870]: 2026-01-31 04:26:41.587326086 +0000 UTC m=+0.043477315 container create 2c16d54b5eba5f94d62a0e1ffc6e806c6e93d1d78adea84dba696b51b2599f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_brattain, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:26:41 np0005603435 systemd[1]: Started libpod-conmon-2c16d54b5eba5f94d62a0e1ffc6e806c6e93d1d78adea84dba696b51b2599f9b.scope.
Jan 30 23:26:41 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:26:41 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1252d7f3e06a85daa7a4b212c9b5e0c3c9c2d6781747c6f7d2b763c80698e229/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:26:41 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1252d7f3e06a85daa7a4b212c9b5e0c3c9c2d6781747c6f7d2b763c80698e229/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:26:41 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1252d7f3e06a85daa7a4b212c9b5e0c3c9c2d6781747c6f7d2b763c80698e229/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:26:41 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1252d7f3e06a85daa7a4b212c9b5e0c3c9c2d6781747c6f7d2b763c80698e229/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:26:41 np0005603435 podman[134870]: 2026-01-31 04:26:41.567858473 +0000 UTC m=+0.024009772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:26:41 np0005603435 podman[134870]: 2026-01-31 04:26:41.686126797 +0000 UTC m=+0.142278056 container init 2c16d54b5eba5f94d62a0e1ffc6e806c6e93d1d78adea84dba696b51b2599f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:26:41 np0005603435 podman[134870]: 2026-01-31 04:26:41.70262108 +0000 UTC m=+0.158772329 container start 2c16d54b5eba5f94d62a0e1ffc6e806c6e93d1d78adea84dba696b51b2599f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:26:41 np0005603435 podman[134870]: 2026-01-31 04:26:41.706751128 +0000 UTC m=+0.162902427 container attach 2c16d54b5eba5f94d62a0e1ffc6e806c6e93d1d78adea84dba696b51b2599f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_brattain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:26:42 np0005603435 lvm[135118]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:26:42 np0005603435 lvm[135117]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:26:42 np0005603435 lvm[135117]: VG ceph_vg0 finished
Jan 30 23:26:42 np0005603435 lvm[135118]: VG ceph_vg1 finished
Jan 30 23:26:42 np0005603435 lvm[135120]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:26:42 np0005603435 lvm[135120]: VG ceph_vg2 finished
Jan 30 23:26:42 np0005603435 angry_brattain[134887]: {}
Jan 30 23:26:42 np0005603435 python3.9[135112]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:26:42 np0005603435 systemd[1]: libpod-2c16d54b5eba5f94d62a0e1ffc6e806c6e93d1d78adea84dba696b51b2599f9b.scope: Deactivated successfully.
Jan 30 23:26:42 np0005603435 podman[134870]: 2026-01-31 04:26:42.479745031 +0000 UTC m=+0.935896280 container died 2c16d54b5eba5f94d62a0e1ffc6e806c6e93d1d78adea84dba696b51b2599f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_brattain, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:26:42 np0005603435 systemd[1]: libpod-2c16d54b5eba5f94d62a0e1ffc6e806c6e93d1d78adea84dba696b51b2599f9b.scope: Consumed 1.175s CPU time.
Jan 30 23:26:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:42 np0005603435 systemd[1]: var-lib-containers-storage-overlay-1252d7f3e06a85daa7a4b212c9b5e0c3c9c2d6781747c6f7d2b763c80698e229-merged.mount: Deactivated successfully.
Jan 30 23:26:42 np0005603435 podman[134870]: 2026-01-31 04:26:42.528264476 +0000 UTC m=+0.984415725 container remove 2c16d54b5eba5f94d62a0e1ffc6e806c6e93d1d78adea84dba696b51b2599f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_brattain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 30 23:26:42 np0005603435 systemd[1]: libpod-conmon-2c16d54b5eba5f94d62a0e1ffc6e806c6e93d1d78adea84dba696b51b2599f9b.scope: Deactivated successfully.
Jan 30 23:26:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:26:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:26:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:26:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:26:43 np0005603435 python3.9[135312]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:26:43 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:26:43 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:26:43 np0005603435 python3.9[135462]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:26:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:26:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:44 np0005603435 python3.9[135614]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 30 23:26:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:46 np0005603435 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 30 23:26:47 np0005603435 python3.9[135770]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:26:48 np0005603435 python3.9[135854]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:26:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:26:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:50 np0005603435 python3.9[136007]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 30 23:26:51 np0005603435 python3[136162]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 30 23:26:52 np0005603435 python3.9[136314]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:53 np0005603435 python3.9[136466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:53 np0005603435 python3.9[136544]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:26:54 np0005603435 python3.9[136696]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:54 np0005603435 python3.9[136774]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.w3s7ty_2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:55 np0005603435 python3.9[136926]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:56 np0005603435 python3.9[137004]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:56 np0005603435 python3.9[137156]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:26:57 np0005603435 python3[137309]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 30 23:26:58 np0005603435 python3.9[137461]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:26:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:26:59 np0005603435 python3.9[137586]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833617.8960645-152-252799412091865/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:26:59.159149) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833619159207, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1728, "num_deletes": 252, "total_data_size": 2449693, "memory_usage": 2497384, "flush_reason": "Manual Compaction"}
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833619171745, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1443480, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7394, "largest_seqno": 9121, "table_properties": {"data_size": 1437840, "index_size": 2523, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16598, "raw_average_key_size": 20, "raw_value_size": 1424428, "raw_average_value_size": 1782, "num_data_blocks": 119, "num_entries": 799, "num_filter_entries": 799, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833465, "oldest_key_time": 1769833465, "file_creation_time": 1769833619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 12662 microseconds, and 5218 cpu microseconds.
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:26:59.171814) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1443480 bytes OK
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:26:59.171839) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:26:59.173746) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:26:59.173768) EVENT_LOG_v1 {"time_micros": 1769833619173761, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:26:59.173791) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2442025, prev total WAL file size 2442025, number of live WAL files 2.
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:26:59.174642) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1409KB)], [20(7514KB)]
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833619174740, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9138110, "oldest_snapshot_seqno": -1}
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3451 keys, 7155658 bytes, temperature: kUnknown
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833619218365, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7155658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7129138, "index_size": 16810, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 82521, "raw_average_key_size": 23, "raw_value_size": 7063385, "raw_average_value_size": 2046, "num_data_blocks": 744, "num_entries": 3451, "num_filter_entries": 3451, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769833619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:26:59.218586) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7155658 bytes
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:26:59.220160) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.2 rd, 163.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.3 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(11.3) write-amplify(5.0) OK, records in: 3895, records dropped: 444 output_compression: NoCompression
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:26:59.220190) EVENT_LOG_v1 {"time_micros": 1769833619220176, "job": 6, "event": "compaction_finished", "compaction_time_micros": 43679, "compaction_time_cpu_micros": 24223, "output_level": 6, "num_output_files": 1, "total_output_size": 7155658, "num_input_records": 3895, "num_output_records": 3451, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833619220508, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833619221527, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:26:59.174509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:26:59.221639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:26:59.221650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:26:59.221653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:26:59.221657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:26:59 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:26:59.221660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:26:59 np0005603435 python3.9[137738]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:27:00 np0005603435 python3.9[137863]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833619.2875216-167-19004021967425/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:01 np0005603435 python3.9[138015]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:27:01 np0005603435 python3.9[138140]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833620.6667223-182-8629639094617/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:02 np0005603435 python3.9[138292]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:27:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:03 np0005603435 python3.9[138417]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833621.955158-197-212022380526999/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:03 np0005603435 python3.9[138569]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:27:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:27:04 np0005603435 python3.9[138694]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833623.2739713-212-189665961306436/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:05 np0005603435 python3.9[138846]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:05 np0005603435 python3.9[138998]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:27:06
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'volumes', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'backups', '.rgw.root']
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:06 np0005603435 python3.9[139153]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:27:06 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:27:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:27:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:27:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:27:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:27:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:27:07 np0005603435 python3.9[139305]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:27:08 np0005603435 python3.9[139458]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:27:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:08 np0005603435 python3.9[139612]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:27:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:27:09 np0005603435 python3.9[139767]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:10 np0005603435 python3.9[139917]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:27:11 np0005603435 python3.9[140070]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:27:11 np0005603435 ovs-vsctl[140071]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 30 23:27:12 np0005603435 python3.9[140223]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:27:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:13 np0005603435 python3.9[140378]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:27:13 np0005603435 ovs-vsctl[140379]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 30 23:27:13 np0005603435 python3.9[140529]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:27:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:27:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:14 np0005603435 python3.9[140683]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:27:15 np0005603435 python3.9[140835]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:27:15 np0005603435 python3.9[140913]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:27:16 np0005603435 python3.9[141065]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:27:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:27:16 np0005603435 python3.9[141143]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:27:17 np0005603435 python3.9[141295]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:18 np0005603435 python3.9[141447]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:27:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:18 np0005603435 python3.9[141525]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:27:19 np0005603435 python3.9[141677]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:27:19 np0005603435 python3.9[141755]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:20 np0005603435 python3.9[141907]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:27:20 np0005603435 systemd[1]: Reloading.
Jan 30 23:27:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:20 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:27:20 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:27:21 np0005603435 python3.9[142097]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:27:21 np0005603435 python3.9[142175]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:22 np0005603435 python3.9[142327]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:27:23 np0005603435 python3.9[142405]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:24 np0005603435 python3.9[142557]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:27:24 np0005603435 systemd[1]: Reloading.
Jan 30 23:27:24 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:27:24 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:27:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:27:24 np0005603435 systemd[1]: Starting Create netns directory...
Jan 30 23:27:24 np0005603435 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 30 23:27:24 np0005603435 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 30 23:27:24 np0005603435 systemd[1]: Finished Create netns directory.
Jan 30 23:27:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:25 np0005603435 python3.9[142750]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:27:26 np0005603435 python3.9[142902]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:27:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:26 np0005603435 python3.9[143025]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769833645.4880004-463-175079058841410/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:27:27 np0005603435 python3.9[143177]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:28 np0005603435 python3.9[143329]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:27:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:28 np0005603435 python3.9[143481]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:27:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:27:29 np0005603435 python3.9[143604]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769833648.4315703-496-150996242066610/.source.json _original_basename=.o7tyys0g follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:30 np0005603435 python3.9[143754]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:32 np0005603435 python3.9[144177]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 30 23:27:33 np0005603435 python3.9[144329]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 30 23:27:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:27:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:34 np0005603435 python3[144481]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 30 23:27:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:27:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:27:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:27:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:27:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:27:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:27:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:27:40 np0005603435 podman[144496]: 2026-01-31 04:27:40.011187157 +0000 UTC m=+5.300124743 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 30 23:27:40 np0005603435 podman[144616]: 2026-01-31 04:27:40.126540952 +0000 UTC m=+0.042403578 container create f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:27:40 np0005603435 podman[144616]: 2026-01-31 04:27:40.104588501 +0000 UTC m=+0.020451167 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 30 23:27:40 np0005603435 python3[144481]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=d5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 30 23:27:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:40 np0005603435 python3.9[144805]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:27:41 np0005603435 python3.9[144959]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:42 np0005603435 python3.9[145035]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:27:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:42 np0005603435 python3.9[145198]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769833662.1938224-574-268572387759989/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:27:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:27:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:27:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:27:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:27:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:27:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:27:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:27:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:27:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:27:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:27:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:27:43 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:27:43 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:27:43 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:27:43 np0005603435 python3.9[145332]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 30 23:27:43 np0005603435 systemd[1]: Reloading.
Jan 30 23:27:43 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:27:43 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:27:43 np0005603435 podman[145442]: 2026-01-31 04:27:43.675322744 +0000 UTC m=+0.040124344 container create 4c7c98ee07ba7d0cb624c7ebb06de91d3ac3c0cad9b7ad248bf4d1873ee0d748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 30 23:27:43 np0005603435 podman[145442]: 2026-01-31 04:27:43.657256294 +0000 UTC m=+0.022057984 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:27:43 np0005603435 systemd[1]: Started libpod-conmon-4c7c98ee07ba7d0cb624c7ebb06de91d3ac3c0cad9b7ad248bf4d1873ee0d748.scope.
Jan 30 23:27:43 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:27:43 np0005603435 podman[145442]: 2026-01-31 04:27:43.817563355 +0000 UTC m=+0.182364975 container init 4c7c98ee07ba7d0cb624c7ebb06de91d3ac3c0cad9b7ad248bf4d1873ee0d748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_turing, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:27:43 np0005603435 podman[145442]: 2026-01-31 04:27:43.823330849 +0000 UTC m=+0.188132479 container start 4c7c98ee07ba7d0cb624c7ebb06de91d3ac3c0cad9b7ad248bf4d1873ee0d748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_turing, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:27:43 np0005603435 stupefied_turing[145458]: 167 167
Jan 30 23:27:43 np0005603435 systemd[1]: libpod-4c7c98ee07ba7d0cb624c7ebb06de91d3ac3c0cad9b7ad248bf4d1873ee0d748.scope: Deactivated successfully.
Jan 30 23:27:43 np0005603435 conmon[145458]: conmon 4c7c98ee07ba7d0cb624 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c7c98ee07ba7d0cb624c7ebb06de91d3ac3c0cad9b7ad248bf4d1873ee0d748.scope/container/memory.events
Jan 30 23:27:43 np0005603435 podman[145442]: 2026-01-31 04:27:43.827358763 +0000 UTC m=+0.192160383 container attach 4c7c98ee07ba7d0cb624c7ebb06de91d3ac3c0cad9b7ad248bf4d1873ee0d748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_turing, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:27:43 np0005603435 podman[145442]: 2026-01-31 04:27:43.827990108 +0000 UTC m=+0.192791708 container died 4c7c98ee07ba7d0cb624c7ebb06de91d3ac3c0cad9b7ad248bf4d1873ee0d748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:27:43 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0f8ad3a7d7933342c579ef7f6741300134a7a50b8ec7d4ed7997306420370538-merged.mount: Deactivated successfully.
Jan 30 23:27:43 np0005603435 podman[145442]: 2026-01-31 04:27:43.864989689 +0000 UTC m=+0.229791289 container remove 4c7c98ee07ba7d0cb624c7ebb06de91d3ac3c0cad9b7ad248bf4d1873ee0d748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_turing, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:27:43 np0005603435 systemd[1]: libpod-conmon-4c7c98ee07ba7d0cb624c7ebb06de91d3ac3c0cad9b7ad248bf4d1873ee0d748.scope: Deactivated successfully.
Jan 30 23:27:44 np0005603435 podman[145506]: 2026-01-31 04:27:44.006563184 +0000 UTC m=+0.044032456 container create 273c2a1f07f621ca23b5b645b6055bc098b2b82e1e74de59d689b048b15c4de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 30 23:27:44 np0005603435 systemd[1]: Started libpod-conmon-273c2a1f07f621ca23b5b645b6055bc098b2b82e1e74de59d689b048b15c4de8.scope.
Jan 30 23:27:44 np0005603435 podman[145506]: 2026-01-31 04:27:43.982952144 +0000 UTC m=+0.020421456 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:27:44 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:27:44 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd1905c36efd3bcf9833bed208720fa01d22143b2be7cdd056d7674dad91425/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:27:44 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd1905c36efd3bcf9833bed208720fa01d22143b2be7cdd056d7674dad91425/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:27:44 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd1905c36efd3bcf9833bed208720fa01d22143b2be7cdd056d7674dad91425/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:27:44 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd1905c36efd3bcf9833bed208720fa01d22143b2be7cdd056d7674dad91425/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:27:44 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd1905c36efd3bcf9833bed208720fa01d22143b2be7cdd056d7674dad91425/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:27:44 np0005603435 podman[145506]: 2026-01-31 04:27:44.109832517 +0000 UTC m=+0.147301779 container init 273c2a1f07f621ca23b5b645b6055bc098b2b82e1e74de59d689b048b15c4de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 30 23:27:44 np0005603435 podman[145506]: 2026-01-31 04:27:44.124773855 +0000 UTC m=+0.162243117 container start 273c2a1f07f621ca23b5b645b6055bc098b2b82e1e74de59d689b048b15c4de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:27:44 np0005603435 podman[145506]: 2026-01-31 04:27:44.129175127 +0000 UTC m=+0.166644409 container attach 273c2a1f07f621ca23b5b645b6055bc098b2b82e1e74de59d689b048b15c4de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:27:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:27:44 np0005603435 python3.9[145578]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:27:44 np0005603435 systemd[1]: Reloading.
Jan 30 23:27:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:44 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:27:44 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:27:44 np0005603435 strange_easley[145546]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:27:44 np0005603435 strange_easley[145546]: --> All data devices are unavailable
Jan 30 23:27:44 np0005603435 podman[145632]: 2026-01-31 04:27:44.749402622 +0000 UTC m=+0.044833784 container died 273c2a1f07f621ca23b5b645b6055bc098b2b82e1e74de59d689b048b15c4de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_easley, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 30 23:27:44 np0005603435 systemd[1]: libpod-273c2a1f07f621ca23b5b645b6055bc098b2b82e1e74de59d689b048b15c4de8.scope: Deactivated successfully.
Jan 30 23:27:44 np0005603435 systemd[1]: var-lib-containers-storage-overlay-bdd1905c36efd3bcf9833bed208720fa01d22143b2be7cdd056d7674dad91425-merged.mount: Deactivated successfully.
Jan 30 23:27:44 np0005603435 systemd[1]: Starting ovn_controller container...
Jan 30 23:27:44 np0005603435 podman[145632]: 2026-01-31 04:27:44.804708519 +0000 UTC m=+0.100139601 container remove 273c2a1f07f621ca23b5b645b6055bc098b2b82e1e74de59d689b048b15c4de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_easley, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 30 23:27:44 np0005603435 systemd[1]: libpod-conmon-273c2a1f07f621ca23b5b645b6055bc098b2b82e1e74de59d689b048b15c4de8.scope: Deactivated successfully.
Jan 30 23:27:44 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:27:44 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/230dd533c933be8e8558fc06a526f07b6e0bf0dd0de88abc254d93f03af3c708/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 30 23:27:44 np0005603435 systemd[1]: Started /usr/bin/podman healthcheck run f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d.
Jan 30 23:27:44 np0005603435 podman[145651]: 2026-01-31 04:27:44.960821243 +0000 UTC m=+0.134567293 container init f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Jan 30 23:27:44 np0005603435 ovn_controller[145670]: + sudo -E kolla_set_configs
Jan 30 23:27:44 np0005603435 podman[145651]: 2026-01-31 04:27:44.986452659 +0000 UTC m=+0.160198629 container start f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:27:44 np0005603435 edpm-start-podman-container[145651]: ovn_controller
Jan 30 23:27:45 np0005603435 systemd[1]: Created slice User Slice of UID 0.
Jan 30 23:27:45 np0005603435 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 30 23:27:45 np0005603435 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 30 23:27:45 np0005603435 systemd[1]: Starting User Manager for UID 0...
Jan 30 23:27:45 np0005603435 edpm-start-podman-container[145649]: Creating additional drop-in dependency for "ovn_controller" (f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d)
Jan 30 23:27:45 np0005603435 podman[145716]: 2026-01-31 04:27:45.070146917 +0000 UTC m=+0.074807452 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:27:45 np0005603435 systemd[1]: f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d-40ac7cec62732cb8.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 23:27:45 np0005603435 systemd[1]: f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d-40ac7cec62732cb8.service: Failed with result 'exit-code'.
Jan 30 23:27:45 np0005603435 systemd[1]: Reloading.
Jan 30 23:27:45 np0005603435 systemd[145753]: Queued start job for default target Main User Target.
Jan 30 23:27:45 np0005603435 systemd[145753]: Created slice User Application Slice.
Jan 30 23:27:45 np0005603435 systemd[145753]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 30 23:27:45 np0005603435 systemd[145753]: Started Daily Cleanup of User's Temporary Directories.
Jan 30 23:27:45 np0005603435 systemd[145753]: Reached target Paths.
Jan 30 23:27:45 np0005603435 systemd[145753]: Reached target Timers.
Jan 30 23:27:45 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:27:45 np0005603435 systemd[145753]: Starting D-Bus User Message Bus Socket...
Jan 30 23:27:45 np0005603435 systemd[145753]: Starting Create User's Volatile Files and Directories...
Jan 30 23:27:45 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:27:45 np0005603435 systemd[145753]: Finished Create User's Volatile Files and Directories.
Jan 30 23:27:45 np0005603435 systemd[145753]: Listening on D-Bus User Message Bus Socket.
Jan 30 23:27:45 np0005603435 systemd[145753]: Reached target Sockets.
Jan 30 23:27:45 np0005603435 systemd[145753]: Reached target Basic System.
Jan 30 23:27:45 np0005603435 systemd[145753]: Reached target Main User Target.
Jan 30 23:27:45 np0005603435 systemd[145753]: Startup finished in 187ms.
Jan 30 23:27:45 np0005603435 podman[145825]: 2026-01-31 04:27:45.314690739 +0000 UTC m=+0.039479140 container create 36df2ab4dab997baa31e75d9d2ec9b086ea60a9e0b9db256ee0147f16846eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:27:45 np0005603435 podman[145825]: 2026-01-31 04:27:45.295935072 +0000 UTC m=+0.020723513 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:27:45 np0005603435 systemd[1]: Started User Manager for UID 0.
Jan 30 23:27:45 np0005603435 systemd[1]: Started ovn_controller container.
Jan 30 23:27:45 np0005603435 systemd[1]: Started libpod-conmon-36df2ab4dab997baa31e75d9d2ec9b086ea60a9e0b9db256ee0147f16846eb5e.scope.
Jan 30 23:27:45 np0005603435 systemd[1]: Started Session c1 of User root.
Jan 30 23:27:45 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:27:45 np0005603435 podman[145825]: 2026-01-31 04:27:45.444411987 +0000 UTC m=+0.169200398 container init 36df2ab4dab997baa31e75d9d2ec9b086ea60a9e0b9db256ee0147f16846eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 30 23:27:45 np0005603435 podman[145825]: 2026-01-31 04:27:45.454953002 +0000 UTC m=+0.179741403 container start 36df2ab4dab997baa31e75d9d2ec9b086ea60a9e0b9db256ee0147f16846eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:27:45 np0005603435 podman[145825]: 2026-01-31 04:27:45.458376492 +0000 UTC m=+0.183164883 container attach 36df2ab4dab997baa31e75d9d2ec9b086ea60a9e0b9db256ee0147f16846eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:27:45 np0005603435 systemd[1]: libpod-36df2ab4dab997baa31e75d9d2ec9b086ea60a9e0b9db256ee0147f16846eb5e.scope: Deactivated successfully.
Jan 30 23:27:45 np0005603435 optimistic_chatterjee[145842]: 167 167
Jan 30 23:27:45 np0005603435 conmon[145842]: conmon 36df2ab4dab997baa31e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36df2ab4dab997baa31e75d9d2ec9b086ea60a9e0b9db256ee0147f16846eb5e.scope/container/memory.events
Jan 30 23:27:45 np0005603435 podman[145825]: 2026-01-31 04:27:45.464528475 +0000 UTC m=+0.189316916 container died 36df2ab4dab997baa31e75d9d2ec9b086ea60a9e0b9db256ee0147f16846eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: INFO:__main__:Validating config file
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: INFO:__main__:Writing out command to execute
Jan 30 23:27:45 np0005603435 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: ++ cat /run_command
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: + ARGS=
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: + sudo kolla_copy_cacerts
Jan 30 23:27:45 np0005603435 podman[145825]: 2026-01-31 04:27:45.497536003 +0000 UTC m=+0.222324404 container remove 36df2ab4dab997baa31e75d9d2ec9b086ea60a9e0b9db256ee0147f16846eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 30 23:27:45 np0005603435 systemd[1]: Started Session c2 of User root.
Jan 30 23:27:45 np0005603435 systemd[1]: libpod-conmon-36df2ab4dab997baa31e75d9d2ec9b086ea60a9e0b9db256ee0147f16846eb5e.scope: Deactivated successfully.
Jan 30 23:27:45 np0005603435 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: + [[ ! -n '' ]]
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: + . kolla_extend_start
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: + umask 0022
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 30 23:27:45 np0005603435 NetworkManager[49097]: <info>  [1769833665.5646] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 30 23:27:45 np0005603435 NetworkManager[49097]: <info>  [1769833665.5661] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 23:27:45 np0005603435 NetworkManager[49097]: <warn>  [1769833665.5666] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 30 23:27:45 np0005603435 NetworkManager[49097]: <info>  [1769833665.5682] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 30 23:27:45 np0005603435 NetworkManager[49097]: <info>  [1769833665.5693] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 30 23:27:45 np0005603435 NetworkManager[49097]: <info>  [1769833665.5699] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 30 23:27:45 np0005603435 kernel: br-int: entered promiscuous mode
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00019|main|INFO|OVS feature set changed, force recompute.
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 30 23:27:45 np0005603435 NetworkManager[49097]: <info>  [1769833665.6024] manager: (ovn-45f4d3-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 30 23:27:45 np0005603435 systemd-udevd[145907]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:27:45 np0005603435 kernel: genev_sys_6081: entered promiscuous mode
Jan 30 23:27:45 np0005603435 systemd-udevd[145910]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:27:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:27:45Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 30 23:27:45 np0005603435 NetworkManager[49097]: <info>  [1769833665.6240] device (genev_sys_6081): carrier: link connected
Jan 30 23:27:45 np0005603435 NetworkManager[49097]: <info>  [1769833665.6249] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 30 23:27:45 np0005603435 podman[145899]: 2026-01-31 04:27:45.661353266 +0000 UTC m=+0.054051469 container create 288752c5e00c486b6ae7390f1ffc4965ea64a42c73d37f959b5a9304e458a650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 30 23:27:45 np0005603435 systemd[1]: Started libpod-conmon-288752c5e00c486b6ae7390f1ffc4965ea64a42c73d37f959b5a9304e458a650.scope.
Jan 30 23:27:45 np0005603435 podman[145899]: 2026-01-31 04:27:45.636021076 +0000 UTC m=+0.028719279 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:27:45 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:27:45 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd537db345f73dde194de6ad89b16775324c3e9a7fdca7e5459aa349b9e1790d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:27:45 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd537db345f73dde194de6ad89b16775324c3e9a7fdca7e5459aa349b9e1790d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:27:45 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd537db345f73dde194de6ad89b16775324c3e9a7fdca7e5459aa349b9e1790d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:27:45 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd537db345f73dde194de6ad89b16775324c3e9a7fdca7e5459aa349b9e1790d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:27:45 np0005603435 podman[145899]: 2026-01-31 04:27:45.783733454 +0000 UTC m=+0.176431617 container init 288752c5e00c486b6ae7390f1ffc4965ea64a42c73d37f959b5a9304e458a650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 30 23:27:45 np0005603435 systemd[1]: var-lib-containers-storage-overlay-997c0b38c43ecf29f834b8d6dff17eb8ffe878d9de7d97356bbb91b1114b4719-merged.mount: Deactivated successfully.
Jan 30 23:27:45 np0005603435 podman[145899]: 2026-01-31 04:27:45.791525836 +0000 UTC m=+0.184223999 container start 288752c5e00c486b6ae7390f1ffc4965ea64a42c73d37f959b5a9304e458a650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:27:45 np0005603435 podman[145899]: 2026-01-31 04:27:45.795956379 +0000 UTC m=+0.188654622 container attach 288752c5e00c486b6ae7390f1ffc4965ea64a42c73d37f959b5a9304e458a650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]: {
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:    "0": [
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:        {
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "devices": [
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "/dev/loop3"
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            ],
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "lv_name": "ceph_lv0",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "lv_size": "21470642176",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "name": "ceph_lv0",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "tags": {
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.cluster_name": "ceph",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.crush_device_class": "",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.encrypted": "0",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.objectstore": "bluestore",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.osd_id": "0",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.type": "block",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.vdo": "0",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.with_tpm": "0"
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            },
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "type": "block",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "vg_name": "ceph_vg0"
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:        }
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:    ],
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:    "1": [
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:        {
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "devices": [
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "/dev/loop4"
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            ],
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "lv_name": "ceph_lv1",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "lv_size": "21470642176",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "name": "ceph_lv1",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "tags": {
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.cluster_name": "ceph",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.crush_device_class": "",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.encrypted": "0",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.objectstore": "bluestore",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.osd_id": "1",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.type": "block",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.vdo": "0",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.with_tpm": "0"
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            },
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "type": "block",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "vg_name": "ceph_vg1"
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:        }
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:    ],
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:    "2": [
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:        {
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "devices": [
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "/dev/loop5"
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            ],
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "lv_name": "ceph_lv2",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "lv_size": "21470642176",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "name": "ceph_lv2",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "tags": {
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.cluster_name": "ceph",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.crush_device_class": "",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.encrypted": "0",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.objectstore": "bluestore",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.osd_id": "2",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.type": "block",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.vdo": "0",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:                "ceph.with_tpm": "0"
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            },
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "type": "block",
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:            "vg_name": "ceph_vg2"
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:        }
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]:    ]
Jan 30 23:27:46 np0005603435 amazing_hofstadter[145945]: }
Jan 30 23:27:46 np0005603435 systemd[1]: libpod-288752c5e00c486b6ae7390f1ffc4965ea64a42c73d37f959b5a9304e458a650.scope: Deactivated successfully.
Jan 30 23:27:46 np0005603435 podman[145899]: 2026-01-31 04:27:46.111674867 +0000 UTC m=+0.504373050 container died 288752c5e00c486b6ae7390f1ffc4965ea64a42c73d37f959b5a9304e458a650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 30 23:27:46 np0005603435 systemd[1]: var-lib-containers-storage-overlay-bd537db345f73dde194de6ad89b16775324c3e9a7fdca7e5459aa349b9e1790d-merged.mount: Deactivated successfully.
Jan 30 23:27:46 np0005603435 podman[145899]: 2026-01-31 04:27:46.164920756 +0000 UTC m=+0.557618949 container remove 288752c5e00c486b6ae7390f1ffc4965ea64a42c73d37f959b5a9304e458a650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:27:46 np0005603435 systemd[1]: libpod-conmon-288752c5e00c486b6ae7390f1ffc4965ea64a42c73d37f959b5a9304e458a650.scope: Deactivated successfully.
Jan 30 23:27:46 np0005603435 python3.9[146063]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 30 23:27:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:46 np0005603435 podman[146155]: 2026-01-31 04:27:46.683316021 +0000 UTC m=+0.110772509 container create a3148e36868265094b6c9761a209738ab805f4d41313f949a94c8661ec22391d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_joliot, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:27:46 np0005603435 podman[146155]: 2026-01-31 04:27:46.598804634 +0000 UTC m=+0.026261122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:27:46 np0005603435 systemd[1]: Started libpod-conmon-a3148e36868265094b6c9761a209738ab805f4d41313f949a94c8661ec22391d.scope.
Jan 30 23:27:46 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:27:46 np0005603435 podman[146155]: 2026-01-31 04:27:46.785201412 +0000 UTC m=+0.212657960 container init a3148e36868265094b6c9761a209738ab805f4d41313f949a94c8661ec22391d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_joliot, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:27:46 np0005603435 podman[146155]: 2026-01-31 04:27:46.793747811 +0000 UTC m=+0.221204289 container start a3148e36868265094b6c9761a209738ab805f4d41313f949a94c8661ec22391d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 30 23:27:46 np0005603435 podman[146155]: 2026-01-31 04:27:46.79796486 +0000 UTC m=+0.225421368 container attach a3148e36868265094b6c9761a209738ab805f4d41313f949a94c8661ec22391d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 30 23:27:46 np0005603435 vigilant_joliot[146172]: 167 167
Jan 30 23:27:46 np0005603435 systemd[1]: libpod-a3148e36868265094b6c9761a209738ab805f4d41313f949a94c8661ec22391d.scope: Deactivated successfully.
Jan 30 23:27:46 np0005603435 podman[146155]: 2026-01-31 04:27:46.800295804 +0000 UTC m=+0.227752322 container died a3148e36868265094b6c9761a209738ab805f4d41313f949a94c8661ec22391d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_joliot, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 30 23:27:46 np0005603435 systemd[1]: var-lib-containers-storage-overlay-ba14b5e41a74f413df7723004ad1b6471d330abf9a4d8d5fc48e68e2e25ffe2b-merged.mount: Deactivated successfully.
Jan 30 23:27:46 np0005603435 podman[146155]: 2026-01-31 04:27:46.846475629 +0000 UTC m=+0.273932117 container remove a3148e36868265094b6c9761a209738ab805f4d41313f949a94c8661ec22391d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_joliot, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:27:46 np0005603435 systemd[1]: libpod-conmon-a3148e36868265094b6c9761a209738ab805f4d41313f949a94c8661ec22391d.scope: Deactivated successfully.
Jan 30 23:27:46 np0005603435 podman[146268]: 2026-01-31 04:27:46.999210943 +0000 UTC m=+0.050997958 container create 9c5b698eb5e6852b13b9b1500c2b2d62086dcd5a53d21c192c881836a66613ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_easley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 30 23:27:47 np0005603435 systemd[1]: Started libpod-conmon-9c5b698eb5e6852b13b9b1500c2b2d62086dcd5a53d21c192c881836a66613ca.scope.
Jan 30 23:27:47 np0005603435 podman[146268]: 2026-01-31 04:27:46.97501691 +0000 UTC m=+0.026803945 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:27:47 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:27:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a50c51d60e653adae7c8368631a84947d5408dc45ccda41d8b03853752fc884e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:27:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a50c51d60e653adae7c8368631a84947d5408dc45ccda41d8b03853752fc884e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:27:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a50c51d60e653adae7c8368631a84947d5408dc45ccda41d8b03853752fc884e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:27:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a50c51d60e653adae7c8368631a84947d5408dc45ccda41d8b03853752fc884e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:27:47 np0005603435 podman[146268]: 2026-01-31 04:27:47.100382938 +0000 UTC m=+0.152169993 container init 9c5b698eb5e6852b13b9b1500c2b2d62086dcd5a53d21c192c881836a66613ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_easley, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:27:47 np0005603435 podman[146268]: 2026-01-31 04:27:47.112587232 +0000 UTC m=+0.164374277 container start 9c5b698eb5e6852b13b9b1500c2b2d62086dcd5a53d21c192c881836a66613ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_easley, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:27:47 np0005603435 podman[146268]: 2026-01-31 04:27:47.11724067 +0000 UTC m=+0.169027715 container attach 9c5b698eb5e6852b13b9b1500c2b2d62086dcd5a53d21c192c881836a66613ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_easley, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:27:47 np0005603435 python3.9[146345]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:27:47 np0005603435 lvm[146539]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:27:47 np0005603435 lvm[146539]: VG ceph_vg0 finished
Jan 30 23:27:47 np0005603435 lvm[146541]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:27:47 np0005603435 lvm[146541]: VG ceph_vg1 finished
Jan 30 23:27:48 np0005603435 lvm[146542]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:27:48 np0005603435 lvm[146542]: VG ceph_vg2 finished
Jan 30 23:27:48 np0005603435 python3.9[146533]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769833666.7991862-619-275712981750817/.source.yaml _original_basename=.w6cisiut follow=False checksum=018609c400a234423cd13f1ae6bb35fdd40edd7e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:27:48 np0005603435 wonderful_easley[146291]: {}
Jan 30 23:27:48 np0005603435 systemd[1]: libpod-9c5b698eb5e6852b13b9b1500c2b2d62086dcd5a53d21c192c881836a66613ca.scope: Deactivated successfully.
Jan 30 23:27:48 np0005603435 systemd[1]: libpod-9c5b698eb5e6852b13b9b1500c2b2d62086dcd5a53d21c192c881836a66613ca.scope: Consumed 1.517s CPU time.
Jan 30 23:27:48 np0005603435 podman[146268]: 2026-01-31 04:27:48.140091756 +0000 UTC m=+1.191878801 container died 9c5b698eb5e6852b13b9b1500c2b2d62086dcd5a53d21c192c881836a66613ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 30 23:27:48 np0005603435 systemd[1]: var-lib-containers-storage-overlay-a50c51d60e653adae7c8368631a84947d5408dc45ccda41d8b03853752fc884e-merged.mount: Deactivated successfully.
Jan 30 23:27:48 np0005603435 podman[146268]: 2026-01-31 04:27:48.193938789 +0000 UTC m=+1.245725804 container remove 9c5b698eb5e6852b13b9b1500c2b2d62086dcd5a53d21c192c881836a66613ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 30 23:27:48 np0005603435 systemd[1]: libpod-conmon-9c5b698eb5e6852b13b9b1500c2b2d62086dcd5a53d21c192c881836a66613ca.scope: Deactivated successfully.
Jan 30 23:27:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:27:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:27:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:27:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:27:48 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:27:48 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:27:48 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:27:48 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 2114 writes, 9351 keys, 2114 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2114 writes, 2114 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2114 writes, 9351 keys, 2114 commit groups, 1.0 writes per commit group, ingest: 12.27 MB, 0.02 MB/s#012Interval WAL: 2114 writes, 2114 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    103.1      0.08              0.02         3    0.028       0      0       0.0       0.0#012  L6      1/0    6.82 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    183.5    161.4      0.09              0.04         2    0.044    7280    733       0.0       0.0#012 Sum      1/0    6.82 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     93.2    132.7      0.17              0.06         5    0.035    7280    733       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    110.5    157.0      0.15              0.06         4    0.036    7280    733       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    183.5    161.4      0.09              0.04         2    0.044    7280    733       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    150.3      0.06              0.02         2    0.029       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.1      0.03              0.00         1    0.027       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.009, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5573585118d0#2 capacity: 308.00 MB usage: 628.14 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(36,536.11 KB,0.169982%) FilterBlock(6,28.86 KB,0.00915032%) IndexBlock(6,63.17 KB,0.0200296%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 30 23:27:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:48 np0005603435 python3.9[146733]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:27:48 np0005603435 ovs-vsctl[146734]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 30 23:27:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:27:49 np0005603435 python3.9[146886]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:27:49 np0005603435 ovs-vsctl[146888]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 30 23:27:50 np0005603435 python3.9[147041]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:27:50 np0005603435 ovs-vsctl[147042]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 30 23:27:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:50 np0005603435 systemd-logind[816]: Session 46 logged out. Waiting for processes to exit.
Jan 30 23:27:50 np0005603435 systemd[1]: session-46.scope: Deactivated successfully.
Jan 30 23:27:50 np0005603435 systemd[1]: session-46.scope: Consumed 57.161s CPU time.
Jan 30 23:27:50 np0005603435 systemd-logind[816]: Removed session 46.
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:27:51.493929) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833671494029, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 650, "num_deletes": 251, "total_data_size": 796838, "memory_usage": 809688, "flush_reason": "Manual Compaction"}
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833671519525, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 790063, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9122, "largest_seqno": 9771, "table_properties": {"data_size": 786629, "index_size": 1342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7529, "raw_average_key_size": 18, "raw_value_size": 779726, "raw_average_value_size": 1911, "num_data_blocks": 62, "num_entries": 408, "num_filter_entries": 408, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833620, "oldest_key_time": 1769833620, "file_creation_time": 1769833671, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 25624 microseconds, and 4068 cpu microseconds.
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:27:51.519571) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 790063 bytes OK
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:27:51.519596) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:27:51.544814) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:27:51.544839) EVENT_LOG_v1 {"time_micros": 1769833671544826, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:27:51.544864) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 793381, prev total WAL file size 793381, number of live WAL files 2.
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:27:51.545498) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(771KB)], [23(6987KB)]
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833671545596, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7945721, "oldest_snapshot_seqno": -1}
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3345 keys, 6114644 bytes, temperature: kUnknown
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833671609912, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6114644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6090453, "index_size": 14738, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 81125, "raw_average_key_size": 24, "raw_value_size": 6028136, "raw_average_value_size": 1802, "num_data_blocks": 641, "num_entries": 3345, "num_filter_entries": 3345, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769833671, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:27:51.610319) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6114644 bytes
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:27:51.630841) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.4 rd, 94.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.8 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(17.8) write-amplify(7.7) OK, records in: 3859, records dropped: 514 output_compression: NoCompression
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:27:51.630883) EVENT_LOG_v1 {"time_micros": 1769833671630863, "job": 8, "event": "compaction_finished", "compaction_time_micros": 64408, "compaction_time_cpu_micros": 19649, "output_level": 6, "num_output_files": 1, "total_output_size": 6114644, "num_input_records": 3859, "num_output_records": 3345, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833671631182, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833671632513, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:27:51.545351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:27:51.632645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:27:51.632653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:27:51.632655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:27:51.632656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:27:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:27:51.632658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:27:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:27:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:55 np0005603435 systemd[1]: Stopping User Manager for UID 0...
Jan 30 23:27:55 np0005603435 systemd[145753]: Activating special unit Exit the Session...
Jan 30 23:27:55 np0005603435 systemd[145753]: Stopped target Main User Target.
Jan 30 23:27:55 np0005603435 systemd[145753]: Stopped target Basic System.
Jan 30 23:27:55 np0005603435 systemd[145753]: Stopped target Paths.
Jan 30 23:27:55 np0005603435 systemd[145753]: Stopped target Sockets.
Jan 30 23:27:55 np0005603435 systemd[145753]: Stopped target Timers.
Jan 30 23:27:55 np0005603435 systemd[145753]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 30 23:27:55 np0005603435 systemd[145753]: Closed D-Bus User Message Bus Socket.
Jan 30 23:27:55 np0005603435 systemd[145753]: Stopped Create User's Volatile Files and Directories.
Jan 30 23:27:55 np0005603435 systemd[145753]: Removed slice User Application Slice.
Jan 30 23:27:55 np0005603435 systemd[145753]: Reached target Shutdown.
Jan 30 23:27:55 np0005603435 systemd[145753]: Finished Exit the Session.
Jan 30 23:27:55 np0005603435 systemd[145753]: Reached target Exit the Session.
Jan 30 23:27:55 np0005603435 systemd[1]: user@0.service: Deactivated successfully.
Jan 30 23:27:55 np0005603435 systemd[1]: Stopped User Manager for UID 0.
Jan 30 23:27:55 np0005603435 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 30 23:27:55 np0005603435 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 30 23:27:55 np0005603435 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 30 23:27:55 np0005603435 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 30 23:27:55 np0005603435 systemd[1]: Removed slice User Slice of UID 0.
Jan 30 23:27:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:57 np0005603435 systemd-logind[816]: New session 48 of user zuul.
Jan 30 23:27:57 np0005603435 systemd[1]: Started Session 48 of User zuul.
Jan 30 23:27:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:27:58 np0005603435 python3.9[147223]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:27:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:27:59 np0005603435 python3.9[147379]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:00 np0005603435 python3.9[147531]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:01 np0005603435 python3.9[147683]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:01 np0005603435 python3.9[147835]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:02 np0005603435 python3.9[147987]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:03 np0005603435 python3.9[148137]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:28:04 np0005603435 python3.9[148290]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 30 23:28:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:28:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:05 np0005603435 python3.9[148440]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:28:05 np0005603435 python3.9[148561]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769833684.6866949-81-35625973667431/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:28:06
Jan 30 23:28:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:28:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:28:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'volumes', 'default.rgw.meta', 'backups', 'vms', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 30 23:28:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:28:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:06 np0005603435 python3.9[148711]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:28:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:28:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:28:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:28:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:28:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:28:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:28:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:28:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:28:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:28:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:28:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:28:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:28:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:28:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:28:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:28:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:28:07 np0005603435 python3.9[148832]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769833686.179226-96-46381208842585/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:08 np0005603435 python3.9[148984]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:28:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:09 np0005603435 python3.9[149068]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:28:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:28:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:11 np0005603435 python3.9[149221]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 30 23:28:11 np0005603435 python3.9[149374]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:28:12 np0005603435 python3.9[149495]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769833691.5212293-133-220169272879265/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:13 np0005603435 python3.9[149645]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:28:13 np0005603435 python3.9[149766]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769833692.7135193-133-276542121538380/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:28:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:15 np0005603435 python3.9[149916]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:28:15 np0005603435 ovn_controller[145670]: 2026-01-31T04:28:15Z|00025|memory|INFO|16256 kB peak resident set size after 29.9 seconds
Jan 30 23:28:15 np0005603435 ovn_controller[145670]: 2026-01-31T04:28:15Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 30 23:28:15 np0005603435 podman[150011]: 2026-01-31 04:28:15.488790498 +0000 UTC m=+0.113340923 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 30 23:28:15 np0005603435 python3.9[150048]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769833694.5096502-177-179357802949/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:16 np0005603435 python3.9[150213]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:16 np0005603435 python3.9[150334]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769833695.7600129-177-113181922559997/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:28:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:28:17 np0005603435 python3.9[150484]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:28:18 np0005603435 python3.9[150638]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:18 np0005603435 python3.9[150790]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:28:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:28:19 np0005603435 python3.9[150868]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:19 np0005603435 python3.9[151020]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:28:20 np0005603435 python3.9[151098]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:20 np0005603435 python3.9[151250]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:28:21 np0005603435 python3.9[151402]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:28:22 np0005603435 python3.9[151480]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:28:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:22 np0005603435 python3.9[151632]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:28:23 np0005603435 python3.9[151710]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:28:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:28:24 np0005603435 python3.9[151862]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:28:24 np0005603435 systemd[1]: Reloading.
Jan 30 23:28:24 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:28:24 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:28:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:25 np0005603435 python3.9[152052]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:28:25 np0005603435 python3.9[152130]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:28:26 np0005603435 python3.9[152282]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:28:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:26 np0005603435 python3.9[152360]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:28:27 np0005603435 python3.9[152512]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:28:27 np0005603435 systemd[1]: Reloading.
Jan 30 23:28:27 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:28:27 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:28:27 np0005603435 systemd[1]: Starting Create netns directory...
Jan 30 23:28:27 np0005603435 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 30 23:28:27 np0005603435 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 30 23:28:27 np0005603435 systemd[1]: Finished Create netns directory.
Jan 30 23:28:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:28 np0005603435 python3.9[152705]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:28:29 np0005603435 python3.9[152857]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:28:30 np0005603435 python3.9[152980]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769833708.9232118-328-158602601490495/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:30 np0005603435 python3.9[153132]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:28:31 np0005603435 python3.9[153284]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:28:32 np0005603435 python3.9[153436]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:28:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:32 np0005603435 python3.9[153559]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769833711.729067-361-98420428767556/.source.json _original_basename=.s_hx367p follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:28:33 np0005603435 python3.9[153709]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:28:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:28:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:35 np0005603435 python3.9[154132]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 30 23:28:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:28:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:28:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:28:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:28:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:28:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:28:38 np0005603435 python3.9[154284]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 30 23:28:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:28:39 np0005603435 python3[154436]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 30 23:28:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:28:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:47 np0005603435 podman[154531]: 2026-01-31 04:28:47.077030766 +0000 UTC m=+1.068684960 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:28:47 np0005603435 podman[154450]: 2026-01-31 04:28:47.640995313 +0000 UTC m=+8.130891152 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:28:47 np0005603435 podman[154600]: 2026-01-31 04:28:47.766610494 +0000 UTC m=+0.024038317 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:28:48 np0005603435 podman[154600]: 2026-01-31 04:28:48.557397728 +0000 UTC m=+0.814825591 container create 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 30 23:28:48 np0005603435 python3[154436]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=d5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:28:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:28:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:28:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:28:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:28:49 np0005603435 python3.9[154911]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:28:49 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:28:50 np0005603435 podman[155140]: 2026-01-31 04:28:50.097822462 +0000 UTC m=+0.059001610 container create 2c7aea3c5f7cc20e4d293088673ba976702eb1502d32416e4dbd7e9cfa3049b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dijkstra, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 30 23:28:50 np0005603435 systemd[1]: Started libpod-conmon-2c7aea3c5f7cc20e4d293088673ba976702eb1502d32416e4dbd7e9cfa3049b5.scope.
Jan 30 23:28:50 np0005603435 podman[155140]: 2026-01-31 04:28:50.069457428 +0000 UTC m=+0.030636616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:28:50 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:28:50 np0005603435 podman[155140]: 2026-01-31 04:28:50.198295811 +0000 UTC m=+0.159474989 container init 2c7aea3c5f7cc20e4d293088673ba976702eb1502d32416e4dbd7e9cfa3049b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:28:50 np0005603435 podman[155140]: 2026-01-31 04:28:50.208097684 +0000 UTC m=+0.169276832 container start 2c7aea3c5f7cc20e4d293088673ba976702eb1502d32416e4dbd7e9cfa3049b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dijkstra, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:28:50 np0005603435 podman[155140]: 2026-01-31 04:28:50.219080673 +0000 UTC m=+0.180259891 container attach 2c7aea3c5f7cc20e4d293088673ba976702eb1502d32416e4dbd7e9cfa3049b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:28:50 np0005603435 sleepy_dijkstra[155176]: 167 167
Jan 30 23:28:50 np0005603435 systemd[1]: libpod-2c7aea3c5f7cc20e4d293088673ba976702eb1502d32416e4dbd7e9cfa3049b5.scope: Deactivated successfully.
Jan 30 23:28:50 np0005603435 conmon[155176]: conmon 2c7aea3c5f7cc20e4d29 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2c7aea3c5f7cc20e4d293088673ba976702eb1502d32416e4dbd7e9cfa3049b5.scope/container/memory.events
Jan 30 23:28:50 np0005603435 podman[155140]: 2026-01-31 04:28:50.231665819 +0000 UTC m=+0.192844957 container died 2c7aea3c5f7cc20e4d293088673ba976702eb1502d32416e4dbd7e9cfa3049b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 30 23:28:50 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0f70e625a175bdaf745b559e100cbb2da715784b516f99067c9329731fa16dde-merged.mount: Deactivated successfully.
Jan 30 23:28:50 np0005603435 podman[155140]: 2026-01-31 04:28:50.279387812 +0000 UTC m=+0.240566960 container remove 2c7aea3c5f7cc20e4d293088673ba976702eb1502d32416e4dbd7e9cfa3049b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dijkstra, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:28:50 np0005603435 systemd[1]: libpod-conmon-2c7aea3c5f7cc20e4d293088673ba976702eb1502d32416e4dbd7e9cfa3049b5.scope: Deactivated successfully.
Jan 30 23:28:50 np0005603435 python3.9[155173]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:28:50 np0005603435 podman[155199]: 2026-01-31 04:28:50.467534051 +0000 UTC m=+0.065799924 container create ba741a39be23af5858f0fe8dda2cf70e18aafc98665d0920f7d9081032a5e075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_golick, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 30 23:28:50 np0005603435 systemd[1]: Started libpod-conmon-ba741a39be23af5858f0fe8dda2cf70e18aafc98665d0920f7d9081032a5e075.scope.
Jan 30 23:28:50 np0005603435 podman[155199]: 2026-01-31 04:28:50.440096958 +0000 UTC m=+0.038362891 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:28:50 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:28:50 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2c126771825ee727baa073c227d6bd62c9f0f909d352bee4a1abf78caef029/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:28:50 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2c126771825ee727baa073c227d6bd62c9f0f909d352bee4a1abf78caef029/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:28:50 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2c126771825ee727baa073c227d6bd62c9f0f909d352bee4a1abf78caef029/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:28:50 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2c126771825ee727baa073c227d6bd62c9f0f909d352bee4a1abf78caef029/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:28:50 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2c126771825ee727baa073c227d6bd62c9f0f909d352bee4a1abf78caef029/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:28:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:50 np0005603435 podman[155199]: 2026-01-31 04:28:50.575148483 +0000 UTC m=+0.173414376 container init ba741a39be23af5858f0fe8dda2cf70e18aafc98665d0920f7d9081032a5e075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_golick, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:28:50 np0005603435 podman[155199]: 2026-01-31 04:28:50.583569244 +0000 UTC m=+0.181835107 container start ba741a39be23af5858f0fe8dda2cf70e18aafc98665d0920f7d9081032a5e075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_golick, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 30 23:28:50 np0005603435 podman[155199]: 2026-01-31 04:28:50.588769612 +0000 UTC m=+0.187035565 container attach ba741a39be23af5858f0fe8dda2cf70e18aafc98665d0920f7d9081032a5e075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 30 23:28:50 np0005603435 python3.9[155296]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:28:51 np0005603435 nice_golick[155257]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:28:51 np0005603435 nice_golick[155257]: --> All data devices are unavailable
Jan 30 23:28:51 np0005603435 systemd[1]: libpod-ba741a39be23af5858f0fe8dda2cf70e18aafc98665d0920f7d9081032a5e075.scope: Deactivated successfully.
Jan 30 23:28:51 np0005603435 podman[155199]: 2026-01-31 04:28:51.160735411 +0000 UTC m=+0.759001254 container died ba741a39be23af5858f0fe8dda2cf70e18aafc98665d0920f7d9081032a5e075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_golick, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 30 23:28:51 np0005603435 systemd[1]: var-lib-containers-storage-overlay-bd2c126771825ee727baa073c227d6bd62c9f0f909d352bee4a1abf78caef029-merged.mount: Deactivated successfully.
Jan 30 23:28:51 np0005603435 podman[155199]: 2026-01-31 04:28:51.209044127 +0000 UTC m=+0.807309980 container remove ba741a39be23af5858f0fe8dda2cf70e18aafc98665d0920f7d9081032a5e075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_golick, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:28:51 np0005603435 systemd[1]: libpod-conmon-ba741a39be23af5858f0fe8dda2cf70e18aafc98665d0920f7d9081032a5e075.scope: Deactivated successfully.
Jan 30 23:28:51 np0005603435 python3.9[155523]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769833730.8894212-439-32075400290107/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:28:51 np0005603435 podman[155536]: 2026-01-31 04:28:51.599309233 +0000 UTC m=+0.041726378 container create 976f3e0a1d9532ec241a5f8b5bbe0660f2ef6029b1019bed5ac74ad870cb97fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_williamson, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 30 23:28:51 np0005603435 systemd[1]: Started libpod-conmon-976f3e0a1d9532ec241a5f8b5bbe0660f2ef6029b1019bed5ac74ad870cb97fd.scope.
Jan 30 23:28:51 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:28:51 np0005603435 podman[155536]: 2026-01-31 04:28:51.578685025 +0000 UTC m=+0.021102170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:28:51 np0005603435 podman[155536]: 2026-01-31 04:28:51.682613813 +0000 UTC m=+0.125030998 container init 976f3e0a1d9532ec241a5f8b5bbe0660f2ef6029b1019bed5ac74ad870cb97fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:28:51 np0005603435 podman[155536]: 2026-01-31 04:28:51.690391299 +0000 UTC m=+0.132808434 container start 976f3e0a1d9532ec241a5f8b5bbe0660f2ef6029b1019bed5ac74ad870cb97fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:28:51 np0005603435 upbeat_williamson[155575]: 167 167
Jan 30 23:28:51 np0005603435 podman[155536]: 2026-01-31 04:28:51.695422524 +0000 UTC m=+0.137839659 container attach 976f3e0a1d9532ec241a5f8b5bbe0660f2ef6029b1019bed5ac74ad870cb97fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_williamson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 30 23:28:51 np0005603435 systemd[1]: libpod-976f3e0a1d9532ec241a5f8b5bbe0660f2ef6029b1019bed5ac74ad870cb97fd.scope: Deactivated successfully.
Jan 30 23:28:51 np0005603435 podman[155536]: 2026-01-31 04:28:51.696414196 +0000 UTC m=+0.138831341 container died 976f3e0a1d9532ec241a5f8b5bbe0660f2ef6029b1019bed5ac74ad870cb97fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 30 23:28:51 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d7289b0f52f00b3d51338cc8c0135602a9350225cba9aafe45b746e4f14c0686-merged.mount: Deactivated successfully.
Jan 30 23:28:51 np0005603435 podman[155536]: 2026-01-31 04:28:51.738182274 +0000 UTC m=+0.180599409 container remove 976f3e0a1d9532ec241a5f8b5bbe0660f2ef6029b1019bed5ac74ad870cb97fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_williamson, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 30 23:28:51 np0005603435 systemd[1]: libpod-conmon-976f3e0a1d9532ec241a5f8b5bbe0660f2ef6029b1019bed5ac74ad870cb97fd.scope: Deactivated successfully.
Jan 30 23:28:51 np0005603435 podman[155651]: 2026-01-31 04:28:51.961213005 +0000 UTC m=+0.080067278 container create 7f00f427dfcbbbcc907d06ba27bb8e8ffb65b90a56e604ebb50a22ceb829914d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wozniak, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 30 23:28:52 np0005603435 systemd[1]: Started libpod-conmon-7f00f427dfcbbbcc907d06ba27bb8e8ffb65b90a56e604ebb50a22ceb829914d.scope.
Jan 30 23:28:52 np0005603435 podman[155651]: 2026-01-31 04:28:51.9380926 +0000 UTC m=+0.056946913 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:28:52 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:28:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac73557c0ac3e40010766b0bdd01023d0fd78907809830e023865c927d46c9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:28:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac73557c0ac3e40010766b0bdd01023d0fd78907809830e023865c927d46c9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:28:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac73557c0ac3e40010766b0bdd01023d0fd78907809830e023865c927d46c9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:28:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac73557c0ac3e40010766b0bdd01023d0fd78907809830e023865c927d46c9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:28:52 np0005603435 podman[155651]: 2026-01-31 04:28:52.062070773 +0000 UTC m=+0.180925106 container init 7f00f427dfcbbbcc907d06ba27bb8e8ffb65b90a56e604ebb50a22ceb829914d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:28:52 np0005603435 podman[155651]: 2026-01-31 04:28:52.067510307 +0000 UTC m=+0.186364540 container start 7f00f427dfcbbbcc907d06ba27bb8e8ffb65b90a56e604ebb50a22ceb829914d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wozniak, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:28:52 np0005603435 podman[155651]: 2026-01-31 04:28:52.076954121 +0000 UTC m=+0.195808384 container attach 7f00f427dfcbbbcc907d06ba27bb8e8ffb65b90a56e604ebb50a22ceb829914d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wozniak, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:28:52 np0005603435 python3.9[155645]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 30 23:28:52 np0005603435 systemd[1]: Reloading.
Jan 30 23:28:52 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:28:52 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]: {
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:    "0": [
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:        {
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "devices": [
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "/dev/loop3"
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            ],
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "lv_name": "ceph_lv0",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "lv_size": "21470642176",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "name": "ceph_lv0",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "tags": {
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.cluster_name": "ceph",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.crush_device_class": "",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.encrypted": "0",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.objectstore": "bluestore",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.osd_id": "0",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.type": "block",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.vdo": "0",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.with_tpm": "0"
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            },
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "type": "block",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "vg_name": "ceph_vg0"
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:        }
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:    ],
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:    "1": [
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:        {
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "devices": [
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "/dev/loop4"
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            ],
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "lv_name": "ceph_lv1",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "lv_size": "21470642176",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "name": "ceph_lv1",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "tags": {
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.cluster_name": "ceph",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.crush_device_class": "",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.encrypted": "0",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.objectstore": "bluestore",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.osd_id": "1",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.type": "block",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.vdo": "0",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.with_tpm": "0"
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            },
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "type": "block",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "vg_name": "ceph_vg1"
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:        }
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:    ],
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:    "2": [
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:        {
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "devices": [
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "/dev/loop5"
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            ],
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "lv_name": "ceph_lv2",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "lv_size": "21470642176",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "name": "ceph_lv2",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "tags": {
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.cluster_name": "ceph",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.crush_device_class": "",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.encrypted": "0",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.objectstore": "bluestore",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.osd_id": "2",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.type": "block",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.vdo": "0",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:                "ceph.with_tpm": "0"
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            },
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "type": "block",
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:            "vg_name": "ceph_vg2"
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:        }
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]:    ]
Jan 30 23:28:52 np0005603435 nifty_wozniak[155668]: }
Jan 30 23:28:52 np0005603435 podman[155651]: 2026-01-31 04:28:52.354401517 +0000 UTC m=+0.473255750 container died 7f00f427dfcbbbcc907d06ba27bb8e8ffb65b90a56e604ebb50a22ceb829914d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wozniak, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 30 23:28:52 np0005603435 systemd[1]: libpod-7f00f427dfcbbbcc907d06ba27bb8e8ffb65b90a56e604ebb50a22ceb829914d.scope: Deactivated successfully.
Jan 30 23:28:52 np0005603435 systemd[1]: var-lib-containers-storage-overlay-9ac73557c0ac3e40010766b0bdd01023d0fd78907809830e023865c927d46c9d-merged.mount: Deactivated successfully.
Jan 30 23:28:52 np0005603435 podman[155651]: 2026-01-31 04:28:52.515467062 +0000 UTC m=+0.634321305 container remove 7f00f427dfcbbbcc907d06ba27bb8e8ffb65b90a56e604ebb50a22ceb829914d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wozniak, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 30 23:28:52 np0005603435 systemd[1]: libpod-conmon-7f00f427dfcbbbcc907d06ba27bb8e8ffb65b90a56e604ebb50a22ceb829914d.scope: Deactivated successfully.
Jan 30 23:28:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:53 np0005603435 podman[155834]: 2026-01-31 04:28:53.039715428 +0000 UTC m=+0.057781222 container create 5499da37dcbb15f21b1ce92b6506e6bf40d9d2bd375dcac84e8ff1ca42956e52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 30 23:28:53 np0005603435 systemd[1]: Started libpod-conmon-5499da37dcbb15f21b1ce92b6506e6bf40d9d2bd375dcac84e8ff1ca42956e52.scope.
Jan 30 23:28:53 np0005603435 podman[155834]: 2026-01-31 04:28:53.018395094 +0000 UTC m=+0.036460878 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:28:53 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:28:53 np0005603435 podman[155834]: 2026-01-31 04:28:53.161036771 +0000 UTC m=+0.179102625 container init 5499da37dcbb15f21b1ce92b6506e6bf40d9d2bd375dcac84e8ff1ca42956e52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 30 23:28:53 np0005603435 podman[155834]: 2026-01-31 04:28:53.167928567 +0000 UTC m=+0.185994371 container start 5499da37dcbb15f21b1ce92b6506e6bf40d9d2bd375dcac84e8ff1ca42956e52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shockley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:28:53 np0005603435 podman[155834]: 2026-01-31 04:28:53.171858016 +0000 UTC m=+0.189923820 container attach 5499da37dcbb15f21b1ce92b6506e6bf40d9d2bd375dcac84e8ff1ca42956e52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shockley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 30 23:28:53 np0005603435 sweet_shockley[155878]: 167 167
Jan 30 23:28:53 np0005603435 systemd[1]: libpod-5499da37dcbb15f21b1ce92b6506e6bf40d9d2bd375dcac84e8ff1ca42956e52.scope: Deactivated successfully.
Jan 30 23:28:53 np0005603435 podman[155834]: 2026-01-31 04:28:53.172846129 +0000 UTC m=+0.190911933 container died 5499da37dcbb15f21b1ce92b6506e6bf40d9d2bd375dcac84e8ff1ca42956e52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 30 23:28:53 np0005603435 systemd[1]: var-lib-containers-storage-overlay-aee641233d0d94c4b91763db5ce8f52fa59f5803280ca0b709b465ab54e9799b-merged.mount: Deactivated successfully.
Jan 30 23:28:53 np0005603435 podman[155834]: 2026-01-31 04:28:53.215251381 +0000 UTC m=+0.233317155 container remove 5499da37dcbb15f21b1ce92b6506e6bf40d9d2bd375dcac84e8ff1ca42956e52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shockley, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 30 23:28:53 np0005603435 systemd[1]: libpod-conmon-5499da37dcbb15f21b1ce92b6506e6bf40d9d2bd375dcac84e8ff1ca42956e52.scope: Deactivated successfully.
Jan 30 23:28:53 np0005603435 podman[155905]: 2026-01-31 04:28:53.431766043 +0000 UTC m=+0.072639439 container create 83f558fff713153e50e22b2c2e649fc3fb48f98abd453d2594d516ddb92a82f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:28:53 np0005603435 python3.9[155882]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:28:53 np0005603435 systemd[1]: Started libpod-conmon-83f558fff713153e50e22b2c2e649fc3fb48f98abd453d2594d516ddb92a82f4.scope.
Jan 30 23:28:53 np0005603435 podman[155905]: 2026-01-31 04:28:53.400426742 +0000 UTC m=+0.041300198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:28:53 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:28:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8258e71333e2a430fe1417ecd86582043dbf9c3573c4d7c8f4f1cbd0756f5c49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:28:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8258e71333e2a430fe1417ecd86582043dbf9c3573c4d7c8f4f1cbd0756f5c49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:28:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8258e71333e2a430fe1417ecd86582043dbf9c3573c4d7c8f4f1cbd0756f5c49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:28:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8258e71333e2a430fe1417ecd86582043dbf9c3573c4d7c8f4f1cbd0756f5c49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:28:53 np0005603435 podman[155905]: 2026-01-31 04:28:53.5457863 +0000 UTC m=+0.186659756 container init 83f558fff713153e50e22b2c2e649fc3fb48f98abd453d2594d516ddb92a82f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chaplygin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:28:53 np0005603435 podman[155905]: 2026-01-31 04:28:53.559026931 +0000 UTC m=+0.199900337 container start 83f558fff713153e50e22b2c2e649fc3fb48f98abd453d2594d516ddb92a82f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chaplygin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 30 23:28:53 np0005603435 systemd[1]: Reloading.
Jan 30 23:28:53 np0005603435 podman[155905]: 2026-01-31 04:28:53.564336151 +0000 UTC m=+0.205209527 container attach 83f558fff713153e50e22b2c2e649fc3fb48f98abd453d2594d516ddb92a82f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 30 23:28:53 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:28:53 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:28:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:28:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5645 writes, 25K keys, 5645 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5645 writes, 896 syncs, 6.30 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5645 writes, 25K keys, 5645 commit groups, 1.0 writes per commit group, ingest: 19.02 MB, 0.03 MB/s#012Interval WAL: 5645 writes, 896 syncs, 6.30 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56375c2eb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56375c2eb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 30 23:28:53 np0005603435 systemd[1]: Starting ovn_metadata_agent container...
Jan 30 23:28:53 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:28:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4d4387f2aca2ce047d23137b6c4ce3ce0a7213d8dde9c33785c5a82f35b15a/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 30 23:28:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4d4387f2aca2ce047d23137b6c4ce3ce0a7213d8dde9c33785c5a82f35b15a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:28:53 np0005603435 systemd[1]: Started /usr/bin/podman healthcheck run 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9.
Jan 30 23:28:53 np0005603435 podman[155976]: 2026-01-31 04:28:53.985194241 +0000 UTC m=+0.119454621 container init 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 30 23:28:53 np0005603435 ovn_metadata_agent[155995]: + sudo -E kolla_set_configs
Jan 30 23:28:54 np0005603435 podman[155976]: 2026-01-31 04:28:54.006163337 +0000 UTC m=+0.140423687 container start 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 30 23:28:54 np0005603435 edpm-start-podman-container[155976]: ovn_metadata_agent
Jan 30 23:28:54 np0005603435 edpm-start-podman-container[155975]: Creating additional drop-in dependency for "ovn_metadata_agent" (7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9)
Jan 30 23:28:54 np0005603435 podman[156020]: 2026-01-31 04:28:54.068191854 +0000 UTC m=+0.048337587 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:28:54 np0005603435 systemd[1]: Reloading.
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: INFO:__main__:Validating config file
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: INFO:__main__:Copying service configuration files
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: INFO:__main__:Writing out command to execute
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: ++ cat /run_command
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: + CMD=neutron-ovn-metadata-agent
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: + ARGS=
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: + sudo kolla_copy_cacerts
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: + [[ ! -n '' ]]
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: + . kolla_extend_start
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: Running command: 'neutron-ovn-metadata-agent'
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: + umask 0022
Jan 30 23:28:54 np0005603435 ovn_metadata_agent[155995]: + exec neutron-ovn-metadata-agent
Jan 30 23:28:54 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:28:54 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:28:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:28:54 np0005603435 systemd[1]: Started ovn_metadata_agent container.
Jan 30 23:28:54 np0005603435 optimistic_chaplygin[155923]: {}
Jan 30 23:28:54 np0005603435 lvm[156144]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:28:54 np0005603435 lvm[156144]: VG ceph_vg2 finished
Jan 30 23:28:54 np0005603435 lvm[156139]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:28:54 np0005603435 lvm[156139]: VG ceph_vg0 finished
Jan 30 23:28:54 np0005603435 lvm[156143]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:28:54 np0005603435 lvm[156143]: VG ceph_vg1 finished
Jan 30 23:28:54 np0005603435 systemd[1]: libpod-83f558fff713153e50e22b2c2e649fc3fb48f98abd453d2594d516ddb92a82f4.scope: Deactivated successfully.
Jan 30 23:28:54 np0005603435 systemd[1]: libpod-83f558fff713153e50e22b2c2e649fc3fb48f98abd453d2594d516ddb92a82f4.scope: Consumed 1.235s CPU time.
Jan 30 23:28:54 np0005603435 conmon[155923]: conmon 83f558fff713153e50e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83f558fff713153e50e22b2c2e649fc3fb48f98abd453d2594d516ddb92a82f4.scope/container/memory.events
Jan 30 23:28:54 np0005603435 podman[156169]: 2026-01-31 04:28:54.472533 +0000 UTC m=+0.038860693 container died 83f558fff713153e50e22b2c2e649fc3fb48f98abd453d2594d516ddb92a82f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chaplygin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:28:54 np0005603435 systemd[1]: var-lib-containers-storage-overlay-8258e71333e2a430fe1417ecd86582043dbf9c3573c4d7c8f4f1cbd0756f5c49-merged.mount: Deactivated successfully.
Jan 30 23:28:54 np0005603435 podman[156169]: 2026-01-31 04:28:54.522546574 +0000 UTC m=+0.088874247 container remove 83f558fff713153e50e22b2c2e649fc3fb48f98abd453d2594d516ddb92a82f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chaplygin, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:28:54 np0005603435 systemd[1]: libpod-conmon-83f558fff713153e50e22b2c2e649fc3fb48f98abd453d2594d516ddb92a82f4.scope: Deactivated successfully.
Jan 30 23:28:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:28:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:28:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:28:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:28:55 np0005603435 python3.9[156337]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 30 23:28:55 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:28:55 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.856 156017 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.856 156017 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.856 156017 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.857 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.857 156017 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.857 156017 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.857 156017 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.857 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.857 156017 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.858 156017 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.858 156017 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.858 156017 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.858 156017 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.858 156017 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.858 156017 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.858 156017 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.858 156017 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.858 156017 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.859 156017 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.859 156017 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.859 156017 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.859 156017 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.859 156017 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.859 156017 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.859 156017 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.859 156017 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.859 156017 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.860 156017 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.860 156017 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.860 156017 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.860 156017 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.860 156017 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.860 156017 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.860 156017 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.860 156017 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.860 156017 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.861 156017 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.861 156017 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.861 156017 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.861 156017 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.861 156017 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.861 156017 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.861 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.861 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.861 156017 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.861 156017 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.862 156017 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.862 156017 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.862 156017 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.862 156017 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.862 156017 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.862 156017 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.862 156017 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.862 156017 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.862 156017 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.862 156017 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.863 156017 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.863 156017 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.863 156017 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.863 156017 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.863 156017 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.863 156017 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.863 156017 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.863 156017 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.863 156017 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.864 156017 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.864 156017 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.864 156017 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.864 156017 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.864 156017 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.864 156017 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.864 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.864 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.865 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.865 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.865 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.865 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.865 156017 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.865 156017 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.865 156017 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.865 156017 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.865 156017 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.865 156017 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.866 156017 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.866 156017 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.866 156017 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.866 156017 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.866 156017 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.866 156017 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.866 156017 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.866 156017 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.866 156017 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.867 156017 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.867 156017 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.867 156017 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.867 156017 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.867 156017 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.867 156017 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.867 156017 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.867 156017 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.867 156017 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.867 156017 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.868 156017 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.868 156017 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.868 156017 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.868 156017 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.868 156017 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.868 156017 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.868 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.868 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.869 156017 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.869 156017 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.869 156017 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.869 156017 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.869 156017 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.869 156017 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.869 156017 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.869 156017 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.869 156017 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.870 156017 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.870 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.870 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.870 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.870 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.870 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.870 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.870 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.871 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.871 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.871 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.871 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.871 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.871 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.871 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.871 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.871 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.872 156017 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.872 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.872 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.872 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.872 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.872 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.872 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.872 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.872 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.873 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.873 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.873 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.873 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.873 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.873 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.873 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.873 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.873 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.873 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.874 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.874 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.874 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.874 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.874 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.874 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.874 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.874 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.874 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.875 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.875 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.875 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.875 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.875 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.875 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.875 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.875 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.875 156017 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.876 156017 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.876 156017 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.876 156017 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.876 156017 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.876 156017 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.876 156017 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.876 156017 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.876 156017 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.876 156017 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.876 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.877 156017 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.877 156017 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.877 156017 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.877 156017 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.877 156017 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.877 156017 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.877 156017 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.877 156017 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.877 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.878 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.878 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.878 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.878 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.878 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.878 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.878 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.878 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.878 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.879 156017 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.879 156017 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.879 156017 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.879 156017 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.879 156017 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.879 156017 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.879 156017 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.879 156017 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.879 156017 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.880 156017 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.880 156017 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.880 156017 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.880 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.880 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.880 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.880 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.880 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.880 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.880 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.881 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.881 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.881 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.881 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.881 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.881 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.881 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.881 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.881 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.881 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.882 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.882 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.882 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.882 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.882 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.882 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.882 156017 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.882 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.882 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.883 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.883 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.883 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.883 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.883 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.883 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.883 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.884 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.884 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.884 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.884 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.884 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.884 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.885 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.885 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.885 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.885 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.885 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.885 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.885 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.886 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.886 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.886 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.886 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.886 156017 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.886 156017 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.886 156017 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.886 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.887 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.887 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.887 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.887 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.887 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.887 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.887 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.887 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.887 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.888 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.888 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.888 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.888 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.888 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.888 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.888 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.888 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.888 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.889 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.889 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.889 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.889 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.889 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.889 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.889 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.889 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.889 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.890 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.890 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.890 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.890 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.890 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.890 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.890 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.890 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.891 156017 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.891 156017 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.899 156017 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.900 156017 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.900 156017 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.900 156017 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.900 156017 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.913 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 8e8c9464-4b9f-4423-88e0-e5889c10f4ca (UUID: 8e8c9464-4b9f-4423-88e0-e5889c10f4ca) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.937 156017 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.937 156017 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.937 156017 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.937 156017 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.940 156017 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.945 156017 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.983 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '8e8c9464-4b9f-4423-88e0-e5889c10f4ca'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], external_ids={}, name=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, nb_cfg_timestamp=1769833673602, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.984 156017 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f67bc22cc10>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.985 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.985 156017 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.985 156017 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.986 156017 INFO oslo_service.service [-] Starting 1 workers#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.990 156017 DEBUG oslo_service.service [-] Started child 156490 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.992 156017 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpb4pk0svz/privsep.sock']#033[00m
Jan 30 23:28:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:55.994 156490 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-429801'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Jan 30 23:28:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:56.024 156490 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 30 23:28:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:56.025 156490 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 30 23:28:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:56.025 156490 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 30 23:28:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:56.030 156490 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 30 23:28:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:56.038 156490 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 30 23:28:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:56.054 156490 INFO eventlet.wsgi.server [-] (156490) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Jan 30 23:28:56 np0005603435 python3.9[156489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:28:56 np0005603435 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 30 23:28:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:56.653 156017 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 30 23:28:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:56.654 156017 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpb4pk0svz/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 30 23:28:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:56.537 156620 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 30 23:28:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:56.542 156620 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 30 23:28:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:56.546 156620 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 30 23:28:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:56.546 156620 INFO oslo.privsep.daemon [-] privsep daemon running as pid 156620#033[00m
Jan 30 23:28:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:56.657 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[6c492132-f4eb-4881-833d-d9aad4b524c2]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:28:56 np0005603435 python3.9[156619]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769833735.7201045-484-91865401764030/.source.yaml _original_basename=.gswu_7rm follow=False checksum=dfa1362badfdf920b5ee45b1fd4c35ca8767a825 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.072 156620 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.072 156620 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.072 156620 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:28:57 np0005603435 systemd[1]: session-48.scope: Deactivated successfully.
Jan 30 23:28:57 np0005603435 systemd[1]: session-48.scope: Consumed 55.630s CPU time.
Jan 30 23:28:57 np0005603435 systemd-logind[816]: Session 48 logged out. Waiting for processes to exit.
Jan 30 23:28:57 np0005603435 systemd-logind[816]: Removed session 48.
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.519 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[4497e8ba-6dec-4ed2-b0c0-8269ff9ba0f0]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.522 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, column=external_ids, values=({'neutron:ovn-metadata-id': '750d00b3-c3e0-5563-99b5-0b6fbe942665'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.531 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.538 156017 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.538 156017 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.538 156017 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.538 156017 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.539 156017 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.539 156017 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.539 156017 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.539 156017 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.540 156017 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.540 156017 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.540 156017 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.540 156017 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.541 156017 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.541 156017 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.541 156017 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.541 156017 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.542 156017 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.542 156017 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.542 156017 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.542 156017 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.542 156017 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.543 156017 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.543 156017 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.543 156017 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.543 156017 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.544 156017 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.544 156017 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.544 156017 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.545 156017 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.545 156017 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.545 156017 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.545 156017 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.546 156017 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.546 156017 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.546 156017 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.546 156017 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.547 156017 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.547 156017 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.547 156017 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.548 156017 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.548 156017 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.548 156017 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.548 156017 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.548 156017 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.549 156017 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.549 156017 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.549 156017 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.549 156017 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.550 156017 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.550 156017 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.550 156017 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.550 156017 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.550 156017 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.551 156017 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.551 156017 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.551 156017 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.551 156017 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.552 156017 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.552 156017 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.552 156017 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.552 156017 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.552 156017 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.553 156017 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.553 156017 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.553 156017 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.553 156017 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.554 156017 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.554 156017 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.554 156017 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.554 156017 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.555 156017 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.555 156017 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.555 156017 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.555 156017 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.556 156017 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.556 156017 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.556 156017 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.556 156017 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.557 156017 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.557 156017 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.557 156017 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.558 156017 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.558 156017 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.558 156017 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.558 156017 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.558 156017 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.559 156017 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.559 156017 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.559 156017 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.559 156017 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.560 156017 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.560 156017 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.560 156017 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.560 156017 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.561 156017 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.561 156017 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.561 156017 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.561 156017 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.561 156017 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.562 156017 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.562 156017 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.562 156017 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.562 156017 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.562 156017 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.563 156017 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.563 156017 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.563 156017 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.563 156017 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.564 156017 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.564 156017 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.564 156017 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.565 156017 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.565 156017 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.565 156017 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.565 156017 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.565 156017 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.566 156017 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.566 156017 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.566 156017 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.566 156017 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.567 156017 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.567 156017 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.567 156017 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.567 156017 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.568 156017 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.568 156017 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.568 156017 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.568 156017 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.569 156017 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.569 156017 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.569 156017 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.569 156017 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.570 156017 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.570 156017 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.570 156017 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.570 156017 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.571 156017 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.571 156017 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.571 156017 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.571 156017 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.572 156017 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.572 156017 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.572 156017 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.572 156017 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.572 156017 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.573 156017 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.573 156017 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.573 156017 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.573 156017 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.574 156017 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.574 156017 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.574 156017 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.574 156017 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.575 156017 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.575 156017 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.575 156017 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.575 156017 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.575 156017 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.575 156017 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.576 156017 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.576 156017 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.576 156017 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.577 156017 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.577 156017 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.577 156017 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.577 156017 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.577 156017 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.578 156017 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.578 156017 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.578 156017 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.578 156017 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.578 156017 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.579 156017 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.579 156017 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.579 156017 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.580 156017 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.580 156017 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.580 156017 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.580 156017 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.581 156017 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.581 156017 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.581 156017 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.581 156017 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.581 156017 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.582 156017 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.582 156017 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.582 156017 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.582 156017 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.583 156017 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.583 156017 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.583 156017 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.583 156017 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.583 156017 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.583 156017 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.584 156017 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.584 156017 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.584 156017 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.584 156017 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.584 156017 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.584 156017 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.584 156017 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.585 156017 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.585 156017 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.585 156017 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.585 156017 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.585 156017 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.585 156017 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.585 156017 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.586 156017 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.586 156017 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.586 156017 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.586 156017 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.586 156017 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.586 156017 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.586 156017 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.586 156017 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.587 156017 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.587 156017 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.587 156017 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.587 156017 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.587 156017 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.587 156017 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.587 156017 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.588 156017 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.588 156017 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.588 156017 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.588 156017 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.588 156017 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.588 156017 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.588 156017 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.588 156017 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.589 156017 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.589 156017 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.589 156017 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.589 156017 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.589 156017 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.589 156017 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.589 156017 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.590 156017 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.590 156017 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.590 156017 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.590 156017 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.590 156017 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.590 156017 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.590 156017 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.591 156017 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.591 156017 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.591 156017 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.591 156017 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.591 156017 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.591 156017 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.591 156017 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.591 156017 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.592 156017 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.592 156017 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.592 156017 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.592 156017 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.592 156017 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.592 156017 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.592 156017 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.593 156017 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.593 156017 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.593 156017 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.593 156017 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.593 156017 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.593 156017 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.593 156017 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.594 156017 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.594 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.594 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.594 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.594 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.594 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.594 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.595 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.595 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.595 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.595 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.595 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.595 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.595 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.596 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.596 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.596 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.596 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.596 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.596 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.596 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.596 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.597 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.597 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.597 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.597 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.597 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.597 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.597 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.598 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.598 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.598 156017 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.598 156017 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.598 156017 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.598 156017 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.598 156017 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:28:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:28:57.599 156017 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 30 23:28:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:28:59 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:28:59 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.7 total, 600.0 interval#012Cumulative writes: 8218 writes, 34K keys, 8218 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 8218 writes, 1599 syncs, 5.14 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8218 writes, 34K keys, 8218 commit groups, 1.0 writes per commit group, ingest: 21.06 MB, 0.04 MB/s#012Interval WAL: 8218 writes, 1599 syncs, 5.14 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.289       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.289       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.29              0.00         1    0.289       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.7 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b8190cda30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.7 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b8190cda30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.7 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 30 23:28:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:29:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:02 np0005603435 systemd-logind[816]: New session 49 of user zuul.
Jan 30 23:29:02 np0005603435 systemd[1]: Started Session 49 of User zuul.
Jan 30 23:29:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:03 np0005603435 python3.9[156802]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:29:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:29:04 np0005603435 python3.9[156958]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:29:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:05 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:29:05 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 601.1 total, 600.0 interval#012Cumulative writes: 5584 writes, 24K keys, 5584 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5584 writes, 840 syncs, 6.65 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5584 writes, 24K keys, 5584 commit groups, 1.0 writes per commit group, ingest: 18.72 MB, 0.03 MB/s#012Interval WAL: 5584 writes, 840 syncs, 6.65 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.11              0.00         1    0.106       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.11              0.00         1    0.106       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.11              0.00         1    0.106       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5611172278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5611172278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 30 23:29:05 np0005603435 python3.9[157123]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 30 23:29:05 np0005603435 systemd[1]: Reloading.
Jan 30 23:29:05 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:29:05 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:29:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:29:06
Jan 30 23:29:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:29:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:29:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'default.rgw.meta', '.mgr']
Jan 30 23:29:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:29:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:29:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:29:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:29:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:29:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:29:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:29:06 np0005603435 python3.9[157307]: ansible-ansible.builtin.service_facts Invoked
Jan 30 23:29:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:29:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:29:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:29:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:29:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:29:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:29:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:29:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:29:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:29:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:29:07 np0005603435 network[157324]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 30 23:29:07 np0005603435 network[157325]: 'network-scripts' will be removed from distribution in near future.
Jan 30 23:29:07 np0005603435 network[157326]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 30 23:29:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:29:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:11 np0005603435 python3.9[157588]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:29:12 np0005603435 python3.9[157741]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:29:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:12 np0005603435 python3.9[157894]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:29:13 np0005603435 python3.9[158047]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:29:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:29:14 np0005603435 python3.9[158200]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:29:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:15 np0005603435 python3.9[158353]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:29:16 np0005603435 python3.9[158506]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:29:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:29:17 np0005603435 python3.9[158659]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:29:17 np0005603435 ceph-mgr[75599]: [devicehealth INFO root] Check health
Jan 30 23:29:17 np0005603435 podman[158783]: 2026-01-31 04:29:17.919630541 +0000 UTC m=+0.182769045 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 30 23:29:17 np0005603435 python3.9[158830]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:29:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:18 np0005603435 python3.9[158989]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:29:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:29:19 np0005603435 python3.9[159141]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:29:20 np0005603435 python3.9[159293]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:29:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:20 np0005603435 python3.9[159445]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:29:21 np0005603435 python3.9[159597]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:29:22 np0005603435 python3.9[159749]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:29:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:22 np0005603435 python3.9[159901]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:29:23 np0005603435 python3.9[160053]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:29:24 np0005603435 podman[160205]: 2026-01-31 04:29:24.223338766 +0000 UTC m=+0.097897529 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:29:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:29:24 np0005603435 python3.9[160206]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:29:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:24 np0005603435 python3.9[160377]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:29:25 np0005603435 python3.9[160529]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:29:26 np0005603435 python3.9[160681]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:29:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:26 np0005603435 python3.9[160833]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:29:27 np0005603435 python3.9[160985]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 30 23:29:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:28 np0005603435 python3.9[161137]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 30 23:29:28 np0005603435 systemd[1]: Reloading.
Jan 30 23:29:28 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:29:28 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:29:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:29:29 np0005603435 python3.9[161324]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:29:30 np0005603435 python3.9[161477]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:29:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:31 np0005603435 python3.9[161630]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:29:31 np0005603435 python3.9[161783]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:29:32 np0005603435 python3.9[161936]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:29:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:33 np0005603435 python3.9[162089]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:29:33 np0005603435 python3.9[162242]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:29:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:29:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:35 np0005603435 python3.9[162395]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 30 23:29:36 np0005603435 python3.9[162548]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 30 23:29:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:29:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:29:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:29:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:29:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:29:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:29:36 np0005603435 python3.9[162706]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 30 23:29:37 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 30 23:29:37 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 30 23:29:38 np0005603435 python3.9[162867]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:29:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:39 np0005603435 python3.9[162951]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:29:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:29:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:29:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:48 np0005603435 podman[162966]: 2026-01-31 04:29:48.163384434 +0000 UTC m=+0.120887577 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 30 23:29:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:29:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:29:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:54 np0005603435 podman[163157]: 2026-01-31 04:29:54.878426897 +0000 UTC m=+0.071508597 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 30 23:29:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 30 23:29:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 30 23:29:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:29:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:29:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:29:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:29:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:29:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:29:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:29:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:29:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:29:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:29:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:29:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:29:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:29:55.893 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:29:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:29:55.894 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:29:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:29:55.894 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:29:55 np0005603435 podman[163325]: 2026-01-31 04:29:55.928049301 +0000 UTC m=+0.057337768 container create 1add32508ecd6e7384b4a9c68fd439f32161d4b586caae227ef20432f4001405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kapitsa, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 30 23:29:55 np0005603435 systemd[1]: Started libpod-conmon-1add32508ecd6e7384b4a9c68fd439f32161d4b586caae227ef20432f4001405.scope.
Jan 30 23:29:55 np0005603435 podman[163325]: 2026-01-31 04:29:55.905243527 +0000 UTC m=+0.034531994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:29:56 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:29:56 np0005603435 podman[163325]: 2026-01-31 04:29:56.024360689 +0000 UTC m=+0.153649176 container init 1add32508ecd6e7384b4a9c68fd439f32161d4b586caae227ef20432f4001405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kapitsa, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 30 23:29:56 np0005603435 podman[163325]: 2026-01-31 04:29:56.031159797 +0000 UTC m=+0.160448254 container start 1add32508ecd6e7384b4a9c68fd439f32161d4b586caae227ef20432f4001405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kapitsa, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 30 23:29:56 np0005603435 podman[163325]: 2026-01-31 04:29:56.034815317 +0000 UTC m=+0.164103794 container attach 1add32508ecd6e7384b4a9c68fd439f32161d4b586caae227ef20432f4001405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kapitsa, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:29:56 np0005603435 systemd[1]: libpod-1add32508ecd6e7384b4a9c68fd439f32161d4b586caae227ef20432f4001405.scope: Deactivated successfully.
Jan 30 23:29:56 np0005603435 compassionate_kapitsa[163342]: 167 167
Jan 30 23:29:56 np0005603435 conmon[163342]: conmon 1add32508ecd6e7384b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1add32508ecd6e7384b4a9c68fd439f32161d4b586caae227ef20432f4001405.scope/container/memory.events
Jan 30 23:29:56 np0005603435 podman[163325]: 2026-01-31 04:29:56.038424517 +0000 UTC m=+0.167712964 container died 1add32508ecd6e7384b4a9c68fd439f32161d4b586caae227ef20432f4001405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kapitsa, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 30 23:29:56 np0005603435 systemd[1]: var-lib-containers-storage-overlay-49daa272938f92d73655804c168302dee20d65f55e3fbd8184232762175db796-merged.mount: Deactivated successfully.
Jan 30 23:29:56 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 30 23:29:56 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:29:56 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:29:56 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:29:56 np0005603435 podman[163325]: 2026-01-31 04:29:56.090473082 +0000 UTC m=+0.219761559 container remove 1add32508ecd6e7384b4a9c68fd439f32161d4b586caae227ef20432f4001405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:29:56 np0005603435 systemd[1]: libpod-conmon-1add32508ecd6e7384b4a9c68fd439f32161d4b586caae227ef20432f4001405.scope: Deactivated successfully.
Jan 30 23:29:56 np0005603435 podman[163366]: 2026-01-31 04:29:56.294722486 +0000 UTC m=+0.068774239 container create 1aba78cf7481d773e62c0e8779c2346dc48391504ad0cf9da00d864fffcdab60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:29:56 np0005603435 systemd[1]: Started libpod-conmon-1aba78cf7481d773e62c0e8779c2346dc48391504ad0cf9da00d864fffcdab60.scope.
Jan 30 23:29:56 np0005603435 podman[163366]: 2026-01-31 04:29:56.265841213 +0000 UTC m=+0.039893056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:29:56 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:29:56 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c8afdd651f539a68dcd0b0454a94c4261353d720cb14519adfc8789cf053f49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:29:56 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c8afdd651f539a68dcd0b0454a94c4261353d720cb14519adfc8789cf053f49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:29:56 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c8afdd651f539a68dcd0b0454a94c4261353d720cb14519adfc8789cf053f49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:29:56 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c8afdd651f539a68dcd0b0454a94c4261353d720cb14519adfc8789cf053f49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:29:56 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c8afdd651f539a68dcd0b0454a94c4261353d720cb14519adfc8789cf053f49/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:29:56 np0005603435 podman[163366]: 2026-01-31 04:29:56.413211823 +0000 UTC m=+0.187263646 container init 1aba78cf7481d773e62c0e8779c2346dc48391504ad0cf9da00d864fffcdab60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:29:56 np0005603435 podman[163366]: 2026-01-31 04:29:56.418869313 +0000 UTC m=+0.192921086 container start 1aba78cf7481d773e62c0e8779c2346dc48391504ad0cf9da00d864fffcdab60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:29:56 np0005603435 podman[163366]: 2026-01-31 04:29:56.422938203 +0000 UTC m=+0.196990026 container attach 1aba78cf7481d773e62c0e8779c2346dc48391504ad0cf9da00d864fffcdab60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_clarke, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 30 23:29:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:56 np0005603435 thirsty_clarke[163383]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:29:56 np0005603435 thirsty_clarke[163383]: --> All data devices are unavailable
Jan 30 23:29:56 np0005603435 systemd[1]: libpod-1aba78cf7481d773e62c0e8779c2346dc48391504ad0cf9da00d864fffcdab60.scope: Deactivated successfully.
Jan 30 23:29:56 np0005603435 podman[163366]: 2026-01-31 04:29:56.909570672 +0000 UTC m=+0.683622445 container died 1aba78cf7481d773e62c0e8779c2346dc48391504ad0cf9da00d864fffcdab60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_clarke, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:29:56 np0005603435 systemd[1]: var-lib-containers-storage-overlay-3c8afdd651f539a68dcd0b0454a94c4261353d720cb14519adfc8789cf053f49-merged.mount: Deactivated successfully.
Jan 30 23:29:56 np0005603435 podman[163366]: 2026-01-31 04:29:56.96940966 +0000 UTC m=+0.743461443 container remove 1aba78cf7481d773e62c0e8779c2346dc48391504ad0cf9da00d864fffcdab60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_clarke, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 30 23:29:56 np0005603435 systemd[1]: libpod-conmon-1aba78cf7481d773e62c0e8779c2346dc48391504ad0cf9da00d864fffcdab60.scope: Deactivated successfully.
Jan 30 23:29:57 np0005603435 podman[163480]: 2026-01-31 04:29:57.483921347 +0000 UTC m=+0.061104110 container create e714d83a7125c3994aee763baaec6c2c6649fa40f28d7473b8413c5eb8a526d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_galois, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 30 23:29:57 np0005603435 systemd[1]: Started libpod-conmon-e714d83a7125c3994aee763baaec6c2c6649fa40f28d7473b8413c5eb8a526d3.scope.
Jan 30 23:29:57 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:29:57 np0005603435 podman[163480]: 2026-01-31 04:29:57.457257579 +0000 UTC m=+0.034440402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:29:57 np0005603435 podman[163480]: 2026-01-31 04:29:57.557694919 +0000 UTC m=+0.134877732 container init e714d83a7125c3994aee763baaec6c2c6649fa40f28d7473b8413c5eb8a526d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_galois, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 30 23:29:57 np0005603435 podman[163480]: 2026-01-31 04:29:57.564262421 +0000 UTC m=+0.141445154 container start e714d83a7125c3994aee763baaec6c2c6649fa40f28d7473b8413c5eb8a526d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_galois, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 30 23:29:57 np0005603435 podman[163480]: 2026-01-31 04:29:57.567623984 +0000 UTC m=+0.144806717 container attach e714d83a7125c3994aee763baaec6c2c6649fa40f28d7473b8413c5eb8a526d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:29:57 np0005603435 fervent_galois[163496]: 167 167
Jan 30 23:29:57 np0005603435 systemd[1]: libpod-e714d83a7125c3994aee763baaec6c2c6649fa40f28d7473b8413c5eb8a526d3.scope: Deactivated successfully.
Jan 30 23:29:57 np0005603435 podman[163480]: 2026-01-31 04:29:57.569721796 +0000 UTC m=+0.146904539 container died e714d83a7125c3994aee763baaec6c2c6649fa40f28d7473b8413c5eb8a526d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_galois, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 30 23:29:57 np0005603435 systemd[1]: var-lib-containers-storage-overlay-6e3dfd5a79a57a296b9c4c5bdd2c532eaf8dd8f283af7337963d376f895f30f7-merged.mount: Deactivated successfully.
Jan 30 23:29:57 np0005603435 podman[163480]: 2026-01-31 04:29:57.611792655 +0000 UTC m=+0.188975428 container remove e714d83a7125c3994aee763baaec6c2c6649fa40f28d7473b8413c5eb8a526d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_galois, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:29:57 np0005603435 systemd[1]: libpod-conmon-e714d83a7125c3994aee763baaec6c2c6649fa40f28d7473b8413c5eb8a526d3.scope: Deactivated successfully.
Jan 30 23:29:57 np0005603435 podman[163521]: 2026-01-31 04:29:57.779346293 +0000 UTC m=+0.059846458 container create c3580fad82f327d4d48f9656aec6bee6d0f86d388746651e47f67a4e9c24ad57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:29:57 np0005603435 systemd[1]: Started libpod-conmon-c3580fad82f327d4d48f9656aec6bee6d0f86d388746651e47f67a4e9c24ad57.scope.
Jan 30 23:29:57 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:29:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f842d3c62f37fec79f29606ffe9b6f1a4289dcc0a9615d5b4e1f28ee10eabb3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:29:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f842d3c62f37fec79f29606ffe9b6f1a4289dcc0a9615d5b4e1f28ee10eabb3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:29:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f842d3c62f37fec79f29606ffe9b6f1a4289dcc0a9615d5b4e1f28ee10eabb3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:29:57 np0005603435 podman[163521]: 2026-01-31 04:29:57.753515126 +0000 UTC m=+0.034015391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:29:57 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f842d3c62f37fec79f29606ffe9b6f1a4289dcc0a9615d5b4e1f28ee10eabb3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:29:57 np0005603435 podman[163521]: 2026-01-31 04:29:57.870038683 +0000 UTC m=+0.150538928 container init c3580fad82f327d4d48f9656aec6bee6d0f86d388746651e47f67a4e9c24ad57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 30 23:29:57 np0005603435 podman[163521]: 2026-01-31 04:29:57.880034039 +0000 UTC m=+0.160534244 container start c3580fad82f327d4d48f9656aec6bee6d0f86d388746651e47f67a4e9c24ad57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_dubinsky, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 30 23:29:57 np0005603435 podman[163521]: 2026-01-31 04:29:57.884259674 +0000 UTC m=+0.164759879 container attach c3580fad82f327d4d48f9656aec6bee6d0f86d388746651e47f67a4e9c24ad57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]: {
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:    "0": [
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:        {
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "devices": [
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "/dev/loop3"
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            ],
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "lv_name": "ceph_lv0",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "lv_size": "21470642176",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "name": "ceph_lv0",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "tags": {
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.cluster_name": "ceph",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.crush_device_class": "",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.encrypted": "0",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.objectstore": "bluestore",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.osd_id": "0",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.type": "block",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.vdo": "0",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.with_tpm": "0"
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            },
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "type": "block",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "vg_name": "ceph_vg0"
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:        }
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:    ],
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:    "1": [
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:        {
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "devices": [
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "/dev/loop4"
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            ],
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "lv_name": "ceph_lv1",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "lv_size": "21470642176",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "name": "ceph_lv1",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "tags": {
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.cluster_name": "ceph",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.crush_device_class": "",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.encrypted": "0",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.objectstore": "bluestore",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.osd_id": "1",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.type": "block",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.vdo": "0",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.with_tpm": "0"
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            },
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "type": "block",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "vg_name": "ceph_vg1"
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:        }
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:    ],
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:    "2": [
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:        {
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "devices": [
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "/dev/loop5"
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            ],
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "lv_name": "ceph_lv2",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "lv_size": "21470642176",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "name": "ceph_lv2",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "tags": {
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.cluster_name": "ceph",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.crush_device_class": "",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.encrypted": "0",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.objectstore": "bluestore",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.osd_id": "2",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.type": "block",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.vdo": "0",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:                "ceph.with_tpm": "0"
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            },
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "type": "block",
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:            "vg_name": "ceph_vg2"
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:        }
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]:    ]
Jan 30 23:29:58 np0005603435 beautiful_dubinsky[163538]: }
Jan 30 23:29:58 np0005603435 systemd[1]: libpod-c3580fad82f327d4d48f9656aec6bee6d0f86d388746651e47f67a4e9c24ad57.scope: Deactivated successfully.
Jan 30 23:29:58 np0005603435 podman[163521]: 2026-01-31 04:29:58.202215736 +0000 UTC m=+0.482715951 container died c3580fad82f327d4d48f9656aec6bee6d0f86d388746651e47f67a4e9c24ad57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_dubinsky, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:29:58 np0005603435 systemd[1]: var-lib-containers-storage-overlay-f842d3c62f37fec79f29606ffe9b6f1a4289dcc0a9615d5b4e1f28ee10eabb3d-merged.mount: Deactivated successfully.
Jan 30 23:29:58 np0005603435 podman[163521]: 2026-01-31 04:29:58.262377122 +0000 UTC m=+0.542877337 container remove c3580fad82f327d4d48f9656aec6bee6d0f86d388746651e47f67a4e9c24ad57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:29:58 np0005603435 systemd[1]: libpod-conmon-c3580fad82f327d4d48f9656aec6bee6d0f86d388746651e47f67a4e9c24ad57.scope: Deactivated successfully.
Jan 30 23:29:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:29:58 np0005603435 podman[163625]: 2026-01-31 04:29:58.773997498 +0000 UTC m=+0.062479504 container create 568d4299b1ed5ac3a3d48ef779681862d38fc640ef6150fb3fe742d71da5b209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hopper, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:29:58 np0005603435 systemd[1]: Started libpod-conmon-568d4299b1ed5ac3a3d48ef779681862d38fc640ef6150fb3fe742d71da5b209.scope.
Jan 30 23:29:58 np0005603435 podman[163625]: 2026-01-31 04:29:58.749853142 +0000 UTC m=+0.038335208 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:29:58 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:29:58 np0005603435 podman[163625]: 2026-01-31 04:29:58.87328485 +0000 UTC m=+0.161766906 container init 568d4299b1ed5ac3a3d48ef779681862d38fc640ef6150fb3fe742d71da5b209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 30 23:29:58 np0005603435 podman[163625]: 2026-01-31 04:29:58.883500723 +0000 UTC m=+0.171982699 container start 568d4299b1ed5ac3a3d48ef779681862d38fc640ef6150fb3fe742d71da5b209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 30 23:29:58 np0005603435 podman[163625]: 2026-01-31 04:29:58.888489846 +0000 UTC m=+0.176971922 container attach 568d4299b1ed5ac3a3d48ef779681862d38fc640ef6150fb3fe742d71da5b209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hopper, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:29:58 np0005603435 blissful_hopper[163642]: 167 167
Jan 30 23:29:58 np0005603435 systemd[1]: libpod-568d4299b1ed5ac3a3d48ef779681862d38fc640ef6150fb3fe742d71da5b209.scope: Deactivated successfully.
Jan 30 23:29:58 np0005603435 podman[163625]: 2026-01-31 04:29:58.891391738 +0000 UTC m=+0.179873754 container died 568d4299b1ed5ac3a3d48ef779681862d38fc640ef6150fb3fe742d71da5b209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hopper, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:29:58 np0005603435 systemd[1]: var-lib-containers-storage-overlay-47f0503fe8de4bdc2cb787d178de9c67c2ad4524fffe2f613bb797939404c347-merged.mount: Deactivated successfully.
Jan 30 23:29:58 np0005603435 podman[163625]: 2026-01-31 04:29:58.93481751 +0000 UTC m=+0.223299476 container remove 568d4299b1ed5ac3a3d48ef779681862d38fc640ef6150fb3fe742d71da5b209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:29:58 np0005603435 systemd[1]: libpod-conmon-568d4299b1ed5ac3a3d48ef779681862d38fc640ef6150fb3fe742d71da5b209.scope: Deactivated successfully.
Jan 30 23:29:59 np0005603435 podman[163665]: 2026-01-31 04:29:59.107492155 +0000 UTC m=+0.060758172 container create 5a33e1d9605d81999498cea89bd88057f489fd4b75cb047954a05e3809d8b337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 30 23:29:59 np0005603435 systemd[1]: Started libpod-conmon-5a33e1d9605d81999498cea89bd88057f489fd4b75cb047954a05e3809d8b337.scope.
Jan 30 23:29:59 np0005603435 podman[163665]: 2026-01-31 04:29:59.082043436 +0000 UTC m=+0.035309513 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:29:59 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:29:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7def0095144cd291ab85dc850ce75621ac0a64c2ccd9abd38eb3c89c7368b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:29:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7def0095144cd291ab85dc850ce75621ac0a64c2ccd9abd38eb3c89c7368b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:29:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7def0095144cd291ab85dc850ce75621ac0a64c2ccd9abd38eb3c89c7368b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:29:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7def0095144cd291ab85dc850ce75621ac0a64c2ccd9abd38eb3c89c7368b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:29:59 np0005603435 podman[163665]: 2026-01-31 04:29:59.24413926 +0000 UTC m=+0.197405297 container init 5a33e1d9605d81999498cea89bd88057f489fd4b75cb047954a05e3809d8b337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:29:59 np0005603435 podman[163665]: 2026-01-31 04:29:59.25345398 +0000 UTC m=+0.206720007 container start 5a33e1d9605d81999498cea89bd88057f489fd4b75cb047954a05e3809d8b337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_benz, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:29:59 np0005603435 podman[163665]: 2026-01-31 04:29:59.257720095 +0000 UTC m=+0.210986092 container attach 5a33e1d9605d81999498cea89bd88057f489fd4b75cb047954a05e3809d8b337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 30 23:29:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:30:00 np0005603435 lvm[163763]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:30:00 np0005603435 lvm[163764]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:30:00 np0005603435 lvm[163764]: VG ceph_vg1 finished
Jan 30 23:30:00 np0005603435 lvm[163763]: VG ceph_vg0 finished
Jan 30 23:30:00 np0005603435 lvm[163766]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:30:00 np0005603435 lvm[163766]: VG ceph_vg2 finished
Jan 30 23:30:00 np0005603435 zealous_benz[163681]: {}
Jan 30 23:30:00 np0005603435 systemd[1]: libpod-5a33e1d9605d81999498cea89bd88057f489fd4b75cb047954a05e3809d8b337.scope: Deactivated successfully.
Jan 30 23:30:00 np0005603435 systemd[1]: libpod-5a33e1d9605d81999498cea89bd88057f489fd4b75cb047954a05e3809d8b337.scope: Consumed 1.183s CPU time.
Jan 30 23:30:00 np0005603435 podman[163665]: 2026-01-31 04:30:00.13411795 +0000 UTC m=+1.087383977 container died 5a33e1d9605d81999498cea89bd88057f489fd4b75cb047954a05e3809d8b337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:30:00 np0005603435 systemd[1]: var-lib-containers-storage-overlay-2d7def0095144cd291ab85dc850ce75621ac0a64c2ccd9abd38eb3c89c7368b7-merged.mount: Deactivated successfully.
Jan 30 23:30:00 np0005603435 podman[163665]: 2026-01-31 04:30:00.198669804 +0000 UTC m=+1.151935831 container remove 5a33e1d9605d81999498cea89bd88057f489fd4b75cb047954a05e3809d8b337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_benz, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 30 23:30:00 np0005603435 systemd[1]: libpod-conmon-5a33e1d9605d81999498cea89bd88057f489fd4b75cb047954a05e3809d8b337.scope: Deactivated successfully.
Jan 30 23:30:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:30:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:30:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:30:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:30:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:01 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:30:01 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:30:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:30:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:30:06
Jan 30 23:30:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:30:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:30:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'volumes', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'vms']
Jan 30 23:30:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:30:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:30:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:30:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:30:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:30:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:30:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:30:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:30:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:30:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:30:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:30:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:30:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:30:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:30:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:30:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:30:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:30:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Jan 30 23:30:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:30:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 30 23:30:10 np0005603435 kernel: SELinux:  Converting 2778 SID table entries...
Jan 30 23:30:10 np0005603435 kernel: SELinux:  policy capability network_peer_controls=1
Jan 30 23:30:10 np0005603435 kernel: SELinux:  policy capability open_perms=1
Jan 30 23:30:10 np0005603435 kernel: SELinux:  policy capability extended_socket_class=1
Jan 30 23:30:10 np0005603435 kernel: SELinux:  policy capability always_check_network=0
Jan 30 23:30:10 np0005603435 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 30 23:30:10 np0005603435 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 30 23:30:10 np0005603435 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 30 23:30:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 30 23:30:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:30:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:30:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:30:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 30 23:30:19 np0005603435 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 30 23:30:19 np0005603435 podman[163825]: 2026-01-31 04:30:19.153731548 +0000 UTC m=+0.108675999 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 30 23:30:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:30:19 np0005603435 kernel: SELinux:  Converting 2778 SID table entries...
Jan 30 23:30:19 np0005603435 kernel: SELinux:  policy capability network_peer_controls=1
Jan 30 23:30:19 np0005603435 kernel: SELinux:  policy capability open_perms=1
Jan 30 23:30:19 np0005603435 kernel: SELinux:  policy capability extended_socket_class=1
Jan 30 23:30:19 np0005603435 kernel: SELinux:  policy capability always_check_network=0
Jan 30 23:30:19 np0005603435 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 30 23:30:19 np0005603435 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 30 23:30:19 np0005603435 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 30 23:30:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Jan 30 23:30:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 30 23:30:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:30:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:25 np0005603435 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 30 23:30:25 np0005603435 podman[163860]: 2026-01-31 04:30:25.122375483 +0000 UTC m=+0.071218221 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 30 23:30:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:30:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:30:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:30:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:30:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:30:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:30:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:30:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:30:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:30:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:30:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:30:50 np0005603435 podman[176534]: 2026-01-31 04:30:50.161606111 +0000 UTC m=+0.128774187 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 30 23:30:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:30:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:30:55.894 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:30:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:30:55.894 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:30:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:30:55.894 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:30:56 np0005603435 podman[180420]: 2026-01-31 04:30:56.11259655 +0000 UTC m=+0.074790392 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 30 23:30:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:30:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:31:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:00 np0005603435 podman[180889]: 2026-01-31 04:31:00.954481588 +0000 UTC m=+0.091609847 container exec 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 30 23:31:01 np0005603435 podman[180889]: 2026-01-31 04:31:01.055038071 +0000 UTC m=+0.192166280 container exec_died 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:31:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:31:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:31:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:31:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:31:02 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:31:02 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:31:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:31:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:31:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:31:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:31:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:31:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:31:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:31:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:31:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:31:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:31:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:31:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:31:03 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:31:03 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:31:03 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:31:03 np0005603435 podman[181212]: 2026-01-31 04:31:03.123678042 +0000 UTC m=+0.074708850 container create b42bd0932993222791f55502aa5839bfb49c4bf92bf2dde828b9d98fdc42870b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 30 23:31:03 np0005603435 systemd[1]: Started libpod-conmon-b42bd0932993222791f55502aa5839bfb49c4bf92bf2dde828b9d98fdc42870b.scope.
Jan 30 23:31:03 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:31:03 np0005603435 podman[181212]: 2026-01-31 04:31:03.093871828 +0000 UTC m=+0.044902696 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:31:03 np0005603435 podman[181212]: 2026-01-31 04:31:03.202535865 +0000 UTC m=+0.153566703 container init b42bd0932993222791f55502aa5839bfb49c4bf92bf2dde828b9d98fdc42870b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:31:03 np0005603435 podman[181212]: 2026-01-31 04:31:03.210018765 +0000 UTC m=+0.161049593 container start b42bd0932993222791f55502aa5839bfb49c4bf92bf2dde828b9d98fdc42870b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 30 23:31:03 np0005603435 podman[181212]: 2026-01-31 04:31:03.21418774 +0000 UTC m=+0.165218578 container attach b42bd0932993222791f55502aa5839bfb49c4bf92bf2dde828b9d98fdc42870b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 30 23:31:03 np0005603435 friendly_antonelli[181228]: 167 167
Jan 30 23:31:03 np0005603435 systemd[1]: libpod-b42bd0932993222791f55502aa5839bfb49c4bf92bf2dde828b9d98fdc42870b.scope: Deactivated successfully.
Jan 30 23:31:03 np0005603435 podman[181212]: 2026-01-31 04:31:03.216318274 +0000 UTC m=+0.167349112 container died b42bd0932993222791f55502aa5839bfb49c4bf92bf2dde828b9d98fdc42870b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:31:03 np0005603435 systemd[1]: var-lib-containers-storage-overlay-043f2a954db843a22e989c5b298bbf92c83d05044d101796c2ba7fd47ee36be9-merged.mount: Deactivated successfully.
Jan 30 23:31:03 np0005603435 podman[181212]: 2026-01-31 04:31:03.263977859 +0000 UTC m=+0.215008697 container remove b42bd0932993222791f55502aa5839bfb49c4bf92bf2dde828b9d98fdc42870b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:31:03 np0005603435 systemd[1]: libpod-conmon-b42bd0932993222791f55502aa5839bfb49c4bf92bf2dde828b9d98fdc42870b.scope: Deactivated successfully.
Jan 30 23:31:03 np0005603435 podman[181252]: 2026-01-31 04:31:03.462805716 +0000 UTC m=+0.059915886 container create 4f614b17eec79fc639c1daf7569631489c746cb9fb3c9c0c02e2d780b26a2c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:31:03 np0005603435 systemd[1]: Started libpod-conmon-4f614b17eec79fc639c1daf7569631489c746cb9fb3c9c0c02e2d780b26a2c44.scope.
Jan 30 23:31:03 np0005603435 podman[181252]: 2026-01-31 04:31:03.437131557 +0000 UTC m=+0.034241797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:31:03 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:31:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b29e1d6ad8b07203214391ce50f14450f802971c384bb7188f95cc85b2c2871/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:31:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b29e1d6ad8b07203214391ce50f14450f802971c384bb7188f95cc85b2c2871/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:31:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b29e1d6ad8b07203214391ce50f14450f802971c384bb7188f95cc85b2c2871/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:31:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b29e1d6ad8b07203214391ce50f14450f802971c384bb7188f95cc85b2c2871/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:31:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b29e1d6ad8b07203214391ce50f14450f802971c384bb7188f95cc85b2c2871/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:31:03 np0005603435 podman[181252]: 2026-01-31 04:31:03.562997399 +0000 UTC m=+0.160107589 container init 4f614b17eec79fc639c1daf7569631489c746cb9fb3c9c0c02e2d780b26a2c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shockley, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 30 23:31:03 np0005603435 podman[181252]: 2026-01-31 04:31:03.575382372 +0000 UTC m=+0.172492552 container start 4f614b17eec79fc639c1daf7569631489c746cb9fb3c9c0c02e2d780b26a2c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 30 23:31:03 np0005603435 podman[181252]: 2026-01-31 04:31:03.579457435 +0000 UTC m=+0.176567615 container attach 4f614b17eec79fc639c1daf7569631489c746cb9fb3c9c0c02e2d780b26a2c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 30 23:31:04 np0005603435 nostalgic_shockley[181269]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:31:04 np0005603435 nostalgic_shockley[181269]: --> All data devices are unavailable
Jan 30 23:31:04 np0005603435 systemd[1]: libpod-4f614b17eec79fc639c1daf7569631489c746cb9fb3c9c0c02e2d780b26a2c44.scope: Deactivated successfully.
Jan 30 23:31:04 np0005603435 podman[181252]: 2026-01-31 04:31:04.063432402 +0000 UTC m=+0.660542572 container died 4f614b17eec79fc639c1daf7569631489c746cb9fb3c9c0c02e2d780b26a2c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 30 23:31:04 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5b29e1d6ad8b07203214391ce50f14450f802971c384bb7188f95cc85b2c2871-merged.mount: Deactivated successfully.
Jan 30 23:31:04 np0005603435 podman[181252]: 2026-01-31 04:31:04.114424721 +0000 UTC m=+0.711534901 container remove 4f614b17eec79fc639c1daf7569631489c746cb9fb3c9c0c02e2d780b26a2c44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:31:04 np0005603435 systemd[1]: libpod-conmon-4f614b17eec79fc639c1daf7569631489c746cb9fb3c9c0c02e2d780b26a2c44.scope: Deactivated successfully.
Jan 30 23:31:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:31:04 np0005603435 podman[181363]: 2026-01-31 04:31:04.553324128 +0000 UTC m=+0.044417754 container create 96386a89f2776fcfd80f6e5668130c81917a44eb32d8863c8372052494ebf967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:31:04 np0005603435 systemd[1]: Started libpod-conmon-96386a89f2776fcfd80f6e5668130c81917a44eb32d8863c8372052494ebf967.scope.
Jan 30 23:31:04 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:31:04 np0005603435 podman[181363]: 2026-01-31 04:31:04.604530192 +0000 UTC m=+0.095623848 container init 96386a89f2776fcfd80f6e5668130c81917a44eb32d8863c8372052494ebf967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:31:04 np0005603435 podman[181363]: 2026-01-31 04:31:04.608665187 +0000 UTC m=+0.099758823 container start 96386a89f2776fcfd80f6e5668130c81917a44eb32d8863c8372052494ebf967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:31:04 np0005603435 nostalgic_panini[181380]: 167 167
Jan 30 23:31:04 np0005603435 systemd[1]: libpod-96386a89f2776fcfd80f6e5668130c81917a44eb32d8863c8372052494ebf967.scope: Deactivated successfully.
Jan 30 23:31:04 np0005603435 podman[181363]: 2026-01-31 04:31:04.61234542 +0000 UTC m=+0.103439056 container attach 96386a89f2776fcfd80f6e5668130c81917a44eb32d8863c8372052494ebf967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 30 23:31:04 np0005603435 podman[181363]: 2026-01-31 04:31:04.613596662 +0000 UTC m=+0.104690298 container died 96386a89f2776fcfd80f6e5668130c81917a44eb32d8863c8372052494ebf967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Jan 30 23:31:04 np0005603435 systemd[1]: var-lib-containers-storage-overlay-46530ff299fec2c4776db5a39f4b7860e7a5aaf70b2a2de21eebd85f77b8cab3-merged.mount: Deactivated successfully.
Jan 30 23:31:04 np0005603435 podman[181363]: 2026-01-31 04:31:04.538961965 +0000 UTC m=+0.030055621 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:31:04 np0005603435 podman[181363]: 2026-01-31 04:31:04.641387824 +0000 UTC m=+0.132481460 container remove 96386a89f2776fcfd80f6e5668130c81917a44eb32d8863c8372052494ebf967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:31:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:04 np0005603435 systemd[1]: libpod-conmon-96386a89f2776fcfd80f6e5668130c81917a44eb32d8863c8372052494ebf967.scope: Deactivated successfully.
Jan 30 23:31:04 np0005603435 podman[181404]: 2026-01-31 04:31:04.754448763 +0000 UTC m=+0.041138631 container create bfe2572cbb4d342d4bc7fcc9ae2c6785dd0ed7fbb6790794f623b4db31625b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_spence, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:31:04 np0005603435 systemd[1]: Started libpod-conmon-bfe2572cbb4d342d4bc7fcc9ae2c6785dd0ed7fbb6790794f623b4db31625b68.scope.
Jan 30 23:31:04 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:31:04 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e6ef72c627c9141f7a65b42dbff023cb06fb0e801adf8320553038d123d006a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:31:04 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e6ef72c627c9141f7a65b42dbff023cb06fb0e801adf8320553038d123d006a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:31:04 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e6ef72c627c9141f7a65b42dbff023cb06fb0e801adf8320553038d123d006a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:31:04 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e6ef72c627c9141f7a65b42dbff023cb06fb0e801adf8320553038d123d006a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:31:04 np0005603435 podman[181404]: 2026-01-31 04:31:04.828808053 +0000 UTC m=+0.115497971 container init bfe2572cbb4d342d4bc7fcc9ae2c6785dd0ed7fbb6790794f623b4db31625b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_spence, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 30 23:31:04 np0005603435 podman[181404]: 2026-01-31 04:31:04.741208528 +0000 UTC m=+0.027898426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:31:04 np0005603435 podman[181404]: 2026-01-31 04:31:04.843179086 +0000 UTC m=+0.129868984 container start bfe2572cbb4d342d4bc7fcc9ae2c6785dd0ed7fbb6790794f623b4db31625b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_spence, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 30 23:31:04 np0005603435 podman[181404]: 2026-01-31 04:31:04.847456214 +0000 UTC m=+0.134146142 container attach bfe2572cbb4d342d4bc7fcc9ae2c6785dd0ed7fbb6790794f623b4db31625b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_spence, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:31:05 np0005603435 stoic_spence[181421]: {
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:    "0": [
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:        {
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "devices": [
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "/dev/loop3"
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            ],
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "lv_name": "ceph_lv0",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "lv_size": "21470642176",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "name": "ceph_lv0",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "tags": {
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.cluster_name": "ceph",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.crush_device_class": "",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.encrypted": "0",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.objectstore": "bluestore",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.osd_id": "0",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.type": "block",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.vdo": "0",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.with_tpm": "0"
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            },
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "type": "block",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "vg_name": "ceph_vg0"
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:        }
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:    ],
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:    "1": [
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:        {
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "devices": [
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "/dev/loop4"
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            ],
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "lv_name": "ceph_lv1",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "lv_size": "21470642176",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "name": "ceph_lv1",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "tags": {
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.cluster_name": "ceph",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.crush_device_class": "",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.encrypted": "0",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.objectstore": "bluestore",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.osd_id": "1",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.type": "block",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.vdo": "0",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.with_tpm": "0"
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            },
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "type": "block",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "vg_name": "ceph_vg1"
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:        }
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:    ],
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:    "2": [
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:        {
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "devices": [
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "/dev/loop5"
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            ],
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "lv_name": "ceph_lv2",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "lv_size": "21470642176",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "name": "ceph_lv2",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "tags": {
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.cluster_name": "ceph",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.crush_device_class": "",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.encrypted": "0",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.objectstore": "bluestore",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.osd_id": "2",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.type": "block",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.vdo": "0",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:                "ceph.with_tpm": "0"
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            },
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "type": "block",
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:            "vg_name": "ceph_vg2"
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:        }
Jan 30 23:31:05 np0005603435 stoic_spence[181421]:    ]
Jan 30 23:31:05 np0005603435 stoic_spence[181421]: }
Jan 30 23:31:05 np0005603435 systemd[1]: libpod-bfe2572cbb4d342d4bc7fcc9ae2c6785dd0ed7fbb6790794f623b4db31625b68.scope: Deactivated successfully.
Jan 30 23:31:05 np0005603435 podman[181404]: 2026-01-31 04:31:05.103508468 +0000 UTC m=+0.390198376 container died bfe2572cbb4d342d4bc7fcc9ae2c6785dd0ed7fbb6790794f623b4db31625b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_spence, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:31:05 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0e6ef72c627c9141f7a65b42dbff023cb06fb0e801adf8320553038d123d006a-merged.mount: Deactivated successfully.
Jan 30 23:31:05 np0005603435 podman[181404]: 2026-01-31 04:31:05.150029674 +0000 UTC m=+0.436719582 container remove bfe2572cbb4d342d4bc7fcc9ae2c6785dd0ed7fbb6790794f623b4db31625b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:31:05 np0005603435 systemd[1]: libpod-conmon-bfe2572cbb4d342d4bc7fcc9ae2c6785dd0ed7fbb6790794f623b4db31625b68.scope: Deactivated successfully.
Jan 30 23:31:05 np0005603435 podman[181504]: 2026-01-31 04:31:05.537542302 +0000 UTC m=+0.032076082 container create d44c5e74d4489157782fc3d8572182468d7db93a8d2ab3f6d463c980e3f13564 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 30 23:31:05 np0005603435 systemd[1]: Started libpod-conmon-d44c5e74d4489157782fc3d8572182468d7db93a8d2ab3f6d463c980e3f13564.scope.
Jan 30 23:31:05 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:31:05 np0005603435 podman[181504]: 2026-01-31 04:31:05.607889781 +0000 UTC m=+0.102423561 container init d44c5e74d4489157782fc3d8572182468d7db93a8d2ab3f6d463c980e3f13564 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hopper, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:31:05 np0005603435 podman[181504]: 2026-01-31 04:31:05.616709504 +0000 UTC m=+0.111243294 container start d44c5e74d4489157782fc3d8572182468d7db93a8d2ab3f6d463c980e3f13564 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 30 23:31:05 np0005603435 pensive_hopper[181520]: 167 167
Jan 30 23:31:05 np0005603435 podman[181504]: 2026-01-31 04:31:05.522632245 +0000 UTC m=+0.017166045 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:31:05 np0005603435 systemd[1]: libpod-d44c5e74d4489157782fc3d8572182468d7db93a8d2ab3f6d463c980e3f13564.scope: Deactivated successfully.
Jan 30 23:31:05 np0005603435 podman[181504]: 2026-01-31 04:31:05.621878254 +0000 UTC m=+0.116412054 container attach d44c5e74d4489157782fc3d8572182468d7db93a8d2ab3f6d463c980e3f13564 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hopper, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:31:05 np0005603435 podman[181504]: 2026-01-31 04:31:05.623094085 +0000 UTC m=+0.117627865 container died d44c5e74d4489157782fc3d8572182468d7db93a8d2ab3f6d463c980e3f13564 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hopper, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:31:05 np0005603435 systemd[1]: var-lib-containers-storage-overlay-650302b7e07ff0403ed430ee275d77fb712a0c7cde2d1a259b84f16fca5d9f72-merged.mount: Deactivated successfully.
Jan 30 23:31:05 np0005603435 podman[181504]: 2026-01-31 04:31:05.653965806 +0000 UTC m=+0.148499586 container remove d44c5e74d4489157782fc3d8572182468d7db93a8d2ab3f6d463c980e3f13564 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hopper, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 30 23:31:05 np0005603435 systemd[1]: libpod-conmon-d44c5e74d4489157782fc3d8572182468d7db93a8d2ab3f6d463c980e3f13564.scope: Deactivated successfully.
Jan 30 23:31:05 np0005603435 podman[181544]: 2026-01-31 04:31:05.80288035 +0000 UTC m=+0.044969587 container create 06ac7b57cfd3ed44c8ef83e5eac4dbfb9326abf7ad3687c3626b4471b7ba7765 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:31:05 np0005603435 systemd[1]: Started libpod-conmon-06ac7b57cfd3ed44c8ef83e5eac4dbfb9326abf7ad3687c3626b4471b7ba7765.scope.
Jan 30 23:31:05 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:31:05 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47149332aca9050fc5b90fa49d32cefc023ba59f7695c62c9f28dffe3c194f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:31:05 np0005603435 podman[181544]: 2026-01-31 04:31:05.782363741 +0000 UTC m=+0.024452958 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:31:05 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47149332aca9050fc5b90fa49d32cefc023ba59f7695c62c9f28dffe3c194f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:31:05 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47149332aca9050fc5b90fa49d32cefc023ba59f7695c62c9f28dffe3c194f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:31:05 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c47149332aca9050fc5b90fa49d32cefc023ba59f7695c62c9f28dffe3c194f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:31:05 np0005603435 podman[181544]: 2026-01-31 04:31:05.897306947 +0000 UTC m=+0.139396234 container init 06ac7b57cfd3ed44c8ef83e5eac4dbfb9326abf7ad3687c3626b4471b7ba7765 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 30 23:31:05 np0005603435 podman[181544]: 2026-01-31 04:31:05.910441519 +0000 UTC m=+0.152530766 container start 06ac7b57cfd3ed44c8ef83e5eac4dbfb9326abf7ad3687c3626b4471b7ba7765 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:31:05 np0005603435 podman[181544]: 2026-01-31 04:31:05.914658056 +0000 UTC m=+0.156747303 container attach 06ac7b57cfd3ed44c8ef83e5eac4dbfb9326abf7ad3687c3626b4471b7ba7765 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 30 23:31:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:31:06
Jan 30 23:31:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:31:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:31:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'images', 'volumes', '.mgr', 'vms', 'backups', 'default.rgw.control']
Jan 30 23:31:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:31:06 np0005603435 lvm[181640]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:31:06 np0005603435 lvm[181640]: VG ceph_vg1 finished
Jan 30 23:31:06 np0005603435 lvm[181639]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:31:06 np0005603435 lvm[181639]: VG ceph_vg0 finished
Jan 30 23:31:06 np0005603435 lvm[181642]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:31:06 np0005603435 lvm[181642]: VG ceph_vg2 finished
Jan 30 23:31:06 np0005603435 tender_williamson[181561]: {}
Jan 30 23:31:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:06 np0005603435 systemd[1]: libpod-06ac7b57cfd3ed44c8ef83e5eac4dbfb9326abf7ad3687c3626b4471b7ba7765.scope: Deactivated successfully.
Jan 30 23:31:06 np0005603435 systemd[1]: libpod-06ac7b57cfd3ed44c8ef83e5eac4dbfb9326abf7ad3687c3626b4471b7ba7765.scope: Consumed 1.146s CPU time.
Jan 30 23:31:06 np0005603435 podman[181544]: 2026-01-31 04:31:06.668476545 +0000 UTC m=+0.910565762 container died 06ac7b57cfd3ed44c8ef83e5eac4dbfb9326abf7ad3687c3626b4471b7ba7765 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 30 23:31:06 np0005603435 systemd[1]: var-lib-containers-storage-overlay-c47149332aca9050fc5b90fa49d32cefc023ba59f7695c62c9f28dffe3c194f3-merged.mount: Deactivated successfully.
Jan 30 23:31:06 np0005603435 podman[181544]: 2026-01-31 04:31:06.714341744 +0000 UTC m=+0.956430961 container remove 06ac7b57cfd3ed44c8ef83e5eac4dbfb9326abf7ad3687c3626b4471b7ba7765 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williamson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 30 23:31:06 np0005603435 systemd[1]: libpod-conmon-06ac7b57cfd3ed44c8ef83e5eac4dbfb9326abf7ad3687c3626b4471b7ba7765.scope: Deactivated successfully.
Jan 30 23:31:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:31:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:31:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:31:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:31:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:31:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:31:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:31:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:31:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:31:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:31:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:31:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:31:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:31:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:31:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:31:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:31:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:31:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:31:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:31:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:31:07 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:31:07 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:31:08 np0005603435 kernel: SELinux:  Converting 2779 SID table entries...
Jan 30 23:31:08 np0005603435 kernel: SELinux:  policy capability network_peer_controls=1
Jan 30 23:31:08 np0005603435 kernel: SELinux:  policy capability open_perms=1
Jan 30 23:31:08 np0005603435 kernel: SELinux:  policy capability extended_socket_class=1
Jan 30 23:31:08 np0005603435 kernel: SELinux:  policy capability always_check_network=0
Jan 30 23:31:08 np0005603435 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 30 23:31:08 np0005603435 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 30 23:31:08 np0005603435 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 30 23:31:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:09 np0005603435 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Jan 30 23:31:09 np0005603435 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 30 23:31:09 np0005603435 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Jan 30 23:31:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:31:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:31:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:16 np0005603435 systemd[1]: Stopping OpenSSH server daemon...
Jan 30 23:31:16 np0005603435 systemd[1]: sshd.service: Deactivated successfully.
Jan 30 23:31:16 np0005603435 systemd[1]: Stopped OpenSSH server daemon.
Jan 30 23:31:16 np0005603435 systemd[1]: sshd.service: Consumed 3.157s CPU time, read 32.0K from disk, written 24.0K to disk.
Jan 30 23:31:16 np0005603435 systemd[1]: Stopped target sshd-keygen.target.
Jan 30 23:31:16 np0005603435 systemd[1]: Stopping sshd-keygen.target...
Jan 30 23:31:16 np0005603435 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 30 23:31:16 np0005603435 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 30 23:31:16 np0005603435 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 30 23:31:16 np0005603435 systemd[1]: Reached target sshd-keygen.target.
Jan 30 23:31:16 np0005603435 systemd[1]: Starting OpenSSH server daemon...
Jan 30 23:31:16 np0005603435 systemd[1]: Started OpenSSH server daemon.
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:31:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:31:18 np0005603435 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 30 23:31:18 np0005603435 systemd[1]: Starting man-db-cache-update.service...
Jan 30 23:31:18 np0005603435 systemd[1]: Reloading.
Jan 30 23:31:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:18 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:31:18 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:31:18 np0005603435 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 30 23:31:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:31:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:21 np0005603435 podman[186031]: 2026-01-31 04:31:21.148065248 +0000 UTC m=+0.119293056 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 30 23:31:22 np0005603435 python3.9[187453]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 30 23:31:22 np0005603435 systemd[1]: Reloading.
Jan 30 23:31:22 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:31:22 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:31:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:23 np0005603435 python3.9[188917]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 30 23:31:23 np0005603435 systemd[1]: Reloading.
Jan 30 23:31:23 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:31:23 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:31:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:31:24 np0005603435 python3.9[190185]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 30 23:31:24 np0005603435 systemd[1]: Reloading.
Jan 30 23:31:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:24 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:31:24 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:31:25 np0005603435 python3.9[191342]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 30 23:31:25 np0005603435 systemd[1]: Reloading.
Jan 30 23:31:25 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:31:25 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:31:26 np0005603435 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 30 23:31:26 np0005603435 systemd[1]: Finished man-db-cache-update.service.
Jan 30 23:31:26 np0005603435 systemd[1]: man-db-cache-update.service: Consumed 9.783s CPU time.
Jan 30 23:31:26 np0005603435 systemd[1]: run-rdd48c914c828495f80d816325adae10d.service: Deactivated successfully.
Jan 30 23:31:26 np0005603435 podman[191999]: 2026-01-31 04:31:26.324600283 +0000 UTC m=+0.074902584 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 30 23:31:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:26 np0005603435 python3.9[192142]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:26 np0005603435 systemd[1]: Reloading.
Jan 30 23:31:27 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:31:27 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:31:28 np0005603435 python3.9[192333]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:28 np0005603435 systemd[1]: Reloading.
Jan 30 23:31:28 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:31:28 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:31:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:29 np0005603435 python3.9[192523]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:31:29 np0005603435 systemd[1]: Reloading.
Jan 30 23:31:29 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:31:29 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:31:30 np0005603435 python3.9[192713]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:32 np0005603435 python3.9[192868]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:32 np0005603435 systemd[1]: Reloading.
Jan 30 23:31:32 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:31:32 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:31:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:33 np0005603435 python3.9[193058]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 30 23:31:33 np0005603435 systemd[1]: Reloading.
Jan 30 23:31:33 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:31:33 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:31:34 np0005603435 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 30 23:31:34 np0005603435 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 30 23:31:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:31:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:35 np0005603435 python3.9[193251]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:35 np0005603435 python3.9[193406]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:36 np0005603435 python3.9[193561]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:31:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:31:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:31:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:31:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:31:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:31:37 np0005603435 python3.9[193716]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:38 np0005603435 python3.9[193871]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:31:40 np0005603435 python3.9[194026]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:40 np0005603435 python3.9[194181]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:41 np0005603435 python3.9[194336]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:42 np0005603435 python3.9[194491]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:43 np0005603435 python3.9[194646]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:44 np0005603435 python3.9[194801]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:31:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:45 np0005603435 python3.9[194956]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:45 np0005603435 python3.9[195111]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:31:46.133513) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833906133583, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2037, "num_deletes": 251, "total_data_size": 3554081, "memory_usage": 3603328, "flush_reason": "Manual Compaction"}
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833906153751, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3477836, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9772, "largest_seqno": 11808, "table_properties": {"data_size": 3468569, "index_size": 5889, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17748, "raw_average_key_size": 19, "raw_value_size": 3450246, "raw_average_value_size": 3779, "num_data_blocks": 267, "num_entries": 913, "num_filter_entries": 913, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833672, "oldest_key_time": 1769833672, "file_creation_time": 1769833906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 20333 microseconds, and 9425 cpu microseconds.
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:31:46.153838) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3477836 bytes OK
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:31:46.153874) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:31:46.155350) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:31:46.155372) EVENT_LOG_v1 {"time_micros": 1769833906155364, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:31:46.155406) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3545595, prev total WAL file size 3545595, number of live WAL files 2.
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:31:46.156668) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3396KB)], [26(5971KB)]
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833906156721, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9592480, "oldest_snapshot_seqno": -1}
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3744 keys, 8025520 bytes, temperature: kUnknown
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833906204388, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8025520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7996773, "index_size": 18269, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 89948, "raw_average_key_size": 24, "raw_value_size": 7925524, "raw_average_value_size": 2116, "num_data_blocks": 790, "num_entries": 3744, "num_filter_entries": 3744, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769833906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:31:46.204740) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8025520 bytes
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:31:46.206329) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.7 rd, 168.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.8 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4258, records dropped: 514 output_compression: NoCompression
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:31:46.206373) EVENT_LOG_v1 {"time_micros": 1769833906206351, "job": 10, "event": "compaction_finished", "compaction_time_micros": 47785, "compaction_time_cpu_micros": 27664, "output_level": 6, "num_output_files": 1, "total_output_size": 8025520, "num_input_records": 4258, "num_output_records": 3744, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833906207472, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769833906209091, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:31:46.156547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:31:46.209289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:31:46.209295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:31:46.209298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:31:46.209301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:31:46 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:31:46.209304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:31:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:46 np0005603435 python3.9[195266]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 30 23:31:47 np0005603435 python3.9[195421]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:31:48 np0005603435 python3.9[195573]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:31:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:49 np0005603435 python3.9[195725]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:31:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:31:49 np0005603435 python3.9[195877]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:31:50 np0005603435 python3.9[196029]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:31:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:51 np0005603435 python3.9[196181]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:31:51 np0005603435 podman[196182]: 2026-01-31 04:31:51.358705422 +0000 UTC m=+0.092292652 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 30 23:31:52 np0005603435 python3.9[196358]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:31:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:53 np0005603435 python3.9[196510]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:31:53 np0005603435 python3.9[196635]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769833912.295986-557-100696771038698/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:31:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:31:54 np0005603435 python3.9[196787]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:31:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:55 np0005603435 python3.9[196912]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769833913.9996586-557-183060617829655/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:31:55 np0005603435 python3.9[197064]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:31:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:31:55.894 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:31:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:31:55.895 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:31:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:31:55.896 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:31:56 np0005603435 python3.9[197189]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769833915.2175272-557-98999757727661/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:31:56 np0005603435 podman[197190]: 2026-01-31 04:31:56.500245144 +0000 UTC m=+0.094572178 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 30 23:31:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:57 np0005603435 python3.9[197360]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:31:57 np0005603435 python3.9[197485]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769833916.5806074-557-26841116514228/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:31:58 np0005603435 python3.9[197637]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:31:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:31:58 np0005603435 python3.9[197762]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769833917.91113-557-154191629816285/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:31:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:31:59 np0005603435 python3.9[197914]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:00 np0005603435 python3.9[198039]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769833919.119387-557-35866036553090/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:00 np0005603435 python3.9[198191]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:01 np0005603435 python3.9[198314]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769833920.291978-557-215986398831186/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:01 np0005603435 python3.9[198466]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:02 np0005603435 python3.9[198591]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769833921.4063907-557-226165207586536/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:03 np0005603435 python3.9[198743]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 30 23:32:04 np0005603435 python3.9[198896]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:32:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:04 np0005603435 python3.9[199048]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:05 np0005603435 python3.9[199200]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:06 np0005603435 python3.9[199352]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:32:06
Jan 30 23:32:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:32:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:32:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['images', 'volumes', '.rgw.root', 'backups', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Jan 30 23:32:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:32:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:06 np0005603435 python3.9[199504]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:32:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:32:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:32:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:32:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:32:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:32:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:32:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:32:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:32:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:32:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:32:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:32:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:32:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:32:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:32:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:32:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:32:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:32:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:32:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:32:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:32:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:32:07 np0005603435 python3.9[199723]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:32:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:32:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:32:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:32:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:32:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:32:07 np0005603435 podman[199889]: 2026-01-31 04:32:07.913532657 +0000 UTC m=+0.046924686 container create 4c8e7602a6d21f992c98634006162c905ab18d3ac591f101eda5e07279001a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:32:07 np0005603435 systemd[1]: Started libpod-conmon-4c8e7602a6d21f992c98634006162c905ab18d3ac591f101eda5e07279001a31.scope.
Jan 30 23:32:07 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:32:07 np0005603435 podman[199889]: 2026-01-31 04:32:07.897094522 +0000 UTC m=+0.030486551 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:32:07 np0005603435 podman[199889]: 2026-01-31 04:32:07.993782171 +0000 UTC m=+0.127174260 container init 4c8e7602a6d21f992c98634006162c905ab18d3ac591f101eda5e07279001a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:32:08 np0005603435 podman[199889]: 2026-01-31 04:32:08.001607474 +0000 UTC m=+0.134999493 container start 4c8e7602a6d21f992c98634006162c905ab18d3ac591f101eda5e07279001a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_sammet, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:32:08 np0005603435 friendly_sammet[199939]: 167 167
Jan 30 23:32:08 np0005603435 systemd[1]: libpod-4c8e7602a6d21f992c98634006162c905ab18d3ac591f101eda5e07279001a31.scope: Deactivated successfully.
Jan 30 23:32:08 np0005603435 conmon[199939]: conmon 4c8e7602a6d21f992c98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c8e7602a6d21f992c98634006162c905ab18d3ac591f101eda5e07279001a31.scope/container/memory.events
Jan 30 23:32:08 np0005603435 podman[199889]: 2026-01-31 04:32:08.006719289 +0000 UTC m=+0.140111328 container attach 4c8e7602a6d21f992c98634006162c905ab18d3ac591f101eda5e07279001a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:32:08 np0005603435 podman[199889]: 2026-01-31 04:32:08.007462388 +0000 UTC m=+0.140854477 container died 4c8e7602a6d21f992c98634006162c905ab18d3ac591f101eda5e07279001a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 30 23:32:08 np0005603435 systemd[1]: var-lib-containers-storage-overlay-7c73ecf9b32eedf7fca6403de91f771cf4f6c5c8e7b23efd23b539950971145c-merged.mount: Deactivated successfully.
Jan 30 23:32:08 np0005603435 podman[199889]: 2026-01-31 04:32:08.052352982 +0000 UTC m=+0.185745011 container remove 4c8e7602a6d21f992c98634006162c905ab18d3ac591f101eda5e07279001a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_sammet, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 30 23:32:08 np0005603435 systemd[1]: libpod-conmon-4c8e7602a6d21f992c98634006162c905ab18d3ac591f101eda5e07279001a31.scope: Deactivated successfully.
Jan 30 23:32:08 np0005603435 podman[199993]: 2026-01-31 04:32:08.22173806 +0000 UTC m=+0.048387882 container create e72ac6a3032ebfbaf2994145a34e2f49ab937c1bb28fdbbc2159144dc1158020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chaum, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 30 23:32:08 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:32:08 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:32:08 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:32:08 np0005603435 python3.9[199985]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:08 np0005603435 systemd[1]: Started libpod-conmon-e72ac6a3032ebfbaf2994145a34e2f49ab937c1bb28fdbbc2159144dc1158020.scope.
Jan 30 23:32:08 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:32:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb37b36cbca0a1f66bf791ce4997304c283d5903a06f008808c19717ef9ece2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:32:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb37b36cbca0a1f66bf791ce4997304c283d5903a06f008808c19717ef9ece2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:32:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb37b36cbca0a1f66bf791ce4997304c283d5903a06f008808c19717ef9ece2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:32:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb37b36cbca0a1f66bf791ce4997304c283d5903a06f008808c19717ef9ece2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:32:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb37b36cbca0a1f66bf791ce4997304c283d5903a06f008808c19717ef9ece2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:32:08 np0005603435 podman[199993]: 2026-01-31 04:32:08.204918576 +0000 UTC m=+0.031568428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:32:08 np0005603435 podman[199993]: 2026-01-31 04:32:08.301772469 +0000 UTC m=+0.128422341 container init e72ac6a3032ebfbaf2994145a34e2f49ab937c1bb28fdbbc2159144dc1158020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chaum, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:32:08 np0005603435 podman[199993]: 2026-01-31 04:32:08.314449971 +0000 UTC m=+0.141099793 container start e72ac6a3032ebfbaf2994145a34e2f49ab937c1bb28fdbbc2159144dc1158020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chaum, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 30 23:32:08 np0005603435 podman[199993]: 2026-01-31 04:32:08.318097611 +0000 UTC m=+0.144747483 container attach e72ac6a3032ebfbaf2994145a34e2f49ab937c1bb28fdbbc2159144dc1158020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 30 23:32:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:08 np0005603435 heuristic_chaum[200010]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:32:08 np0005603435 heuristic_chaum[200010]: --> All data devices are unavailable
Jan 30 23:32:08 np0005603435 systemd[1]: libpod-e72ac6a3032ebfbaf2994145a34e2f49ab937c1bb28fdbbc2159144dc1158020.scope: Deactivated successfully.
Jan 30 23:32:08 np0005603435 podman[199993]: 2026-01-31 04:32:08.838853653 +0000 UTC m=+0.665503485 container died e72ac6a3032ebfbaf2994145a34e2f49ab937c1bb28fdbbc2159144dc1158020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:32:08 np0005603435 systemd[1]: var-lib-containers-storage-overlay-9cb37b36cbca0a1f66bf791ce4997304c283d5903a06f008808c19717ef9ece2-merged.mount: Deactivated successfully.
Jan 30 23:32:08 np0005603435 podman[199993]: 2026-01-31 04:32:08.887965892 +0000 UTC m=+0.714615714 container remove e72ac6a3032ebfbaf2994145a34e2f49ab937c1bb28fdbbc2159144dc1158020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chaum, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:32:08 np0005603435 systemd[1]: libpod-conmon-e72ac6a3032ebfbaf2994145a34e2f49ab937c1bb28fdbbc2159144dc1158020.scope: Deactivated successfully.
Jan 30 23:32:08 np0005603435 python3.9[200178]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:32:09 np0005603435 podman[200355]: 2026-01-31 04:32:09.339308237 +0000 UTC m=+0.055890186 container create 6e6f2a1112944c0ff0be8850d76868e7d6477d133758b62cd1ed2ba9dd4c8792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:32:09 np0005603435 systemd[1]: Started libpod-conmon-6e6f2a1112944c0ff0be8850d76868e7d6477d133758b62cd1ed2ba9dd4c8792.scope.
Jan 30 23:32:09 np0005603435 podman[200355]: 2026-01-31 04:32:09.317369367 +0000 UTC m=+0.033951396 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:32:09 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:32:09 np0005603435 podman[200355]: 2026-01-31 04:32:09.439913932 +0000 UTC m=+0.156495931 container init 6e6f2a1112944c0ff0be8850d76868e7d6477d133758b62cd1ed2ba9dd4c8792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 30 23:32:09 np0005603435 podman[200355]: 2026-01-31 04:32:09.447025077 +0000 UTC m=+0.163607046 container start 6e6f2a1112944c0ff0be8850d76868e7d6477d133758b62cd1ed2ba9dd4c8792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:32:09 np0005603435 podman[200355]: 2026-01-31 04:32:09.450851801 +0000 UTC m=+0.167433760 container attach 6e6f2a1112944c0ff0be8850d76868e7d6477d133758b62cd1ed2ba9dd4c8792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_roentgen, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:32:09 np0005603435 peaceful_roentgen[200413]: 167 167
Jan 30 23:32:09 np0005603435 systemd[1]: libpod-6e6f2a1112944c0ff0be8850d76868e7d6477d133758b62cd1ed2ba9dd4c8792.scope: Deactivated successfully.
Jan 30 23:32:09 np0005603435 conmon[200413]: conmon 6e6f2a1112944c0ff0be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e6f2a1112944c0ff0be8850d76868e7d6477d133758b62cd1ed2ba9dd4c8792.scope/container/memory.events
Jan 30 23:32:09 np0005603435 podman[200355]: 2026-01-31 04:32:09.454288686 +0000 UTC m=+0.170870635 container died 6e6f2a1112944c0ff0be8850d76868e7d6477d133758b62cd1ed2ba9dd4c8792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 30 23:32:09 np0005603435 systemd[1]: var-lib-containers-storage-overlay-f7354a82242f3398b393f692005b37ca523cf2817407ee45fb24396445495ff7-merged.mount: Deactivated successfully.
Jan 30 23:32:09 np0005603435 podman[200355]: 2026-01-31 04:32:09.49630343 +0000 UTC m=+0.212885379 container remove 6e6f2a1112944c0ff0be8850d76868e7d6477d133758b62cd1ed2ba9dd4c8792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:32:09 np0005603435 systemd[1]: libpod-conmon-6e6f2a1112944c0ff0be8850d76868e7d6477d133758b62cd1ed2ba9dd4c8792.scope: Deactivated successfully.
Jan 30 23:32:09 np0005603435 python3.9[200428]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:09 np0005603435 podman[200449]: 2026-01-31 04:32:09.665683787 +0000 UTC m=+0.061889064 container create 0f0f92334995f476a491dbe32b7be06fd964874bb3d08ca2cf70e5ea4242659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elion, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 30 23:32:09 np0005603435 systemd[1]: Started libpod-conmon-0f0f92334995f476a491dbe32b7be06fd964874bb3d08ca2cf70e5ea4242659c.scope.
Jan 30 23:32:09 np0005603435 podman[200449]: 2026-01-31 04:32:09.640900407 +0000 UTC m=+0.037105724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:32:09 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:32:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86625745560d8b6500340d7ca849ae41785918a79e849d82c01454272133f0c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:32:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86625745560d8b6500340d7ca849ae41785918a79e849d82c01454272133f0c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:32:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86625745560d8b6500340d7ca849ae41785918a79e849d82c01454272133f0c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:32:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86625745560d8b6500340d7ca849ae41785918a79e849d82c01454272133f0c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:32:09 np0005603435 podman[200449]: 2026-01-31 04:32:09.770738032 +0000 UTC m=+0.166943329 container init 0f0f92334995f476a491dbe32b7be06fd964874bb3d08ca2cf70e5ea4242659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elion, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 30 23:32:09 np0005603435 podman[200449]: 2026-01-31 04:32:09.777244532 +0000 UTC m=+0.173449809 container start 0f0f92334995f476a491dbe32b7be06fd964874bb3d08ca2cf70e5ea4242659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elion, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 30 23:32:09 np0005603435 podman[200449]: 2026-01-31 04:32:09.78083736 +0000 UTC m=+0.177042637 container attach 0f0f92334995f476a491dbe32b7be06fd964874bb3d08ca2cf70e5ea4242659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elion, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 30 23:32:10 np0005603435 angry_elion[200489]: {
Jan 30 23:32:10 np0005603435 angry_elion[200489]:    "0": [
Jan 30 23:32:10 np0005603435 angry_elion[200489]:        {
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "devices": [
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "/dev/loop3"
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            ],
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "lv_name": "ceph_lv0",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "lv_size": "21470642176",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "name": "ceph_lv0",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "tags": {
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.cluster_name": "ceph",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.crush_device_class": "",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.encrypted": "0",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.objectstore": "bluestore",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.osd_id": "0",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.type": "block",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.vdo": "0",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.with_tpm": "0"
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            },
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "type": "block",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "vg_name": "ceph_vg0"
Jan 30 23:32:10 np0005603435 angry_elion[200489]:        }
Jan 30 23:32:10 np0005603435 angry_elion[200489]:    ],
Jan 30 23:32:10 np0005603435 angry_elion[200489]:    "1": [
Jan 30 23:32:10 np0005603435 angry_elion[200489]:        {
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "devices": [
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "/dev/loop4"
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            ],
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "lv_name": "ceph_lv1",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "lv_size": "21470642176",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "name": "ceph_lv1",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "tags": {
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.cluster_name": "ceph",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.crush_device_class": "",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.encrypted": "0",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.objectstore": "bluestore",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.osd_id": "1",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.type": "block",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.vdo": "0",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.with_tpm": "0"
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            },
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "type": "block",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "vg_name": "ceph_vg1"
Jan 30 23:32:10 np0005603435 angry_elion[200489]:        }
Jan 30 23:32:10 np0005603435 angry_elion[200489]:    ],
Jan 30 23:32:10 np0005603435 angry_elion[200489]:    "2": [
Jan 30 23:32:10 np0005603435 angry_elion[200489]:        {
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "devices": [
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "/dev/loop5"
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            ],
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "lv_name": "ceph_lv2",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "lv_size": "21470642176",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "name": "ceph_lv2",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "tags": {
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.cluster_name": "ceph",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.crush_device_class": "",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.encrypted": "0",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.objectstore": "bluestore",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.osd_id": "2",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.type": "block",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.vdo": "0",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:                "ceph.with_tpm": "0"
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            },
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "type": "block",
Jan 30 23:32:10 np0005603435 angry_elion[200489]:            "vg_name": "ceph_vg2"
Jan 30 23:32:10 np0005603435 angry_elion[200489]:        }
Jan 30 23:32:10 np0005603435 angry_elion[200489]:    ]
Jan 30 23:32:10 np0005603435 angry_elion[200489]: }
Jan 30 23:32:10 np0005603435 systemd[1]: libpod-0f0f92334995f476a491dbe32b7be06fd964874bb3d08ca2cf70e5ea4242659c.scope: Deactivated successfully.
Jan 30 23:32:10 np0005603435 podman[200449]: 2026-01-31 04:32:10.096570529 +0000 UTC m=+0.492775806 container died 0f0f92334995f476a491dbe32b7be06fd964874bb3d08ca2cf70e5ea4242659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elion, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:32:10 np0005603435 systemd[1]: var-lib-containers-storage-overlay-86625745560d8b6500340d7ca849ae41785918a79e849d82c01454272133f0c8-merged.mount: Deactivated successfully.
Jan 30 23:32:10 np0005603435 podman[200449]: 2026-01-31 04:32:10.144626161 +0000 UTC m=+0.540831448 container remove 0f0f92334995f476a491dbe32b7be06fd964874bb3d08ca2cf70e5ea4242659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elion, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:32:10 np0005603435 systemd[1]: libpod-conmon-0f0f92334995f476a491dbe32b7be06fd964874bb3d08ca2cf70e5ea4242659c.scope: Deactivated successfully.
Jan 30 23:32:10 np0005603435 python3.9[200626]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:10 np0005603435 podman[200774]: 2026-01-31 04:32:10.573505042 +0000 UTC m=+0.043940132 container create f7e37af0adc3babb57536136982288c22e67f7197258cad7f9074d4cd001f5ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 30 23:32:10 np0005603435 systemd[1]: Started libpod-conmon-f7e37af0adc3babb57536136982288c22e67f7197258cad7f9074d4cd001f5ac.scope.
Jan 30 23:32:10 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:32:10 np0005603435 podman[200774]: 2026-01-31 04:32:10.646711553 +0000 UTC m=+0.117146713 container init f7e37af0adc3babb57536136982288c22e67f7197258cad7f9074d4cd001f5ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_allen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:32:10 np0005603435 podman[200774]: 2026-01-31 04:32:10.55227039 +0000 UTC m=+0.022705500 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:32:10 np0005603435 podman[200774]: 2026-01-31 04:32:10.652721681 +0000 UTC m=+0.123156751 container start f7e37af0adc3babb57536136982288c22e67f7197258cad7f9074d4cd001f5ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:32:10 np0005603435 podman[200774]: 2026-01-31 04:32:10.655849528 +0000 UTC m=+0.126284678 container attach f7e37af0adc3babb57536136982288c22e67f7197258cad7f9074d4cd001f5ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:32:10 np0005603435 blissful_allen[200814]: 167 167
Jan 30 23:32:10 np0005603435 systemd[1]: libpod-f7e37af0adc3babb57536136982288c22e67f7197258cad7f9074d4cd001f5ac.scope: Deactivated successfully.
Jan 30 23:32:10 np0005603435 podman[200774]: 2026-01-31 04:32:10.658310519 +0000 UTC m=+0.128745589 container died f7e37af0adc3babb57536136982288c22e67f7197258cad7f9074d4cd001f5ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 30 23:32:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:10 np0005603435 systemd[1]: var-lib-containers-storage-overlay-13366695ccd451a3955adaac74193e634324b7aaf1e7002fcd283885ac8e59a7-merged.mount: Deactivated successfully.
Jan 30 23:32:10 np0005603435 podman[200774]: 2026-01-31 04:32:10.698457787 +0000 UTC m=+0.168892887 container remove f7e37af0adc3babb57536136982288c22e67f7197258cad7f9074d4cd001f5ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_allen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 30 23:32:10 np0005603435 systemd[1]: libpod-conmon-f7e37af0adc3babb57536136982288c22e67f7197258cad7f9074d4cd001f5ac.scope: Deactivated successfully.
Jan 30 23:32:10 np0005603435 podman[200890]: 2026-01-31 04:32:10.87417598 +0000 UTC m=+0.044759822 container create 3fc420f829970c181a6dea4d35e9768d72825a7e56f0a7c74bfff329cdeb9f3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hofstadter, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 30 23:32:10 np0005603435 systemd[1]: Started libpod-conmon-3fc420f829970c181a6dea4d35e9768d72825a7e56f0a7c74bfff329cdeb9f3d.scope.
Jan 30 23:32:10 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:32:10 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c288b7017d08b232ecb331fb4ed4805a3344faca115c904cb67b19d44ecb4fb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:32:10 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c288b7017d08b232ecb331fb4ed4805a3344faca115c904cb67b19d44ecb4fb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:32:10 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c288b7017d08b232ecb331fb4ed4805a3344faca115c904cb67b19d44ecb4fb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:32:10 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c288b7017d08b232ecb331fb4ed4805a3344faca115c904cb67b19d44ecb4fb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:32:10 np0005603435 podman[200890]: 2026-01-31 04:32:10.856618958 +0000 UTC m=+0.027202880 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:32:10 np0005603435 podman[200890]: 2026-01-31 04:32:10.955683415 +0000 UTC m=+0.126267277 container init 3fc420f829970c181a6dea4d35e9768d72825a7e56f0a7c74bfff329cdeb9f3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:32:10 np0005603435 podman[200890]: 2026-01-31 04:32:10.971377312 +0000 UTC m=+0.141961164 container start 3fc420f829970c181a6dea4d35e9768d72825a7e56f0a7c74bfff329cdeb9f3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hofstadter, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:32:10 np0005603435 podman[200890]: 2026-01-31 04:32:10.97458469 +0000 UTC m=+0.145181113 container attach 3fc420f829970c181a6dea4d35e9768d72825a7e56f0a7c74bfff329cdeb9f3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 30 23:32:11 np0005603435 python3.9[200884]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:11 np0005603435 python3.9[201112]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:11 np0005603435 lvm[201138]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:32:11 np0005603435 lvm[201138]: VG ceph_vg1 finished
Jan 30 23:32:11 np0005603435 lvm[201137]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:32:11 np0005603435 lvm[201137]: VG ceph_vg0 finished
Jan 30 23:32:11 np0005603435 lvm[201140]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:32:11 np0005603435 lvm[201140]: VG ceph_vg2 finished
Jan 30 23:32:11 np0005603435 reverent_hofstadter[200907]: {}
Jan 30 23:32:11 np0005603435 systemd[1]: libpod-3fc420f829970c181a6dea4d35e9768d72825a7e56f0a7c74bfff329cdeb9f3d.scope: Deactivated successfully.
Jan 30 23:32:11 np0005603435 systemd[1]: libpod-3fc420f829970c181a6dea4d35e9768d72825a7e56f0a7c74bfff329cdeb9f3d.scope: Consumed 1.220s CPU time.
Jan 30 23:32:11 np0005603435 podman[200890]: 2026-01-31 04:32:11.826769338 +0000 UTC m=+0.997353220 container died 3fc420f829970c181a6dea4d35e9768d72825a7e56f0a7c74bfff329cdeb9f3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:32:11 np0005603435 systemd[1]: var-lib-containers-storage-overlay-c288b7017d08b232ecb331fb4ed4805a3344faca115c904cb67b19d44ecb4fb1-merged.mount: Deactivated successfully.
Jan 30 23:32:11 np0005603435 podman[200890]: 2026-01-31 04:32:11.883699739 +0000 UTC m=+1.054283591 container remove 3fc420f829970c181a6dea4d35e9768d72825a7e56f0a7c74bfff329cdeb9f3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hofstadter, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:32:11 np0005603435 systemd[1]: libpod-conmon-3fc420f829970c181a6dea4d35e9768d72825a7e56f0a7c74bfff329cdeb9f3d.scope: Deactivated successfully.
Jan 30 23:32:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:32:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:32:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:32:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:32:12 np0005603435 python3.9[201330]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:12 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:32:12 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:32:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:12 np0005603435 python3.9[201482]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:13 np0005603435 python3.9[201634]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:32:14 np0005603435 python3.9[201757]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833933.1335423-778-109671573926632/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:15 np0005603435 python3.9[201909]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:15 np0005603435 python3.9[202032]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833934.5184414-778-113392098066640/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:16 np0005603435 python3.9[202184]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:32:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:32:16 np0005603435 python3.9[202307]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833935.8670192-778-194954435943647/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:17 np0005603435 python3.9[202459]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:18 np0005603435 python3.9[202582]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833937.1102958-778-280272490118380/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:18 np0005603435 python3.9[202734]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:32:19 np0005603435 python3.9[202857]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833938.433015-778-273818883553307/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:20 np0005603435 python3.9[203009]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:20 np0005603435 python3.9[203132]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833939.6508412-778-15478766415110/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:21 np0005603435 python3.9[203284]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:21 np0005603435 podman[203379]: 2026-01-31 04:32:21.868342083 +0000 UTC m=+0.117184294 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 30 23:32:21 np0005603435 python3.9[203424]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833940.8234768-778-1761169773599/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:22 np0005603435 python3.9[203585]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:23 np0005603435 python3.9[203708]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833942.255866-778-104385569168911/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:23 np0005603435 python3.9[203860]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:32:24 np0005603435 python3.9[203983]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833943.4123118-778-186857181949191/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:25 np0005603435 python3.9[204135]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:25 np0005603435 python3.9[204258]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833944.6453328-778-95667595146229/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:26 np0005603435 python3.9[204410]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:26 np0005603435 podman[204505]: 2026-01-31 04:32:26.941682618 +0000 UTC m=+0.077811828 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 30 23:32:27 np0005603435 python3.9[204550]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833945.923159-778-101312221652746/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:27 np0005603435 auditd[704]: Audit daemon rotating log files
Jan 30 23:32:27 np0005603435 python3.9[204704]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:28 np0005603435 python3.9[204827]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833947.2941864-778-196543448967339/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:29 np0005603435 python3.9[204979]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:32:29 np0005603435 python3.9[205102]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833948.599626-778-69015079283862/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:30 np0005603435 python3.9[205254]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:30 np0005603435 python3.9[205377]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833949.8765037-778-227168533455511/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:31 np0005603435 python3.9[205527]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:32:32 np0005603435 python3.9[205682]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 30 23:32:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:34 np0005603435 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 30 23:32:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:32:34 np0005603435 python3.9[205839]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:35 np0005603435 python3.9[205991]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:35 np0005603435 python3.9[206143]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:36 np0005603435 python3.9[206295]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:32:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:32:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:32:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:32:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:32:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:32:37 np0005603435 python3.9[206447]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:37 np0005603435 python3.9[206599]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:38 np0005603435 python3.9[206751]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:38 np0005603435 python3.9[206903]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:32:39 np0005603435 python3.9[207055]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:40 np0005603435 python3.9[207207]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:41 np0005603435 python3.9[207359]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:32:41 np0005603435 systemd[1]: Reloading.
Jan 30 23:32:41 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:32:41 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:32:41 np0005603435 systemd[1]: Starting libvirt logging daemon socket...
Jan 30 23:32:41 np0005603435 systemd[1]: Listening on libvirt logging daemon socket.
Jan 30 23:32:41 np0005603435 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 30 23:32:41 np0005603435 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 30 23:32:41 np0005603435 systemd[1]: Starting libvirt logging daemon...
Jan 30 23:32:41 np0005603435 systemd[1]: Started libvirt logging daemon.
Jan 30 23:32:42 np0005603435 python3.9[207552]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:32:42 np0005603435 systemd[1]: Reloading.
Jan 30 23:32:42 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:32:42 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:32:42 np0005603435 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 30 23:32:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:42 np0005603435 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 30 23:32:42 np0005603435 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 30 23:32:42 np0005603435 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 30 23:32:42 np0005603435 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 30 23:32:42 np0005603435 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 30 23:32:42 np0005603435 systemd[1]: Starting libvirt nodedev daemon...
Jan 30 23:32:42 np0005603435 systemd[1]: Started libvirt nodedev daemon.
Jan 30 23:32:43 np0005603435 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 30 23:32:43 np0005603435 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 30 23:32:43 np0005603435 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 30 23:32:43 np0005603435 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 30 23:32:43 np0005603435 python3.9[207770]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:32:43 np0005603435 systemd[1]: Reloading.
Jan 30 23:32:43 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:32:43 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:32:43 np0005603435 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 30 23:32:43 np0005603435 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 30 23:32:43 np0005603435 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 30 23:32:44 np0005603435 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 30 23:32:44 np0005603435 systemd[1]: Starting libvirt proxy daemon...
Jan 30 23:32:44 np0005603435 systemd[1]: Started libvirt proxy daemon.
Jan 30 23:32:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:32:44 np0005603435 setroubleshoot[207698]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a91d3eeb-cf1b-46aa-933e-a6c78f11f6e8
Jan 30 23:32:44 np0005603435 setroubleshoot[207698]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 30 23:32:44 np0005603435 setroubleshoot[207698]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a91d3eeb-cf1b-46aa-933e-a6c78f11f6e8
Jan 30 23:32:44 np0005603435 setroubleshoot[207698]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 30 23:32:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:44 np0005603435 python3.9[207990]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:32:44 np0005603435 systemd[1]: Reloading.
Jan 30 23:32:44 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:32:45 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:32:45 np0005603435 systemd[1]: Listening on libvirt locking daemon socket.
Jan 30 23:32:45 np0005603435 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 30 23:32:45 np0005603435 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 23:32:45 np0005603435 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 30 23:32:45 np0005603435 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 30 23:32:45 np0005603435 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 30 23:32:45 np0005603435 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 30 23:32:45 np0005603435 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 30 23:32:45 np0005603435 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 30 23:32:45 np0005603435 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 30 23:32:45 np0005603435 systemd[1]: Starting libvirt QEMU daemon...
Jan 30 23:32:45 np0005603435 systemd[1]: Started libvirt QEMU daemon.
Jan 30 23:32:46 np0005603435 python3.9[208205]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:32:46 np0005603435 systemd[1]: Reloading.
Jan 30 23:32:46 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:32:46 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:32:46 np0005603435 systemd[1]: Starting libvirt secret daemon socket...
Jan 30 23:32:46 np0005603435 systemd[1]: Listening on libvirt secret daemon socket.
Jan 30 23:32:46 np0005603435 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 30 23:32:46 np0005603435 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 30 23:32:46 np0005603435 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 30 23:32:46 np0005603435 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 30 23:32:46 np0005603435 systemd[1]: Starting libvirt secret daemon...
Jan 30 23:32:46 np0005603435 systemd[1]: Started libvirt secret daemon.
Jan 30 23:32:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:47 np0005603435 python3.9[208416]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:48 np0005603435 python3.9[208568]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 30 23:32:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:48 np0005603435 python3.9[208720]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:32:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:32:49 np0005603435 python3.9[208874]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 30 23:32:50 np0005603435 python3.9[209024]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:50 np0005603435 python3.9[209145]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769833969.8689768-1136-126906600656269/.source.xml follow=False _original_basename=secret.xml.j2 checksum=ee8dacc6ffcbcfbefc73f090a48944199c8adabc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:51 np0005603435 python3.9[209297]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 95d2f419-0dd0-56f2-a094-353f8c7597ed#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:32:52 np0005603435 podman[209384]: 2026-01-31 04:32:52.149324904 +0000 UTC m=+0.108421115 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 30 23:32:52 np0005603435 python3.9[209486]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:32:54 np0005603435 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 30 23:32:54 np0005603435 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 30 23:32:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:54 np0005603435 python3.9[209949]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:55 np0005603435 python3.9[210101]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:32:55.896 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:32:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:32:55.897 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:32:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:32:55.897 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:32:56 np0005603435 python3.9[210224]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769833975.103761-1191-249391306101571/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:57 np0005603435 python3.9[210376]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:57 np0005603435 podman[210377]: 2026-01-31 04:32:57.095575468 +0000 UTC m=+0.063769747 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 30 23:32:57 np0005603435 python3.9[210549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:58 np0005603435 python3.9[210627]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:32:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:32:58 np0005603435 python3.9[210779]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:32:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:32:59 np0005603435 python3.9[210857]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.u95ubvbb recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:00 np0005603435 python3.9[211009]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:33:00 np0005603435 python3.9[211087]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:01 np0005603435 python3.9[211239]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:33:02 np0005603435 python3[211392]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 30 23:33:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:02 np0005603435 python3.9[211544]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:33:03 np0005603435 python3.9[211622]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:04 np0005603435 python3.9[211774]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:33:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:33:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:04 np0005603435 python3.9[211899]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833983.5996552-1280-119195913204682/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:05 np0005603435 python3.9[212051]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:33:05 np0005603435 python3.9[212129]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:33:06
Jan 30 23:33:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:33:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:33:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['images', 'vms', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'default.rgw.log']
Jan 30 23:33:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:33:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:06 np0005603435 python3.9[212281]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:33:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:33:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:33:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:33:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:33:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:33:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:33:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:33:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:33:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:33:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:33:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:33:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:33:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:33:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:33:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:33:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:33:07 np0005603435 python3.9[212359]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:08 np0005603435 python3.9[212511]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:33:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:08 np0005603435 python3.9[212636]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769833987.456839-1319-1557852133930/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:33:09 np0005603435 python3.9[212788]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:10 np0005603435 python3.9[212940]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:33:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:11 np0005603435 python3.9[213095]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:11 np0005603435 python3.9[213247]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:33:12 np0005603435 python3.9[213467]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:33:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:33:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:33:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:33:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:33:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:33:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:33:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:33:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:33:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:33:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:33:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:33:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:33:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:13 np0005603435 podman[213671]: 2026-01-31 04:33:13.078641669 +0000 UTC m=+0.051283254 container create eab89656cfd818e8284fa93e893867b470abe8c5759726fc1f8bdf45f1c9f02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_black, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:33:13 np0005603435 systemd[1]: Started libpod-conmon-eab89656cfd818e8284fa93e893867b470abe8c5759726fc1f8bdf45f1c9f02a.scope.
Jan 30 23:33:13 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:33:13 np0005603435 podman[213671]: 2026-01-31 04:33:13.055120841 +0000 UTC m=+0.027762476 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:33:13 np0005603435 podman[213671]: 2026-01-31 04:33:13.167962794 +0000 UTC m=+0.140604389 container init eab89656cfd818e8284fa93e893867b470abe8c5759726fc1f8bdf45f1c9f02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_black, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 30 23:33:13 np0005603435 podman[213671]: 2026-01-31 04:33:13.175916954 +0000 UTC m=+0.148558539 container start eab89656cfd818e8284fa93e893867b470abe8c5759726fc1f8bdf45f1c9f02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_black, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 30 23:33:13 np0005603435 podman[213671]: 2026-01-31 04:33:13.179847022 +0000 UTC m=+0.152488617 container attach eab89656cfd818e8284fa93e893867b470abe8c5759726fc1f8bdf45f1c9f02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_black, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:33:13 np0005603435 distracted_black[213716]: 167 167
Jan 30 23:33:13 np0005603435 systemd[1]: libpod-eab89656cfd818e8284fa93e893867b470abe8c5759726fc1f8bdf45f1c9f02a.scope: Deactivated successfully.
Jan 30 23:33:13 np0005603435 podman[213671]: 2026-01-31 04:33:13.181894003 +0000 UTC m=+0.154535598 container died eab89656cfd818e8284fa93e893867b470abe8c5759726fc1f8bdf45f1c9f02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_black, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:33:13 np0005603435 systemd[1]: var-lib-containers-storage-overlay-1a0581aef47f35055f623253cbdd84e7c72be4acd1af3968494d2d9550287241-merged.mount: Deactivated successfully.
Jan 30 23:33:13 np0005603435 podman[213671]: 2026-01-31 04:33:13.23010394 +0000 UTC m=+0.202745525 container remove eab89656cfd818e8284fa93e893867b470abe8c5759726fc1f8bdf45f1c9f02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_black, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 30 23:33:13 np0005603435 systemd[1]: libpod-conmon-eab89656cfd818e8284fa93e893867b470abe8c5759726fc1f8bdf45f1c9f02a.scope: Deactivated successfully.
Jan 30 23:33:13 np0005603435 python3.9[213713]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:33:13 np0005603435 podman[213742]: 2026-01-31 04:33:13.417386106 +0000 UTC m=+0.055033398 container create 0e216f2e51e3fdcfbe0e4e8652d98ad10c17e91b03af3f4b1acd2e95e8900613 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_leakey, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:33:13 np0005603435 systemd[1]: Started libpod-conmon-0e216f2e51e3fdcfbe0e4e8652d98ad10c17e91b03af3f4b1acd2e95e8900613.scope.
Jan 30 23:33:13 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:33:13 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f236784c7375e847bf07cad910999ac98a351613859c08580a279be122d270e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:33:13 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f236784c7375e847bf07cad910999ac98a351613859c08580a279be122d270e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:33:13 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f236784c7375e847bf07cad910999ac98a351613859c08580a279be122d270e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:33:13 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f236784c7375e847bf07cad910999ac98a351613859c08580a279be122d270e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:33:13 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f236784c7375e847bf07cad910999ac98a351613859c08580a279be122d270e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:33:13 np0005603435 podman[213742]: 2026-01-31 04:33:13.400497444 +0000 UTC m=+0.038144726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:33:13 np0005603435 podman[213742]: 2026-01-31 04:33:13.506473685 +0000 UTC m=+0.144120967 container init 0e216f2e51e3fdcfbe0e4e8652d98ad10c17e91b03af3f4b1acd2e95e8900613 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_leakey, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 30 23:33:13 np0005603435 podman[213742]: 2026-01-31 04:33:13.513751528 +0000 UTC m=+0.151398820 container start 0e216f2e51e3fdcfbe0e4e8652d98ad10c17e91b03af3f4b1acd2e95e8900613 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_leakey, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:33:13 np0005603435 podman[213742]: 2026-01-31 04:33:13.517812249 +0000 UTC m=+0.155459551 container attach 0e216f2e51e3fdcfbe0e4e8652d98ad10c17e91b03af3f4b1acd2e95e8900613 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:33:13 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:33:13 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:33:13 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:33:13 np0005603435 cranky_leakey[213782]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:33:13 np0005603435 cranky_leakey[213782]: --> All data devices are unavailable
Jan 30 23:33:13 np0005603435 systemd[1]: libpod-0e216f2e51e3fdcfbe0e4e8652d98ad10c17e91b03af3f4b1acd2e95e8900613.scope: Deactivated successfully.
Jan 30 23:33:13 np0005603435 podman[213742]: 2026-01-31 04:33:13.986346044 +0000 UTC m=+0.623993356 container died 0e216f2e51e3fdcfbe0e4e8652d98ad10c17e91b03af3f4b1acd2e95e8900613 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:33:14 np0005603435 systemd[1]: var-lib-containers-storage-overlay-f236784c7375e847bf07cad910999ac98a351613859c08580a279be122d270e7-merged.mount: Deactivated successfully.
Jan 30 23:33:14 np0005603435 podman[213742]: 2026-01-31 04:33:14.043512104 +0000 UTC m=+0.681159426 container remove 0e216f2e51e3fdcfbe0e4e8652d98ad10c17e91b03af3f4b1acd2e95e8900613 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_leakey, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 30 23:33:14 np0005603435 python3.9[213923]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:14 np0005603435 systemd[1]: libpod-conmon-0e216f2e51e3fdcfbe0e4e8652d98ad10c17e91b03af3f4b1acd2e95e8900613.scope: Deactivated successfully.
Jan 30 23:33:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:33:14 np0005603435 podman[214104]: 2026-01-31 04:33:14.523985958 +0000 UTC m=+0.056716000 container create b3788653aab19aea0d5c07a0e67a903ffd10e3519527cb9f5859f3c9eaacc92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cori, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:33:14 np0005603435 systemd[1]: Started libpod-conmon-b3788653aab19aea0d5c07a0e67a903ffd10e3519527cb9f5859f3c9eaacc92a.scope.
Jan 30 23:33:14 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:33:14 np0005603435 podman[214104]: 2026-01-31 04:33:14.498753156 +0000 UTC m=+0.031483288 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:33:14 np0005603435 podman[214104]: 2026-01-31 04:33:14.609003386 +0000 UTC m=+0.141733518 container init b3788653aab19aea0d5c07a0e67a903ffd10e3519527cb9f5859f3c9eaacc92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cori, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 30 23:33:14 np0005603435 podman[214104]: 2026-01-31 04:33:14.618295288 +0000 UTC m=+0.151025360 container start b3788653aab19aea0d5c07a0e67a903ffd10e3519527cb9f5859f3c9eaacc92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cori, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:33:14 np0005603435 podman[214104]: 2026-01-31 04:33:14.623020376 +0000 UTC m=+0.155750508 container attach b3788653aab19aea0d5c07a0e67a903ffd10e3519527cb9f5859f3c9eaacc92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cori, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 30 23:33:14 np0005603435 stupefied_cori[214157]: 167 167
Jan 30 23:33:14 np0005603435 systemd[1]: libpod-b3788653aab19aea0d5c07a0e67a903ffd10e3519527cb9f5859f3c9eaacc92a.scope: Deactivated successfully.
Jan 30 23:33:14 np0005603435 podman[214104]: 2026-01-31 04:33:14.624288398 +0000 UTC m=+0.157018480 container died b3788653aab19aea0d5c07a0e67a903ffd10e3519527cb9f5859f3c9eaacc92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 30 23:33:14 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b83c56fcd845995187c88bc7f989d123e08eefc34d32de2527c06e5ef11b108c-merged.mount: Deactivated successfully.
Jan 30 23:33:14 np0005603435 podman[214104]: 2026-01-31 04:33:14.661833267 +0000 UTC m=+0.194563309 container remove b3788653aab19aea0d5c07a0e67a903ffd10e3519527cb9f5859f3c9eaacc92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_cori, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:33:14 np0005603435 systemd[1]: libpod-conmon-b3788653aab19aea0d5c07a0e67a903ffd10e3519527cb9f5859f3c9eaacc92a.scope: Deactivated successfully.
Jan 30 23:33:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:14 np0005603435 python3.9[214179]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:33:14 np0005603435 podman[214198]: 2026-01-31 04:33:14.837624986 +0000 UTC m=+0.053819168 container create 946b1da95f0d9c97f0df3c83ad43ae56cd9fcefa7c6f6d236f9949fdd2744498 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 30 23:33:14 np0005603435 systemd[1]: Started libpod-conmon-946b1da95f0d9c97f0df3c83ad43ae56cd9fcefa7c6f6d236f9949fdd2744498.scope.
Jan 30 23:33:14 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:33:14 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272b718c15c30477360766411806e03effef301f9e33fbc5d9239b86b2544b81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:33:14 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272b718c15c30477360766411806e03effef301f9e33fbc5d9239b86b2544b81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:33:14 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272b718c15c30477360766411806e03effef301f9e33fbc5d9239b86b2544b81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:33:14 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272b718c15c30477360766411806e03effef301f9e33fbc5d9239b86b2544b81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:33:14 np0005603435 podman[214198]: 2026-01-31 04:33:14.818785924 +0000 UTC m=+0.034980086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:33:14 np0005603435 podman[214198]: 2026-01-31 04:33:14.923038553 +0000 UTC m=+0.139232715 container init 946b1da95f0d9c97f0df3c83ad43ae56cd9fcefa7c6f6d236f9949fdd2744498 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 30 23:33:14 np0005603435 podman[214198]: 2026-01-31 04:33:14.928567962 +0000 UTC m=+0.144762124 container start 946b1da95f0d9c97f0df3c83ad43ae56cd9fcefa7c6f6d236f9949fdd2744498 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:33:14 np0005603435 podman[214198]: 2026-01-31 04:33:14.93169564 +0000 UTC m=+0.147889792 container attach 946b1da95f0d9c97f0df3c83ad43ae56cd9fcefa7c6f6d236f9949fdd2744498 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]: {
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:    "0": [
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:        {
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "devices": [
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "/dev/loop3"
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            ],
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "lv_name": "ceph_lv0",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "lv_size": "21470642176",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "name": "ceph_lv0",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "tags": {
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.cluster_name": "ceph",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.crush_device_class": "",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.encrypted": "0",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.objectstore": "bluestore",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.osd_id": "0",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.type": "block",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.vdo": "0",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.with_tpm": "0"
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            },
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "type": "block",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "vg_name": "ceph_vg0"
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:        }
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:    ],
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:    "1": [
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:        {
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "devices": [
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "/dev/loop4"
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            ],
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "lv_name": "ceph_lv1",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "lv_size": "21470642176",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "name": "ceph_lv1",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "tags": {
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.cluster_name": "ceph",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.crush_device_class": "",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.encrypted": "0",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.objectstore": "bluestore",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.osd_id": "1",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.type": "block",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.vdo": "0",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.with_tpm": "0"
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            },
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "type": "block",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "vg_name": "ceph_vg1"
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:        }
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:    ],
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:    "2": [
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:        {
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "devices": [
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "/dev/loop5"
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            ],
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "lv_name": "ceph_lv2",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "lv_size": "21470642176",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "name": "ceph_lv2",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "tags": {
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.cluster_name": "ceph",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.crush_device_class": "",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.encrypted": "0",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.objectstore": "bluestore",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.osd_id": "2",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.type": "block",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.vdo": "0",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:                "ceph.with_tpm": "0"
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            },
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "type": "block",
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:            "vg_name": "ceph_vg2"
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:        }
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]:    ]
Jan 30 23:33:15 np0005603435 crazy_pascal[214237]: }
Jan 30 23:33:15 np0005603435 systemd[1]: libpod-946b1da95f0d9c97f0df3c83ad43ae56cd9fcefa7c6f6d236f9949fdd2744498.scope: Deactivated successfully.
Jan 30 23:33:15 np0005603435 podman[214198]: 2026-01-31 04:33:15.199732767 +0000 UTC m=+0.415926949 container died 946b1da95f0d9c97f0df3c83ad43ae56cd9fcefa7c6f6d236f9949fdd2744498 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:33:15 np0005603435 systemd[1]: var-lib-containers-storage-overlay-272b718c15c30477360766411806e03effef301f9e33fbc5d9239b86b2544b81-merged.mount: Deactivated successfully.
Jan 30 23:33:15 np0005603435 podman[214198]: 2026-01-31 04:33:15.244416855 +0000 UTC m=+0.460611037 container remove 946b1da95f0d9c97f0df3c83ad43ae56cd9fcefa7c6f6d236f9949fdd2744498 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:33:15 np0005603435 systemd[1]: libpod-conmon-946b1da95f0d9c97f0df3c83ad43ae56cd9fcefa7c6f6d236f9949fdd2744498.scope: Deactivated successfully.
Jan 30 23:33:15 np0005603435 python3.9[214343]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769833994.2708077-1391-152057837456124/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:15 np0005603435 podman[214498]: 2026-01-31 04:33:15.674097208 +0000 UTC m=+0.058774682 container create 53dba27554bc49ef25b9abf42bb383694ea1edfd7fd1253b1eb2459d0319e21b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_kare, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 30 23:33:15 np0005603435 systemd[1]: Started libpod-conmon-53dba27554bc49ef25b9abf42bb383694ea1edfd7fd1253b1eb2459d0319e21b.scope.
Jan 30 23:33:15 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:33:15 np0005603435 podman[214498]: 2026-01-31 04:33:15.650415325 +0000 UTC m=+0.035092839 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:33:15 np0005603435 podman[214498]: 2026-01-31 04:33:15.755624798 +0000 UTC m=+0.140302292 container init 53dba27554bc49ef25b9abf42bb383694ea1edfd7fd1253b1eb2459d0319e21b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_kare, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 30 23:33:15 np0005603435 podman[214498]: 2026-01-31 04:33:15.764161512 +0000 UTC m=+0.148838986 container start 53dba27554bc49ef25b9abf42bb383694ea1edfd7fd1253b1eb2459d0319e21b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_kare, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 30 23:33:15 np0005603435 podman[214498]: 2026-01-31 04:33:15.768779827 +0000 UTC m=+0.153457301 container attach 53dba27554bc49ef25b9abf42bb383694ea1edfd7fd1253b1eb2459d0319e21b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_kare, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 30 23:33:15 np0005603435 hungry_kare[214538]: 167 167
Jan 30 23:33:15 np0005603435 systemd[1]: libpod-53dba27554bc49ef25b9abf42bb383694ea1edfd7fd1253b1eb2459d0319e21b.scope: Deactivated successfully.
Jan 30 23:33:15 np0005603435 podman[214498]: 2026-01-31 04:33:15.770779957 +0000 UTC m=+0.155457421 container died 53dba27554bc49ef25b9abf42bb383694ea1edfd7fd1253b1eb2459d0319e21b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_kare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 30 23:33:15 np0005603435 systemd[1]: var-lib-containers-storage-overlay-79b6000a2052596ca9b0a1f3f90051dfa880f18bdb1af99663d6ce826a8e72ed-merged.mount: Deactivated successfully.
Jan 30 23:33:15 np0005603435 podman[214498]: 2026-01-31 04:33:15.816262285 +0000 UTC m=+0.200939759 container remove 53dba27554bc49ef25b9abf42bb383694ea1edfd7fd1253b1eb2459d0319e21b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_kare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 30 23:33:15 np0005603435 systemd[1]: libpod-conmon-53dba27554bc49ef25b9abf42bb383694ea1edfd7fd1253b1eb2459d0319e21b.scope: Deactivated successfully.
Jan 30 23:33:15 np0005603435 podman[214615]: 2026-01-31 04:33:15.970382012 +0000 UTC m=+0.055013497 container create ed7ce94620edf3e2d485d67a653b9b504ecd919c2966cbbcbeac3002f984e815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:33:16 np0005603435 systemd[1]: Started libpod-conmon-ed7ce94620edf3e2d485d67a653b9b504ecd919c2966cbbcbeac3002f984e815.scope.
Jan 30 23:33:16 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:33:16 np0005603435 podman[214615]: 2026-01-31 04:33:15.947803207 +0000 UTC m=+0.032434742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:33:16 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb7799318ee57f893936dc03b57dfeeda1f00c9d1bddd908c56613d45e60d74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:33:16 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb7799318ee57f893936dc03b57dfeeda1f00c9d1bddd908c56613d45e60d74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:33:16 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb7799318ee57f893936dc03b57dfeeda1f00c9d1bddd908c56613d45e60d74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:33:16 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb7799318ee57f893936dc03b57dfeeda1f00c9d1bddd908c56613d45e60d74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:33:16 np0005603435 python3.9[214609]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:33:16 np0005603435 podman[214615]: 2026-01-31 04:33:16.070902858 +0000 UTC m=+0.155534353 container init ed7ce94620edf3e2d485d67a653b9b504ecd919c2966cbbcbeac3002f984e815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_nightingale, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:33:16 np0005603435 podman[214615]: 2026-01-31 04:33:16.080814056 +0000 UTC m=+0.165445541 container start ed7ce94620edf3e2d485d67a653b9b504ecd919c2966cbbcbeac3002f984e815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_nightingale, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:33:16 np0005603435 podman[214615]: 2026-01-31 04:33:16.084779655 +0000 UTC m=+0.169411110 container attach ed7ce94620edf3e2d485d67a653b9b504ecd919c2966cbbcbeac3002f984e815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_nightingale, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:33:16 np0005603435 python3.9[214776]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769833995.5309803-1406-124567410973354/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:16 np0005603435 lvm[214857]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:33:16 np0005603435 lvm[214857]: VG ceph_vg0 finished
Jan 30 23:33:16 np0005603435 lvm[214858]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:33:16 np0005603435 lvm[214858]: VG ceph_vg1 finished
Jan 30 23:33:16 np0005603435 lvm[214860]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:33:16 np0005603435 lvm[214860]: VG ceph_vg2 finished
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:16 np0005603435 angry_nightingale[214632]: {}
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:33:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:33:16 np0005603435 systemd[1]: libpod-ed7ce94620edf3e2d485d67a653b9b504ecd919c2966cbbcbeac3002f984e815.scope: Deactivated successfully.
Jan 30 23:33:16 np0005603435 podman[214615]: 2026-01-31 04:33:16.876347563 +0000 UTC m=+0.960979038 container died ed7ce94620edf3e2d485d67a653b9b504ecd919c2966cbbcbeac3002f984e815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_nightingale, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 30 23:33:16 np0005603435 systemd[1]: libpod-ed7ce94620edf3e2d485d67a653b9b504ecd919c2966cbbcbeac3002f984e815.scope: Consumed 1.127s CPU time.
Jan 30 23:33:16 np0005603435 systemd[1]: var-lib-containers-storage-overlay-bdb7799318ee57f893936dc03b57dfeeda1f00c9d1bddd908c56613d45e60d74-merged.mount: Deactivated successfully.
Jan 30 23:33:16 np0005603435 podman[214615]: 2026-01-31 04:33:16.919018561 +0000 UTC m=+1.003650026 container remove ed7ce94620edf3e2d485d67a653b9b504ecd919c2966cbbcbeac3002f984e815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:33:16 np0005603435 systemd[1]: libpod-conmon-ed7ce94620edf3e2d485d67a653b9b504ecd919c2966cbbcbeac3002f984e815.scope: Deactivated successfully.
Jan 30 23:33:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:33:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:33:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:33:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:33:17 np0005603435 python3.9[215026]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:33:17 np0005603435 python3.9[215149]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769833996.7849507-1421-85664125154425/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:33:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:33:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:18 np0005603435 python3.9[215301]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:33:18 np0005603435 systemd[1]: Reloading.
Jan 30 23:33:18 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:33:18 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:33:19 np0005603435 systemd[1]: Reached target edpm_libvirt.target.
Jan 30 23:33:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:33:19 np0005603435 python3.9[215491]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 30 23:33:20 np0005603435 systemd[1]: Reloading.
Jan 30 23:33:20 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:33:20 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:33:20 np0005603435 systemd[1]: Reloading.
Jan 30 23:33:20 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:33:20 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:33:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:21 np0005603435 systemd[1]: session-49.scope: Deactivated successfully.
Jan 30 23:33:21 np0005603435 systemd[1]: session-49.scope: Consumed 3min 22.632s CPU time.
Jan 30 23:33:21 np0005603435 systemd-logind[816]: Session 49 logged out. Waiting for processes to exit.
Jan 30 23:33:21 np0005603435 systemd-logind[816]: Removed session 49.
Jan 30 23:33:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:23 np0005603435 podman[215588]: 2026-01-31 04:33:23.155087932 +0000 UTC m=+0.115498001 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:33:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:33:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:26 np0005603435 systemd-logind[816]: New session 50 of user zuul.
Jan 30 23:33:26 np0005603435 systemd[1]: Started Session 50 of User zuul.
Jan 30 23:33:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:27 np0005603435 python3.9[215768]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:33:28 np0005603435 podman[215849]: 2026-01-31 04:33:28.09644105 +0000 UTC m=+0.065348544 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 30 23:33:28 np0005603435 python3.9[215943]: ansible-ansible.builtin.service_facts Invoked
Jan 30 23:33:28 np0005603435 network[215960]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 30 23:33:28 np0005603435 network[215961]: 'network-scripts' will be removed from distribution in near future.
Jan 30 23:33:28 np0005603435 network[215962]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 30 23:33:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:33:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:32 np0005603435 python3.9[216234]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 30 23:33:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:33 np0005603435 python3.9[216318]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:33:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:33:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:33:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:33:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:33:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:33:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:33:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:33:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:33:39 np0005603435 python3.9[216471]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:33:40 np0005603435 python3.9[216623]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:33:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:41 np0005603435 python3.9[216776]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:33:42 np0005603435 python3.9[216928]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:33:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:42 np0005603435 python3.9[217081]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:33:43 np0005603435 python3.9[217204]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769834022.2238114-90-173112593284795/.source.iscsi _original_basename=.oj6ic9hp follow=False checksum=5cb1e7565ac27e521ff12237cf446cbe9692049d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:33:44 np0005603435 python3.9[217356]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:45 np0005603435 python3.9[217508]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:46 np0005603435 python3.9[217660]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:33:46 np0005603435 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 30 23:33:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:47 np0005603435 python3.9[217816]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:33:47 np0005603435 systemd[1]: Reloading.
Jan 30 23:33:47 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:33:47 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:33:47 np0005603435 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 30 23:33:47 np0005603435 systemd[1]: Starting Open-iSCSI...
Jan 30 23:33:47 np0005603435 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 23:33:47 np0005603435 systemd[1]: Started Open-iSCSI.
Jan 30 23:33:47 np0005603435 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 30 23:33:47 np0005603435 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 30 23:33:48 np0005603435 python3.9[218014]: ansible-ansible.builtin.service_facts Invoked
Jan 30 23:33:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:48 np0005603435 network[218031]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 30 23:33:48 np0005603435 network[218032]: 'network-scripts' will be removed from distribution in near future.
Jan 30 23:33:48 np0005603435 network[218033]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 30 23:33:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:33:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:52 np0005603435 python3.9[218306]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:33:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:54 np0005603435 podman[218310]: 2026-01-31 04:33:54.137965394 +0000 UTC m=+0.100191556 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 30 23:33:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:33:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:55 np0005603435 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 30 23:33:55 np0005603435 systemd[1]: Starting man-db-cache-update.service...
Jan 30 23:33:55 np0005603435 systemd[1]: Reloading.
Jan 30 23:33:55 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:33:55 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:33:55 np0005603435 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 30 23:33:55 np0005603435 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 30 23:33:55 np0005603435 systemd[1]: Finished man-db-cache-update.service.
Jan 30 23:33:55 np0005603435 systemd[1]: run-r8f1f35508a0d40b7954db4bbfea5f6e7.service: Deactivated successfully.
Jan 30 23:33:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:33:55.897 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:33:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:33:55.899 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:33:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:33:55.899 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:33:56 np0005603435 python3.9[218648]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 30 23:33:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:57 np0005603435 python3.9[218800]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 30 23:33:58 np0005603435 python3.9[218956]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:33:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:33:58 np0005603435 podman[219051]: 2026-01-31 04:33:58.765185318 +0000 UTC m=+0.074459635 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 30 23:33:58 np0005603435 python3.9[219092]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769834037.6898022-178-232356233121309/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:33:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:33:59 np0005603435 python3.9[219250]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:00 np0005603435 python3.9[219402]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:34:00 np0005603435 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 23:34:00 np0005603435 systemd[1]: Stopped Load Kernel Modules.
Jan 30 23:34:00 np0005603435 systemd[1]: Stopping Load Kernel Modules...
Jan 30 23:34:00 np0005603435 systemd[1]: Starting Load Kernel Modules...
Jan 30 23:34:00 np0005603435 systemd[1]: Finished Load Kernel Modules.
Jan 30 23:34:01 np0005603435 python3.9[219558]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:34:02 np0005603435 python3.9[219711]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:34:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:03 np0005603435 python3.9[219863]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:34:03 np0005603435 python3.9[219986]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769834042.6595557-229-181522261053888/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:34:04 np0005603435 python3.9[220138]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:34:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:05 np0005603435 python3.9[220291]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:06 np0005603435 python3.9[220443]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:34:06
Jan 30 23:34:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:34:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:34:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control']
Jan 30 23:34:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:34:06 np0005603435 python3.9[220595]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:34:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:34:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:34:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:34:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:34:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:34:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:34:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:34:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:34:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:34:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:34:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:34:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:34:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:34:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:34:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:34:07 np0005603435 python3.9[220747]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:08 np0005603435 python3.9[220899]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:08 np0005603435 python3.9[221051]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:34:09 np0005603435 python3.9[221203]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:10 np0005603435 python3.9[221355]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:34:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:10 np0005603435 python3.9[221509]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:34:11 np0005603435 python3.9[221662]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:34:11 np0005603435 systemd[1]: Listening on multipathd control socket.
Jan 30 23:34:12 np0005603435 python3.9[221818]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:34:12 np0005603435 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 30 23:34:12 np0005603435 udevadm[221823]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 30 23:34:12 np0005603435 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 30 23:34:12 np0005603435 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 30 23:34:12 np0005603435 multipathd[221826]: --------start up--------
Jan 30 23:34:12 np0005603435 multipathd[221826]: read /etc/multipath.conf
Jan 30 23:34:12 np0005603435 multipathd[221826]: path checkers start up
Jan 30 23:34:12 np0005603435 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 30 23:34:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:13 np0005603435 python3.9[221985]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 30 23:34:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:34:14 np0005603435 python3.9[222137]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 30 23:34:14 np0005603435 kernel: Key type psk registered
Jan 30 23:34:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:15 np0005603435 python3.9[222301]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:34:15 np0005603435 python3.9[222424]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769834054.7825174-359-119614378619026/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:16 np0005603435 python3.9[222576]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:34:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:34:17 np0005603435 python3.9[222778]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:34:17 np0005603435 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 23:34:17 np0005603435 systemd[1]: Stopped Load Kernel Modules.
Jan 30 23:34:17 np0005603435 systemd[1]: Stopping Load Kernel Modules...
Jan 30 23:34:17 np0005603435 systemd[1]: Starting Load Kernel Modules...
Jan 30 23:34:17 np0005603435 systemd[1]: Finished Load Kernel Modules.
Jan 30 23:34:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:34:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:34:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:34:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:34:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:34:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:34:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:34:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:34:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:34:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:34:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:34:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:34:18 np0005603435 podman[223009]: 2026-01-31 04:34:18.111413431 +0000 UTC m=+0.040920607 container create a7f44a4094d2829d6a9f2546317ac610adaa94f5f20df144306790877d8c660e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 30 23:34:18 np0005603435 systemd[1]: Started libpod-conmon-a7f44a4094d2829d6a9f2546317ac610adaa94f5f20df144306790877d8c660e.scope.
Jan 30 23:34:18 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:34:18 np0005603435 podman[223009]: 2026-01-31 04:34:18.170325321 +0000 UTC m=+0.099832537 container init a7f44a4094d2829d6a9f2546317ac610adaa94f5f20df144306790877d8c660e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 30 23:34:18 np0005603435 podman[223009]: 2026-01-31 04:34:18.174826935 +0000 UTC m=+0.104334081 container start a7f44a4094d2829d6a9f2546317ac610adaa94f5f20df144306790877d8c660e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mendeleev, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 30 23:34:18 np0005603435 podman[223009]: 2026-01-31 04:34:18.178292422 +0000 UTC m=+0.107799598 container attach a7f44a4094d2829d6a9f2546317ac610adaa94f5f20df144306790877d8c660e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:34:18 np0005603435 gracious_mendeleev[223044]: 167 167
Jan 30 23:34:18 np0005603435 systemd[1]: libpod-a7f44a4094d2829d6a9f2546317ac610adaa94f5f20df144306790877d8c660e.scope: Deactivated successfully.
Jan 30 23:34:18 np0005603435 podman[223009]: 2026-01-31 04:34:18.180429957 +0000 UTC m=+0.109937103 container died a7f44a4094d2829d6a9f2546317ac610adaa94f5f20df144306790877d8c660e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mendeleev, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:34:18 np0005603435 podman[223009]: 2026-01-31 04:34:18.097439477 +0000 UTC m=+0.026946653 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:34:18 np0005603435 systemd[1]: var-lib-containers-storage-overlay-7a0d137e47188f421dd5604854de6fadac78bd9599286964890f78108eaffe83-merged.mount: Deactivated successfully.
Jan 30 23:34:18 np0005603435 podman[223009]: 2026-01-31 04:34:18.227487857 +0000 UTC m=+0.156995043 container remove a7f44a4094d2829d6a9f2546317ac610adaa94f5f20df144306790877d8c660e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mendeleev, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 30 23:34:18 np0005603435 systemd[1]: libpod-conmon-a7f44a4094d2829d6a9f2546317ac610adaa94f5f20df144306790877d8c660e.scope: Deactivated successfully.
Jan 30 23:34:18 np0005603435 python3.9[223039]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 30 23:34:18 np0005603435 podman[223070]: 2026-01-31 04:34:18.389214379 +0000 UTC m=+0.043650646 container create 40a0ec5f55aacd6c72e40a48f55abdfd75bac04fdcbd9bf5a0558e00d6fe02eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lichterman, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:34:18 np0005603435 systemd[1]: Started libpod-conmon-40a0ec5f55aacd6c72e40a48f55abdfd75bac04fdcbd9bf5a0558e00d6fe02eb.scope.
Jan 30 23:34:18 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:34:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc671ceec4185dae9bd38204b2566add5c96485a0410da684fd7d36354c1b553/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:34:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc671ceec4185dae9bd38204b2566add5c96485a0410da684fd7d36354c1b553/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:34:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc671ceec4185dae9bd38204b2566add5c96485a0410da684fd7d36354c1b553/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:34:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc671ceec4185dae9bd38204b2566add5c96485a0410da684fd7d36354c1b553/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:34:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc671ceec4185dae9bd38204b2566add5c96485a0410da684fd7d36354c1b553/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:34:18 np0005603435 podman[223070]: 2026-01-31 04:34:18.375992024 +0000 UTC m=+0.030428291 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:34:18 np0005603435 podman[223070]: 2026-01-31 04:34:18.476510147 +0000 UTC m=+0.130946414 container init 40a0ec5f55aacd6c72e40a48f55abdfd75bac04fdcbd9bf5a0558e00d6fe02eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:34:18 np0005603435 podman[223070]: 2026-01-31 04:34:18.483953275 +0000 UTC m=+0.138389542 container start 40a0ec5f55aacd6c72e40a48f55abdfd75bac04fdcbd9bf5a0558e00d6fe02eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lichterman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 30 23:34:18 np0005603435 podman[223070]: 2026-01-31 04:34:18.491812284 +0000 UTC m=+0.146248571 container attach 40a0ec5f55aacd6c72e40a48f55abdfd75bac04fdcbd9bf5a0558e00d6fe02eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lichterman, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:34:18 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:34:18 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:34:18 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:34:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:18 np0005603435 suspicious_lichterman[223088]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:34:18 np0005603435 suspicious_lichterman[223088]: --> All data devices are unavailable
Jan 30 23:34:18 np0005603435 systemd[1]: libpod-40a0ec5f55aacd6c72e40a48f55abdfd75bac04fdcbd9bf5a0558e00d6fe02eb.scope: Deactivated successfully.
Jan 30 23:34:18 np0005603435 podman[223070]: 2026-01-31 04:34:18.96045172 +0000 UTC m=+0.614888027 container died 40a0ec5f55aacd6c72e40a48f55abdfd75bac04fdcbd9bf5a0558e00d6fe02eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:34:18 np0005603435 systemd[1]: var-lib-containers-storage-overlay-fc671ceec4185dae9bd38204b2566add5c96485a0410da684fd7d36354c1b553-merged.mount: Deactivated successfully.
Jan 30 23:34:19 np0005603435 podman[223070]: 2026-01-31 04:34:19.009937062 +0000 UTC m=+0.664373359 container remove 40a0ec5f55aacd6c72e40a48f55abdfd75bac04fdcbd9bf5a0558e00d6fe02eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 30 23:34:19 np0005603435 systemd[1]: libpod-conmon-40a0ec5f55aacd6c72e40a48f55abdfd75bac04fdcbd9bf5a0558e00d6fe02eb.scope: Deactivated successfully.
Jan 30 23:34:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:34:19 np0005603435 podman[223183]: 2026-01-31 04:34:19.462641204 +0000 UTC m=+0.055054874 container create b1d25fa63b6433c1e929c7c74ed273c9b252e7d897fe7fcff3308053937e50de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_gagarin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 30 23:34:19 np0005603435 systemd[1]: Started libpod-conmon-b1d25fa63b6433c1e929c7c74ed273c9b252e7d897fe7fcff3308053937e50de.scope.
Jan 30 23:34:19 np0005603435 podman[223183]: 2026-01-31 04:34:19.439902379 +0000 UTC m=+0.032316099 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:34:19 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:34:19 np0005603435 podman[223183]: 2026-01-31 04:34:19.55931644 +0000 UTC m=+0.151730150 container init b1d25fa63b6433c1e929c7c74ed273c9b252e7d897fe7fcff3308053937e50de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:34:19 np0005603435 podman[223183]: 2026-01-31 04:34:19.567404164 +0000 UTC m=+0.159817834 container start b1d25fa63b6433c1e929c7c74ed273c9b252e7d897fe7fcff3308053937e50de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_gagarin, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:34:19 np0005603435 podman[223183]: 2026-01-31 04:34:19.571526429 +0000 UTC m=+0.163940159 container attach b1d25fa63b6433c1e929c7c74ed273c9b252e7d897fe7fcff3308053937e50de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_gagarin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:34:19 np0005603435 romantic_gagarin[223201]: 167 167
Jan 30 23:34:19 np0005603435 systemd[1]: libpod-b1d25fa63b6433c1e929c7c74ed273c9b252e7d897fe7fcff3308053937e50de.scope: Deactivated successfully.
Jan 30 23:34:19 np0005603435 podman[223183]: 2026-01-31 04:34:19.573868578 +0000 UTC m=+0.166282248 container died b1d25fa63b6433c1e929c7c74ed273c9b252e7d897fe7fcff3308053937e50de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_gagarin, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:34:19 np0005603435 systemd[1]: var-lib-containers-storage-overlay-7990c2613bb80b1d3f08d718c48a6a808fcfed769b68462f310a860b6806c7d9-merged.mount: Deactivated successfully.
Jan 30 23:34:19 np0005603435 podman[223183]: 2026-01-31 04:34:19.621028221 +0000 UTC m=+0.213441881 container remove b1d25fa63b6433c1e929c7c74ed273c9b252e7d897fe7fcff3308053937e50de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:34:19 np0005603435 systemd[1]: libpod-conmon-b1d25fa63b6433c1e929c7c74ed273c9b252e7d897fe7fcff3308053937e50de.scope: Deactivated successfully.
Jan 30 23:34:19 np0005603435 podman[223225]: 2026-01-31 04:34:19.778057834 +0000 UTC m=+0.048302943 container create a29fcb93bd3df40c5d68c0e1ad62dcb0d353213b3dd62483d4588bc8fff28af4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chandrasekhar, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:34:19 np0005603435 systemd[1]: Started libpod-conmon-a29fcb93bd3df40c5d68c0e1ad62dcb0d353213b3dd62483d4588bc8fff28af4.scope.
Jan 30 23:34:19 np0005603435 podman[223225]: 2026-01-31 04:34:19.753054421 +0000 UTC m=+0.023299580 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:34:19 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:34:19 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df41dcd4f51346ad70297060547bcab093c5848fdbcf45ab607d58fc63026a2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:34:19 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df41dcd4f51346ad70297060547bcab093c5848fdbcf45ab607d58fc63026a2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:34:19 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df41dcd4f51346ad70297060547bcab093c5848fdbcf45ab607d58fc63026a2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:34:19 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df41dcd4f51346ad70297060547bcab093c5848fdbcf45ab607d58fc63026a2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:34:19 np0005603435 podman[223225]: 2026-01-31 04:34:19.8727901 +0000 UTC m=+0.143035259 container init a29fcb93bd3df40c5d68c0e1ad62dcb0d353213b3dd62483d4588bc8fff28af4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 30 23:34:19 np0005603435 podman[223225]: 2026-01-31 04:34:19.880541016 +0000 UTC m=+0.150786125 container start a29fcb93bd3df40c5d68c0e1ad62dcb0d353213b3dd62483d4588bc8fff28af4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chandrasekhar, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 30 23:34:19 np0005603435 podman[223225]: 2026-01-31 04:34:19.884958748 +0000 UTC m=+0.155203917 container attach a29fcb93bd3df40c5d68c0e1ad62dcb0d353213b3dd62483d4588bc8fff28af4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chandrasekhar, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]: {
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:    "0": [
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:        {
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "devices": [
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "/dev/loop3"
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            ],
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "lv_name": "ceph_lv0",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "lv_size": "21470642176",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "name": "ceph_lv0",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "tags": {
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.cluster_name": "ceph",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.crush_device_class": "",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.encrypted": "0",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.objectstore": "bluestore",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.osd_id": "0",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.type": "block",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.vdo": "0",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.with_tpm": "0"
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            },
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "type": "block",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "vg_name": "ceph_vg0"
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:        }
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:    ],
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:    "1": [
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:        {
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "devices": [
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "/dev/loop4"
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            ],
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "lv_name": "ceph_lv1",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "lv_size": "21470642176",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "name": "ceph_lv1",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "tags": {
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.cluster_name": "ceph",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.crush_device_class": "",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.encrypted": "0",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.objectstore": "bluestore",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.osd_id": "1",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.type": "block",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.vdo": "0",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.with_tpm": "0"
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            },
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "type": "block",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "vg_name": "ceph_vg1"
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:        }
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:    ],
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:    "2": [
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:        {
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "devices": [
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "/dev/loop5"
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            ],
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "lv_name": "ceph_lv2",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "lv_size": "21470642176",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "name": "ceph_lv2",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "tags": {
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.cluster_name": "ceph",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.crush_device_class": "",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.encrypted": "0",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.objectstore": "bluestore",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.osd_id": "2",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.type": "block",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.vdo": "0",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:                "ceph.with_tpm": "0"
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            },
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "type": "block",
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:            "vg_name": "ceph_vg2"
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:        }
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]:    ]
Jan 30 23:34:20 np0005603435 loving_chandrasekhar[223242]: }
Jan 30 23:34:20 np0005603435 systemd[1]: libpod-a29fcb93bd3df40c5d68c0e1ad62dcb0d353213b3dd62483d4588bc8fff28af4.scope: Deactivated successfully.
Jan 30 23:34:20 np0005603435 podman[223225]: 2026-01-31 04:34:20.187620785 +0000 UTC m=+0.457865904 container died a29fcb93bd3df40c5d68c0e1ad62dcb0d353213b3dd62483d4588bc8fff28af4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chandrasekhar, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:34:20 np0005603435 systemd[1]: var-lib-containers-storage-overlay-df41dcd4f51346ad70297060547bcab093c5848fdbcf45ab607d58fc63026a2c-merged.mount: Deactivated successfully.
Jan 30 23:34:20 np0005603435 podman[223225]: 2026-01-31 04:34:20.237136468 +0000 UTC m=+0.507381547 container remove a29fcb93bd3df40c5d68c0e1ad62dcb0d353213b3dd62483d4588bc8fff28af4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 30 23:34:20 np0005603435 systemd[1]: libpod-conmon-a29fcb93bd3df40c5d68c0e1ad62dcb0d353213b3dd62483d4588bc8fff28af4.scope: Deactivated successfully.
Jan 30 23:34:20 np0005603435 systemd[1]: Reloading.
Jan 30 23:34:20 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:34:20 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:34:20 np0005603435 podman[223361]: 2026-01-31 04:34:20.618295571 +0000 UTC m=+0.049929724 container create a4d3a7fde9516cf7d8c71914e6d929eb259d22893eaabb241f1f2b0a26d218a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wu, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 30 23:34:20 np0005603435 systemd[1]: Started libpod-conmon-a4d3a7fde9516cf7d8c71914e6d929eb259d22893eaabb241f1f2b0a26d218a2.scope.
Jan 30 23:34:20 np0005603435 systemd[1]: Reloading.
Jan 30 23:34:20 np0005603435 podman[223361]: 2026-01-31 04:34:20.598401867 +0000 UTC m=+0.030035970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:34:20 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:34:20 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:34:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:20 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:34:20 np0005603435 podman[223361]: 2026-01-31 04:34:20.979657673 +0000 UTC m=+0.411291716 container init a4d3a7fde9516cf7d8c71914e6d929eb259d22893eaabb241f1f2b0a26d218a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wu, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 30 23:34:20 np0005603435 podman[223361]: 2026-01-31 04:34:20.989646545 +0000 UTC m=+0.421280558 container start a4d3a7fde9516cf7d8c71914e6d929eb259d22893eaabb241f1f2b0a26d218a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 30 23:34:20 np0005603435 podman[223361]: 2026-01-31 04:34:20.993823801 +0000 UTC m=+0.425457864 container attach a4d3a7fde9516cf7d8c71914e6d929eb259d22893eaabb241f1f2b0a26d218a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:34:20 np0005603435 nostalgic_wu[223379]: 167 167
Jan 30 23:34:20 np0005603435 systemd[1]: libpod-a4d3a7fde9516cf7d8c71914e6d929eb259d22893eaabb241f1f2b0a26d218a2.scope: Deactivated successfully.
Jan 30 23:34:20 np0005603435 podman[223361]: 2026-01-31 04:34:20.995939075 +0000 UTC m=+0.427573118 container died a4d3a7fde9516cf7d8c71914e6d929eb259d22893eaabb241f1f2b0a26d218a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 30 23:34:21 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0aee8e1963b514c42f7c9da9bc33f54722c415ea4790186791fc773fb0c1924f-merged.mount: Deactivated successfully.
Jan 30 23:34:21 np0005603435 podman[223361]: 2026-01-31 04:34:21.04082913 +0000 UTC m=+0.472463133 container remove a4d3a7fde9516cf7d8c71914e6d929eb259d22893eaabb241f1f2b0a26d218a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 30 23:34:21 np0005603435 systemd[1]: libpod-conmon-a4d3a7fde9516cf7d8c71914e6d929eb259d22893eaabb241f1f2b0a26d218a2.scope: Deactivated successfully.
Jan 30 23:34:21 np0005603435 systemd-logind[816]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 30 23:34:21 np0005603435 systemd-logind[816]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 30 23:34:21 np0005603435 lvm[223483]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:34:21 np0005603435 lvm[223483]: VG ceph_vg1 finished
Jan 30 23:34:21 np0005603435 lvm[223484]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:34:21 np0005603435 lvm[223484]: VG ceph_vg2 finished
Jan 30 23:34:21 np0005603435 lvm[223485]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:34:21 np0005603435 lvm[223485]: VG ceph_vg0 finished
Jan 30 23:34:21 np0005603435 podman[223471]: 2026-01-31 04:34:21.20485268 +0000 UTC m=+0.064137554 container create 21d3b6535565fb5de151e951404833bac666065efc636ef5263591f11f5b8683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hugle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:34:21 np0005603435 systemd[1]: Started libpod-conmon-21d3b6535565fb5de151e951404833bac666065efc636ef5263591f11f5b8683.scope.
Jan 30 23:34:21 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:34:21 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c71e58a16d054f5d8d0637b7462c22e5bf05b16a6a088d5a946edcde02a89e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:34:21 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c71e58a16d054f5d8d0637b7462c22e5bf05b16a6a088d5a946edcde02a89e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:34:21 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c71e58a16d054f5d8d0637b7462c22e5bf05b16a6a088d5a946edcde02a89e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:34:21 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c71e58a16d054f5d8d0637b7462c22e5bf05b16a6a088d5a946edcde02a89e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:34:21 np0005603435 podman[223471]: 2026-01-31 04:34:21.178839202 +0000 UTC m=+0.038124106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:34:21 np0005603435 podman[223471]: 2026-01-31 04:34:21.2965659 +0000 UTC m=+0.155850824 container init 21d3b6535565fb5de151e951404833bac666065efc636ef5263591f11f5b8683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:34:21 np0005603435 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 30 23:34:21 np0005603435 podman[223471]: 2026-01-31 04:34:21.305865015 +0000 UTC m=+0.165149909 container start 21d3b6535565fb5de151e951404833bac666065efc636ef5263591f11f5b8683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hugle, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:34:21 np0005603435 podman[223471]: 2026-01-31 04:34:21.309388725 +0000 UTC m=+0.168673639 container attach 21d3b6535565fb5de151e951404833bac666065efc636ef5263591f11f5b8683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hugle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:34:21 np0005603435 systemd[1]: Starting man-db-cache-update.service...
Jan 30 23:34:21 np0005603435 systemd[1]: Reloading.
Jan 30 23:34:21 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:34:21 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:34:21 np0005603435 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 30 23:34:21 np0005603435 lvm[224164]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:34:21 np0005603435 lvm[224164]: VG ceph_vg0 finished
Jan 30 23:34:21 np0005603435 lvm[224203]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:34:21 np0005603435 lvm[224203]: VG ceph_vg1 finished
Jan 30 23:34:21 np0005603435 lvm[224233]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:34:21 np0005603435 lvm[224233]: VG ceph_vg2 finished
Jan 30 23:34:21 np0005603435 gracious_hugle[223513]: {}
Jan 30 23:34:21 np0005603435 systemd[1]: libpod-21d3b6535565fb5de151e951404833bac666065efc636ef5263591f11f5b8683.scope: Deactivated successfully.
Jan 30 23:34:21 np0005603435 podman[223471]: 2026-01-31 04:34:21.988406503 +0000 UTC m=+0.847691417 container died 21d3b6535565fb5de151e951404833bac666065efc636ef5263591f11f5b8683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hugle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 30 23:34:22 np0005603435 systemd[1]: var-lib-containers-storage-overlay-3c71e58a16d054f5d8d0637b7462c22e5bf05b16a6a088d5a946edcde02a89e2-merged.mount: Deactivated successfully.
Jan 30 23:34:22 np0005603435 podman[223471]: 2026-01-31 04:34:22.040170333 +0000 UTC m=+0.899455217 container remove 21d3b6535565fb5de151e951404833bac666065efc636ef5263591f11f5b8683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hugle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 30 23:34:22 np0005603435 systemd[1]: libpod-conmon-21d3b6535565fb5de151e951404833bac666065efc636ef5263591f11f5b8683.scope: Deactivated successfully.
Jan 30 23:34:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:34:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:34:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:34:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:34:22 np0005603435 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 30 23:34:22 np0005603435 systemd[1]: Finished man-db-cache-update.service.
Jan 30 23:34:22 np0005603435 systemd[1]: man-db-cache-update.service: Consumed 1.091s CPU time.
Jan 30 23:34:22 np0005603435 systemd[1]: run-r1b7f94f4c52b413b9a2be9eb84fcc3a2.service: Deactivated successfully.
Jan 30 23:34:22 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:34:22 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:34:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:22 np0005603435 python3.9[224967]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:34:22 np0005603435 systemd[1]: Stopping Open-iSCSI...
Jan 30 23:34:22 np0005603435 iscsid[217856]: iscsid shutting down.
Jan 30 23:34:22 np0005603435 systemd[1]: iscsid.service: Deactivated successfully.
Jan 30 23:34:22 np0005603435 systemd[1]: Stopped Open-iSCSI.
Jan 30 23:34:22 np0005603435 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 30 23:34:22 np0005603435 systemd[1]: Starting Open-iSCSI...
Jan 30 23:34:22 np0005603435 systemd[1]: Started Open-iSCSI.
Jan 30 23:34:23 np0005603435 python3.9[225123]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:34:23 np0005603435 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 30 23:34:23 np0005603435 multipathd[221826]: exit (signal)
Jan 30 23:34:23 np0005603435 multipathd[221826]: --------shut down-------
Jan 30 23:34:23 np0005603435 systemd[1]: multipathd.service: Deactivated successfully.
Jan 30 23:34:23 np0005603435 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 30 23:34:23 np0005603435 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 30 23:34:23 np0005603435 multipathd[225129]: --------start up--------
Jan 30 23:34:23 np0005603435 multipathd[225129]: read /etc/multipath.conf
Jan 30 23:34:23 np0005603435 multipathd[225129]: path checkers start up
Jan 30 23:34:23 np0005603435 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 30 23:34:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:34:24 np0005603435 podman[225260]: 2026-01-31 04:34:24.473842331 +0000 UTC m=+0.093732652 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 30 23:34:24 np0005603435 python3.9[225296]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 30 23:34:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:25 np0005603435 python3.9[225469]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:26 np0005603435 python3.9[225621]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 30 23:34:26 np0005603435 systemd[1]: Reloading.
Jan 30 23:34:26 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:34:26 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:34:27 np0005603435 python3.9[225806]: ansible-ansible.builtin.service_facts Invoked
Jan 30 23:34:27 np0005603435 network[225823]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 30 23:34:27 np0005603435 network[225824]: 'network-scripts' will be removed from distribution in near future.
Jan 30 23:34:27 np0005603435 network[225825]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 30 23:34:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:28 np0005603435 podman[225849]: 2026-01-31 04:34:28.883954606 +0000 UTC m=+0.074352836 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 30 23:34:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:34:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:32 np0005603435 python3.9[226119]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:34:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:33 np0005603435 python3.9[226272]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:34:34 np0005603435 python3.9[226425]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:34:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:34:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:34 np0005603435 python3.9[226578]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:34:35 np0005603435 python3.9[226731]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:34:36 np0005603435 python3.9[226884]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:34:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:34:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:34:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:34:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:34:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:34:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:34:37 np0005603435 python3.9[227039]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:34:38 np0005603435 python3.9[227192]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:34:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:39 np0005603435 python3.9[227345]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:34:39 np0005603435 python3.9[227497]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:40 np0005603435 python3.9[227649]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:41 np0005603435 python3.9[227801]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:41 np0005603435 python3.9[227953]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:42 np0005603435 python3.9[228105]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:42 np0005603435 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 30 23:34:43 np0005603435 python3.9[228258]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:43 np0005603435 python3.9[228410]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:44 np0005603435 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 30 23:34:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:34:44 np0005603435 python3.9[228563]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:45 np0005603435 python3.9[228715]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:45 np0005603435 python3.9[228867]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:46 np0005603435 python3.9[229019]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:47 np0005603435 python3.9[229171]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:47 np0005603435 python3.9[229323]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:48 np0005603435 python3.9[229475]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:49 np0005603435 python3.9[229627]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:34:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:34:49 np0005603435 python3.9[229779]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:34:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:50 np0005603435 python3.9[229931]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 30 23:34:51 np0005603435 python3.9[230083]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 30 23:34:51 np0005603435 systemd[1]: Reloading.
Jan 30 23:34:51 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:34:51 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:34:52 np0005603435 python3.9[230270]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:34:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:53 np0005603435 python3.9[230423]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:34:54 np0005603435 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 30 23:34:54 np0005603435 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 30 23:34:54 np0005603435 python3.9[230576]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:34:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:34:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:54 np0005603435 podman[230703]: 2026-01-31 04:34:54.908522939 +0000 UTC m=+0.153438809 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 30 23:34:55 np0005603435 python3.9[230746]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:34:55 np0005603435 python3.9[230910]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:34:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:34:55.898 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:34:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:34:55.899 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:34:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:34:55.899 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:34:56 np0005603435 python3.9[231063]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:34:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:57 np0005603435 python3.9[231216]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:34:57 np0005603435 python3.9[231369]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 30 23:34:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:34:59 np0005603435 podman[231494]: 2026-01-31 04:34:59.067212939 +0000 UTC m=+0.068828265 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:34:59 np0005603435 python3.9[231538]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:34:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:34:59 np0005603435 python3.9[231694]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:00 np0005603435 python3.9[231846]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:01 np0005603435 python3.9[231998]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:01 np0005603435 python3.9[232150]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:02 np0005603435 python3.9[232304]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:03 np0005603435 python3.9[232456]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:03 np0005603435 python3.9[232608]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:35:04 np0005603435 python3.9[232760]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:05 np0005603435 python3.9[232912]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:35:06
Jan 30 23:35:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:35:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:35:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'volumes', 'backups', 'cephfs.cephfs.data']
Jan 30 23:35:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:35:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:35:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:35:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:35:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:35:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:35:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:35:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:35:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:35:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:35:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:35:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:35:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:35:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:35:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:35:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:35:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:35:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:35:10 np0005603435 python3.9[233064]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 30 23:35:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 30 23:35:11 np0005603435 python3.9[233217]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 30 23:35:12 np0005603435 python3.9[233375]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 30 23:35:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 30 23:35:13 np0005603435 systemd-logind[816]: New session 51 of user zuul.
Jan 30 23:35:13 np0005603435 systemd[1]: Started Session 51 of User zuul.
Jan 30 23:35:13 np0005603435 systemd[1]: session-51.scope: Deactivated successfully.
Jan 30 23:35:13 np0005603435 systemd-logind[816]: Session 51 logged out. Waiting for processes to exit.
Jan 30 23:35:13 np0005603435 systemd-logind[816]: Removed session 51.
Jan 30 23:35:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:35:14 np0005603435 python3.9[233561]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:35:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 30 23:35:14 np0005603435 python3.9[233682]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769834113.9907827-986-267760489056335/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:15 np0005603435 python3.9[233832]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:35:16 np0005603435 python3.9[233908]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:16 np0005603435 python3.9[234058]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:35:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:35:17 np0005603435 python3.9[234179]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769834116.2473714-986-128040953766152/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:17 np0005603435 python3.9[234329]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:35:18 np0005603435 python3.9[234450]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769834117.3632488-986-2368493904248/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 30 23:35:19 np0005603435 python3.9[234600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:35:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:35:19 np0005603435 python3.9[234721]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769834118.710375-986-152101080634933/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:20 np0005603435 python3.9[234871]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:35:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 30 23:35:21 np0005603435 python3.9[234992]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769834120.0463347-986-94350490711302/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:21 np0005603435 python3.9[235144]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:35:22 np0005603435 python3.9[235346]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:35:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 30 23:35:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:35:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:35:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:35:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:35:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:35:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:35:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:35:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:35:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:35:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:35:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:35:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:35:23 np0005603435 python3.9[235579]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:35:23 np0005603435 podman[235592]: 2026-01-31 04:35:23.283845977 +0000 UTC m=+0.048931161 container create 91b3afe8b5f49d06d01400a44a5dfff70989ebbe0ba4224c39eb2be807e8c496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Jan 30 23:35:23 np0005603435 systemd[1]: Started libpod-conmon-91b3afe8b5f49d06d01400a44a5dfff70989ebbe0ba4224c39eb2be807e8c496.scope.
Jan 30 23:35:23 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:35:23 np0005603435 podman[235592]: 2026-01-31 04:35:23.268402846 +0000 UTC m=+0.033488110 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:35:23 np0005603435 podman[235592]: 2026-01-31 04:35:23.368569355 +0000 UTC m=+0.133654609 container init 91b3afe8b5f49d06d01400a44a5dfff70989ebbe0ba4224c39eb2be807e8c496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_poincare, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 30 23:35:23 np0005603435 podman[235592]: 2026-01-31 04:35:23.375618603 +0000 UTC m=+0.140703817 container start 91b3afe8b5f49d06d01400a44a5dfff70989ebbe0ba4224c39eb2be807e8c496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_poincare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:35:23 np0005603435 podman[235592]: 2026-01-31 04:35:23.379570964 +0000 UTC m=+0.144656178 container attach 91b3afe8b5f49d06d01400a44a5dfff70989ebbe0ba4224c39eb2be807e8c496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 30 23:35:23 np0005603435 affectionate_poincare[235618]: 167 167
Jan 30 23:35:23 np0005603435 systemd[1]: libpod-91b3afe8b5f49d06d01400a44a5dfff70989ebbe0ba4224c39eb2be807e8c496.scope: Deactivated successfully.
Jan 30 23:35:23 np0005603435 podman[235592]: 2026-01-31 04:35:23.382549509 +0000 UTC m=+0.147634723 container died 91b3afe8b5f49d06d01400a44a5dfff70989ebbe0ba4224c39eb2be807e8c496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_poincare, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:35:23 np0005603435 systemd[1]: var-lib-containers-storage-overlay-13c503f2fc1305cfd35b3fc11a85da8cb8b5468fa415684cadd12d8475deee38-merged.mount: Deactivated successfully.
Jan 30 23:35:23 np0005603435 podman[235592]: 2026-01-31 04:35:23.427284953 +0000 UTC m=+0.192370167 container remove 91b3afe8b5f49d06d01400a44a5dfff70989ebbe0ba4224c39eb2be807e8c496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_poincare, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Jan 30 23:35:23 np0005603435 systemd[1]: libpod-conmon-91b3afe8b5f49d06d01400a44a5dfff70989ebbe0ba4224c39eb2be807e8c496.scope: Deactivated successfully.
Jan 30 23:35:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:35:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:35:23 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:35:23 np0005603435 podman[235689]: 2026-01-31 04:35:23.617569826 +0000 UTC m=+0.060205457 container create 9d315a85c982b05313f0d9fe42be98c82067183e7457b12ecc2396927e1729d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_spence, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 30 23:35:23 np0005603435 systemd[1]: Started libpod-conmon-9d315a85c982b05313f0d9fe42be98c82067183e7457b12ecc2396927e1729d2.scope.
Jan 30 23:35:23 np0005603435 podman[235689]: 2026-01-31 04:35:23.59132218 +0000 UTC m=+0.033957861 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:35:23 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:35:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa80b5e17e89cca40b9741beb60d7a906b7081e28f35581171129b3430e26b56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa80b5e17e89cca40b9741beb60d7a906b7081e28f35581171129b3430e26b56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa80b5e17e89cca40b9741beb60d7a906b7081e28f35581171129b3430e26b56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa80b5e17e89cca40b9741beb60d7a906b7081e28f35581171129b3430e26b56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa80b5e17e89cca40b9741beb60d7a906b7081e28f35581171129b3430e26b56/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:23 np0005603435 podman[235689]: 2026-01-31 04:35:23.714326159 +0000 UTC m=+0.156961840 container init 9d315a85c982b05313f0d9fe42be98c82067183e7457b12ecc2396927e1729d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_spence, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 30 23:35:23 np0005603435 podman[235689]: 2026-01-31 04:35:23.727693978 +0000 UTC m=+0.170329609 container start 9d315a85c982b05313f0d9fe42be98c82067183e7457b12ecc2396927e1729d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:35:23 np0005603435 podman[235689]: 2026-01-31 04:35:23.732489609 +0000 UTC m=+0.175125240 container attach 9d315a85c982b05313f0d9fe42be98c82067183e7457b12ecc2396927e1729d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_spence, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 30 23:35:24 np0005603435 python3.9[235805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:35:24 np0005603435 heuristic_spence[235748]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:35:24 np0005603435 heuristic_spence[235748]: --> All data devices are unavailable
Jan 30 23:35:24 np0005603435 systemd[1]: libpod-9d315a85c982b05313f0d9fe42be98c82067183e7457b12ecc2396927e1729d2.scope: Deactivated successfully.
Jan 30 23:35:24 np0005603435 podman[235689]: 2026-01-31 04:35:24.211794749 +0000 UTC m=+0.654430360 container died 9d315a85c982b05313f0d9fe42be98c82067183e7457b12ecc2396927e1729d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_spence, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:35:24 np0005603435 systemd[1]: var-lib-containers-storage-overlay-aa80b5e17e89cca40b9741beb60d7a906b7081e28f35581171129b3430e26b56-merged.mount: Deactivated successfully.
Jan 30 23:35:24 np0005603435 podman[235689]: 2026-01-31 04:35:24.2666709 +0000 UTC m=+0.709306541 container remove 9d315a85c982b05313f0d9fe42be98c82067183e7457b12ecc2396927e1729d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:35:24 np0005603435 systemd[1]: libpod-conmon-9d315a85c982b05313f0d9fe42be98c82067183e7457b12ecc2396927e1729d2.scope: Deactivated successfully.
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:35:24.376731) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834124376783, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1892, "num_deletes": 252, "total_data_size": 3251158, "memory_usage": 3301464, "flush_reason": "Manual Compaction"}
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834124388262, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1825979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11809, "largest_seqno": 13700, "table_properties": {"data_size": 1819901, "index_size": 3089, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15319, "raw_average_key_size": 20, "raw_value_size": 1806423, "raw_average_value_size": 2376, "num_data_blocks": 143, "num_entries": 760, "num_filter_entries": 760, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833907, "oldest_key_time": 1769833907, "file_creation_time": 1769834124, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11587 microseconds, and 5461 cpu microseconds.
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:35:24.388316) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1825979 bytes OK
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:35:24.388337) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:35:24.389677) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:35:24.389744) EVENT_LOG_v1 {"time_micros": 1769834124389703, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:35:24.389767) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3243182, prev total WAL file size 3243182, number of live WAL files 2.
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:35:24.390576) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353035' seq:0, type:0; will stop at (end)
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1783KB)], [29(7837KB)]
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834124390650, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9851499, "oldest_snapshot_seqno": -1}
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4091 keys, 7854574 bytes, temperature: kUnknown
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834124449075, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7854574, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7825282, "index_size": 17982, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 97255, "raw_average_key_size": 23, "raw_value_size": 7749613, "raw_average_value_size": 1894, "num_data_blocks": 782, "num_entries": 4091, "num_filter_entries": 4091, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769834124, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:35:24.449418) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7854574 bytes
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:35:24.450900) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.4 rd, 134.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.7 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(9.7) write-amplify(4.3) OK, records in: 4504, records dropped: 413 output_compression: NoCompression
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:35:24.450929) EVENT_LOG_v1 {"time_micros": 1769834124450915, "job": 12, "event": "compaction_finished", "compaction_time_micros": 58498, "compaction_time_cpu_micros": 24193, "output_level": 6, "num_output_files": 1, "total_output_size": 7854574, "num_input_records": 4504, "num_output_records": 4091, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834124451348, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834124452585, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:35:24.390480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:35:24.452641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:35:24.452649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:35:24.452654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:35:24.452658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:35:24 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:35:24.452662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:35:24 np0005603435 python3.9[236002]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769834123.5020523-1093-279519737561931/.source _original_basename=.n5rnjzcy follow=False checksum=d4c420bfb2aa1c3314cbf240daf10edfe57e6982 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 30 23:35:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:24 np0005603435 podman[236030]: 2026-01-31 04:35:24.794889281 +0000 UTC m=+0.066758494 container create 5f8459b71493527462d455ed92ece4d7813c073f1eb3d17fc2ec08c8a4832f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:35:24 np0005603435 systemd[1]: Started libpod-conmon-5f8459b71493527462d455ed92ece4d7813c073f1eb3d17fc2ec08c8a4832f39.scope.
Jan 30 23:35:24 np0005603435 podman[236030]: 2026-01-31 04:35:24.762392517 +0000 UTC m=+0.034261710 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:35:24 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:35:24 np0005603435 podman[236030]: 2026-01-31 04:35:24.903978156 +0000 UTC m=+0.175847349 container init 5f8459b71493527462d455ed92ece4d7813c073f1eb3d17fc2ec08c8a4832f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:35:24 np0005603435 podman[236030]: 2026-01-31 04:35:24.912855991 +0000 UTC m=+0.184725204 container start 5f8459b71493527462d455ed92ece4d7813c073f1eb3d17fc2ec08c8a4832f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:35:24 np0005603435 nice_allen[236058]: 167 167
Jan 30 23:35:24 np0005603435 systemd[1]: libpod-5f8459b71493527462d455ed92ece4d7813c073f1eb3d17fc2ec08c8a4832f39.scope: Deactivated successfully.
Jan 30 23:35:24 np0005603435 conmon[236058]: conmon 5f8459b71493527462d4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5f8459b71493527462d455ed92ece4d7813c073f1eb3d17fc2ec08c8a4832f39.scope/container/memory.events
Jan 30 23:35:24 np0005603435 podman[236030]: 2026-01-31 04:35:24.932847018 +0000 UTC m=+0.204716231 container attach 5f8459b71493527462d455ed92ece4d7813c073f1eb3d17fc2ec08c8a4832f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_allen, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:35:24 np0005603435 podman[236030]: 2026-01-31 04:35:24.934403947 +0000 UTC m=+0.206273160 container died 5f8459b71493527462d455ed92ece4d7813c073f1eb3d17fc2ec08c8a4832f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:35:25 np0005603435 systemd[1]: var-lib-containers-storage-overlay-aca0c102c6cccea9f5a2e7cd361b802952bea01f5e4a5594432e38e82432fe3a-merged.mount: Deactivated successfully.
Jan 30 23:35:25 np0005603435 podman[236030]: 2026-01-31 04:35:25.028639926 +0000 UTC m=+0.300509099 container remove 5f8459b71493527462d455ed92ece4d7813c073f1eb3d17fc2ec08c8a4832f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_allen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 30 23:35:25 np0005603435 systemd[1]: libpod-conmon-5f8459b71493527462d455ed92ece4d7813c073f1eb3d17fc2ec08c8a4832f39.scope: Deactivated successfully.
Jan 30 23:35:25 np0005603435 podman[236064]: 2026-01-31 04:35:25.102099568 +0000 UTC m=+0.115471408 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 30 23:35:25 np0005603435 podman[236161]: 2026-01-31 04:35:25.178100585 +0000 UTC m=+0.050101301 container create 4c9dc54fd97a2a8997be6c1058d85549aa9c6e2c99f74693cefb7201f0952f66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lehmann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:35:25 np0005603435 systemd[1]: Started libpod-conmon-4c9dc54fd97a2a8997be6c1058d85549aa9c6e2c99f74693cefb7201f0952f66.scope.
Jan 30 23:35:25 np0005603435 podman[236161]: 2026-01-31 04:35:25.149780197 +0000 UTC m=+0.021780933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:35:25 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:35:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3748ccbd90f4c672661a149931ccb508ec9c65c67889df19f18b931593e4f66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3748ccbd90f4c672661a149931ccb508ec9c65c67889df19f18b931593e4f66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3748ccbd90f4c672661a149931ccb508ec9c65c67889df19f18b931593e4f66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3748ccbd90f4c672661a149931ccb508ec9c65c67889df19f18b931593e4f66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:25 np0005603435 podman[236161]: 2026-01-31 04:35:25.283604389 +0000 UTC m=+0.155605125 container init 4c9dc54fd97a2a8997be6c1058d85549aa9c6e2c99f74693cefb7201f0952f66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lehmann, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 30 23:35:25 np0005603435 podman[236161]: 2026-01-31 04:35:25.291985862 +0000 UTC m=+0.163986578 container start 4c9dc54fd97a2a8997be6c1058d85549aa9c6e2c99f74693cefb7201f0952f66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lehmann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:35:25 np0005603435 podman[236161]: 2026-01-31 04:35:25.357858402 +0000 UTC m=+0.229859138 container attach 4c9dc54fd97a2a8997be6c1058d85549aa9c6e2c99f74693cefb7201f0952f66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lehmann, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]: {
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:    "0": [
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:        {
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "devices": [
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "/dev/loop3"
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            ],
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "lv_name": "ceph_lv0",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "lv_size": "21470642176",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "name": "ceph_lv0",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "tags": {
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.cluster_name": "ceph",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.crush_device_class": "",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.encrypted": "0",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.objectstore": "bluestore",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.osd_id": "0",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.type": "block",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.vdo": "0",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.with_tpm": "0"
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            },
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "type": "block",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "vg_name": "ceph_vg0"
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:        }
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:    ],
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:    "1": [
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:        {
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "devices": [
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "/dev/loop4"
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            ],
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "lv_name": "ceph_lv1",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "lv_size": "21470642176",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "name": "ceph_lv1",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "tags": {
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.cluster_name": "ceph",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.crush_device_class": "",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.encrypted": "0",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.objectstore": "bluestore",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.osd_id": "1",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.type": "block",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.vdo": "0",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.with_tpm": "0"
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            },
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "type": "block",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "vg_name": "ceph_vg1"
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:        }
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:    ],
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:    "2": [
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:        {
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "devices": [
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "/dev/loop5"
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            ],
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "lv_name": "ceph_lv2",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "lv_size": "21470642176",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "name": "ceph_lv2",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "tags": {
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.cluster_name": "ceph",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.crush_device_class": "",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.encrypted": "0",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.objectstore": "bluestore",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.osd_id": "2",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.type": "block",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.vdo": "0",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:                "ceph.with_tpm": "0"
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            },
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "type": "block",
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:            "vg_name": "ceph_vg2"
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:        }
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]:    ]
Jan 30 23:35:25 np0005603435 focused_lehmann[236206]: }
Jan 30 23:35:25 np0005603435 python3.9[236255]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:35:25 np0005603435 systemd[1]: libpod-4c9dc54fd97a2a8997be6c1058d85549aa9c6e2c99f74693cefb7201f0952f66.scope: Deactivated successfully.
Jan 30 23:35:25 np0005603435 podman[236161]: 2026-01-31 04:35:25.621420213 +0000 UTC m=+0.493420929 container died 4c9dc54fd97a2a8997be6c1058d85549aa9c6e2c99f74693cefb7201f0952f66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lehmann, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:35:25 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d3748ccbd90f4c672661a149931ccb508ec9c65c67889df19f18b931593e4f66-merged.mount: Deactivated successfully.
Jan 30 23:35:25 np0005603435 podman[236161]: 2026-01-31 04:35:25.885094297 +0000 UTC m=+0.757095033 container remove 4c9dc54fd97a2a8997be6c1058d85549aa9c6e2c99f74693cefb7201f0952f66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:35:25 np0005603435 systemd[1]: libpod-conmon-4c9dc54fd97a2a8997be6c1058d85549aa9c6e2c99f74693cefb7201f0952f66.scope: Deactivated successfully.
Jan 30 23:35:26 np0005603435 python3.9[236474]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:35:26 np0005603435 podman[236486]: 2026-01-31 04:35:26.407378897 +0000 UTC m=+0.067685257 container create f5db611c0926f114da0c4012c77417b915f9e389cf151933a7a37bb3f57a7adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:35:26 np0005603435 systemd[1]: Started libpod-conmon-f5db611c0926f114da0c4012c77417b915f9e389cf151933a7a37bb3f57a7adc.scope.
Jan 30 23:35:26 np0005603435 podman[236486]: 2026-01-31 04:35:26.369417714 +0000 UTC m=+0.029724124 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:35:26 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:35:26 np0005603435 podman[236486]: 2026-01-31 04:35:26.491806057 +0000 UTC m=+0.152112387 container init f5db611c0926f114da0c4012c77417b915f9e389cf151933a7a37bb3f57a7adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 30 23:35:26 np0005603435 podman[236486]: 2026-01-31 04:35:26.499560194 +0000 UTC m=+0.159866514 container start f5db611c0926f114da0c4012c77417b915f9e389cf151933a7a37bb3f57a7adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 30 23:35:26 np0005603435 frosty_gould[236537]: 167 167
Jan 30 23:35:26 np0005603435 systemd[1]: libpod-f5db611c0926f114da0c4012c77417b915f9e389cf151933a7a37bb3f57a7adc.scope: Deactivated successfully.
Jan 30 23:35:26 np0005603435 podman[236486]: 2026-01-31 04:35:26.506021127 +0000 UTC m=+0.166327447 container attach f5db611c0926f114da0c4012c77417b915f9e389cf151933a7a37bb3f57a7adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:35:26 np0005603435 podman[236486]: 2026-01-31 04:35:26.506464569 +0000 UTC m=+0.166770889 container died f5db611c0926f114da0c4012c77417b915f9e389cf151933a7a37bb3f57a7adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 30 23:35:26 np0005603435 systemd[1]: var-lib-containers-storage-overlay-7343d5f33cb7646c7b132a6f378976aae083c4fa05318ccecf0592a216324eb5-merged.mount: Deactivated successfully.
Jan 30 23:35:26 np0005603435 podman[236486]: 2026-01-31 04:35:26.555515812 +0000 UTC m=+0.215822162 container remove f5db611c0926f114da0c4012c77417b915f9e389cf151933a7a37bb3f57a7adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_gould, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 30 23:35:26 np0005603435 systemd[1]: libpod-conmon-f5db611c0926f114da0c4012c77417b915f9e389cf151933a7a37bb3f57a7adc.scope: Deactivated successfully.
Jan 30 23:35:26 np0005603435 podman[236633]: 2026-01-31 04:35:26.732389316 +0000 UTC m=+0.060277299 container create e3a0cba3a5040dd6485efcb5497668e1f636738f7ed992050ba1a6e0748b15ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 30 23:35:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:26 np0005603435 systemd[1]: Started libpod-conmon-e3a0cba3a5040dd6485efcb5497668e1f636738f7ed992050ba1a6e0748b15ac.scope.
Jan 30 23:35:26 np0005603435 podman[236633]: 2026-01-31 04:35:26.698507197 +0000 UTC m=+0.026395270 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:35:26 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:35:26 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638d0b1b5e36d923176a007443e82d6f7a2b9024ee175791a59465d7446ef105/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:26 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638d0b1b5e36d923176a007443e82d6f7a2b9024ee175791a59465d7446ef105/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:26 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638d0b1b5e36d923176a007443e82d6f7a2b9024ee175791a59465d7446ef105/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:26 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638d0b1b5e36d923176a007443e82d6f7a2b9024ee175791a59465d7446ef105/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:26 np0005603435 podman[236633]: 2026-01-31 04:35:26.870182569 +0000 UTC m=+0.198070592 container init e3a0cba3a5040dd6485efcb5497668e1f636738f7ed992050ba1a6e0748b15ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 30 23:35:26 np0005603435 podman[236633]: 2026-01-31 04:35:26.880057879 +0000 UTC m=+0.207945852 container start e3a0cba3a5040dd6485efcb5497668e1f636738f7ed992050ba1a6e0748b15ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 30 23:35:26 np0005603435 podman[236633]: 2026-01-31 04:35:26.903048012 +0000 UTC m=+0.230936065 container attach e3a0cba3a5040dd6485efcb5497668e1f636738f7ed992050ba1a6e0748b15ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hofstadter, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:35:26 np0005603435 python3.9[236658]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769834125.83411-1119-159658224503244/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:27 np0005603435 python3.9[236851]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 30 23:35:27 np0005603435 lvm[236890]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:35:27 np0005603435 lvm[236890]: VG ceph_vg0 finished
Jan 30 23:35:27 np0005603435 lvm[236893]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:35:27 np0005603435 lvm[236893]: VG ceph_vg1 finished
Jan 30 23:35:27 np0005603435 lvm[236913]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:35:27 np0005603435 lvm[236913]: VG ceph_vg2 finished
Jan 30 23:35:27 np0005603435 cranky_hofstadter[236664]: {}
Jan 30 23:35:27 np0005603435 systemd[1]: libpod-e3a0cba3a5040dd6485efcb5497668e1f636738f7ed992050ba1a6e0748b15ac.scope: Deactivated successfully.
Jan 30 23:35:27 np0005603435 systemd[1]: libpod-e3a0cba3a5040dd6485efcb5497668e1f636738f7ed992050ba1a6e0748b15ac.scope: Consumed 1.134s CPU time.
Jan 30 23:35:27 np0005603435 podman[236633]: 2026-01-31 04:35:27.728215659 +0000 UTC m=+1.056103702 container died e3a0cba3a5040dd6485efcb5497668e1f636738f7ed992050ba1a6e0748b15ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hofstadter, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:35:27 np0005603435 systemd[1]: var-lib-containers-storage-overlay-638d0b1b5e36d923176a007443e82d6f7a2b9024ee175791a59465d7446ef105-merged.mount: Deactivated successfully.
Jan 30 23:35:27 np0005603435 podman[236633]: 2026-01-31 04:35:27.980585166 +0000 UTC m=+1.308473179 container remove e3a0cba3a5040dd6485efcb5497668e1f636738f7ed992050ba1a6e0748b15ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hofstadter, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:35:27 np0005603435 systemd[1]: libpod-conmon-e3a0cba3a5040dd6485efcb5497668e1f636738f7ed992050ba1a6e0748b15ac.scope: Deactivated successfully.
Jan 30 23:35:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:35:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:35:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:35:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:35:28 np0005603435 python3.9[237032]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769834127.0780249-1134-126855854423545/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 30 23:35:28 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:35:28 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:35:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:29 np0005603435 python3.9[237209]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 30 23:35:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:35:29 np0005603435 podman[237333]: 2026-01-31 04:35:29.983246373 +0000 UTC m=+0.081967679 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:35:30 np0005603435 python3.9[237376]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 30 23:35:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:31 np0005603435 python3[237532]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 30 23:35:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:35:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:35:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:35:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:35:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:35:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:35:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:35:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:35:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:44 np0005603435 podman[237547]: 2026-01-31 04:35:44.201073085 +0000 UTC m=+12.841353851 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 30 23:35:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:35:44 np0005603435 podman[237633]: 2026-01-31 04:35:44.317007824 +0000 UTC m=+0.029336745 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 30 23:35:44 np0005603435 podman[237633]: 2026-01-31 04:35:44.428628513 +0000 UTC m=+0.140957384 container create aae8ca37af335fecc0012f5269b39055952d00b53e552a7629b436221b5f9d94 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 30 23:35:44 np0005603435 python3[237532]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 30 23:35:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:45 np0005603435 python3.9[237821]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:35:46 np0005603435 python3.9[237975]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 30 23:35:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:47 np0005603435 python3.9[238127]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 30 23:35:48 np0005603435 python3[238279]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 30 23:35:48 np0005603435 podman[238318]: 2026-01-31 04:35:48.43572335 +0000 UTC m=+0.065184473 container create 3f0bfdaf6456e85b896372844814bf2e5b621e3173eb63d1bc6b5dc4dc56610f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, container_name=nova_compute, org.label-schema.license=GPLv2)
Jan 30 23:35:48 np0005603435 podman[238318]: 2026-01-31 04:35:48.39545887 +0000 UTC m=+0.024919993 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 30 23:35:48 np0005603435 python3[238279]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 30 23:35:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:49 np0005603435 python3.9[238508]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:35:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:35:50 np0005603435 python3.9[238662]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:35:50 np0005603435 python3.9[238813]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769834150.154757-1230-161253671444340/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 30 23:35:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:51 np0005603435 python3.9[238889]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 30 23:35:51 np0005603435 systemd[1]: Reloading.
Jan 30 23:35:51 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:35:51 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:35:52 np0005603435 python3.9[239000]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 30 23:35:52 np0005603435 systemd[1]: Reloading.
Jan 30 23:35:52 np0005603435 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 30 23:35:52 np0005603435 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 30 23:35:52 np0005603435 systemd[1]: Starting nova_compute container...
Jan 30 23:35:52 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:35:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4088dacf289052c96e334fd7eba3d3fd6616518af2bb81786b58d028cdf85df8/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4088dacf289052c96e334fd7eba3d3fd6616518af2bb81786b58d028cdf85df8/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4088dacf289052c96e334fd7eba3d3fd6616518af2bb81786b58d028cdf85df8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4088dacf289052c96e334fd7eba3d3fd6616518af2bb81786b58d028cdf85df8/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4088dacf289052c96e334fd7eba3d3fd6616518af2bb81786b58d028cdf85df8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:53 np0005603435 podman[239040]: 2026-01-31 04:35:53.114952655 +0000 UTC m=+0.502734876 container init 3f0bfdaf6456e85b896372844814bf2e5b621e3173eb63d1bc6b5dc4dc56610f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:35:53 np0005603435 podman[239040]: 2026-01-31 04:35:53.124909277 +0000 UTC m=+0.512691458 container start 3f0bfdaf6456e85b896372844814bf2e5b621e3173eb63d1bc6b5dc4dc56610f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 30 23:35:53 np0005603435 nova_compute[239056]: + sudo -E kolla_set_configs
Jan 30 23:35:53 np0005603435 podman[239040]: nova_compute
Jan 30 23:35:53 np0005603435 systemd[1]: Started nova_compute container.
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Validating config file
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Copying service configuration files
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Deleting /etc/ceph
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Creating directory /etc/ceph
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /etc/ceph
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Writing out command to execute
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 30 23:35:53 np0005603435 nova_compute[239056]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 30 23:35:53 np0005603435 nova_compute[239056]: ++ cat /run_command
Jan 30 23:35:53 np0005603435 nova_compute[239056]: + CMD=nova-compute
Jan 30 23:35:53 np0005603435 nova_compute[239056]: + ARGS=
Jan 30 23:35:53 np0005603435 nova_compute[239056]: + sudo kolla_copy_cacerts
Jan 30 23:35:53 np0005603435 nova_compute[239056]: + [[ ! -n '' ]]
Jan 30 23:35:53 np0005603435 nova_compute[239056]: + . kolla_extend_start
Jan 30 23:35:53 np0005603435 nova_compute[239056]: Running command: 'nova-compute'
Jan 30 23:35:53 np0005603435 nova_compute[239056]: + echo 'Running command: '\''nova-compute'\'''
Jan 30 23:35:53 np0005603435 nova_compute[239056]: + umask 0022
Jan 30 23:35:53 np0005603435 nova_compute[239056]: + exec nova-compute
Jan 30 23:35:54 np0005603435 python3.9[239217]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:35:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:35:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:55 np0005603435 python3.9[239368]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:35:55 np0005603435 podman[239492]: 2026-01-31 04:35:55.632923194 +0000 UTC m=+0.109783664 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20260127)
Jan 30 23:35:55 np0005603435 python3.9[239531]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 30 23:35:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:35:55.900 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:35:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:35:55.900 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:35:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:35:55.900 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:35:56 np0005603435 nova_compute[239056]: 2026-01-31 04:35:56.655 239060 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 30 23:35:56 np0005603435 nova_compute[239056]: 2026-01-31 04:35:56.655 239060 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 30 23:35:56 np0005603435 nova_compute[239056]: 2026-01-31 04:35:56.656 239060 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 30 23:35:56 np0005603435 nova_compute[239056]: 2026-01-31 04:35:56.656 239060 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 30 23:35:56 np0005603435 python3.9[239697]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 30 23:35:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:56 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 30 23:35:56 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 30 23:35:56 np0005603435 nova_compute[239056]: 2026-01-31 04:35:56.811 239060 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:35:56 np0005603435 nova_compute[239056]: 2026-01-31 04:35:56.844 239060 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:35:56 np0005603435 nova_compute[239056]: 2026-01-31 04:35:56.844 239060 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.429 239060 INFO nova.virt.driver [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 30 23:35:57 np0005603435 python3.9[239876]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.697 239060 INFO nova.compute.provider_config [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 30 23:35:57 np0005603435 systemd[1]: Stopping nova_compute container...
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.712 239060 DEBUG oslo_concurrency.lockutils [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.712 239060 DEBUG oslo_concurrency.lockutils [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.713 239060 DEBUG oslo_concurrency.lockutils [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.713 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.714 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.714 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.714 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.715 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.715 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.715 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.716 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.716 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.716 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.717 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.717 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.717 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.718 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.718 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.719 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.719 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.719 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.720 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.720 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.720 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.721 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.721 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.721 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.722 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.722 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.722 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.723 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.723 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.724 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.724 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.724 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.725 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.725 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.725 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.726 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.726 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.726 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.727 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.727 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.728 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.728 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.728 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.729 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.729 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.730 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.730 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.730 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.731 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.731 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.731 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.731 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.732 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.732 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.732 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.732 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.732 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.733 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.733 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.733 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.733 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.733 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.734 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.734 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.734 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.734 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.734 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.735 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.735 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.735 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.735 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.735 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.736 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.736 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.736 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.736 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.736 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.737 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.737 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.737 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.737 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.737 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.738 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.738 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.738 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.738 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.739 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.739 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.739 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.739 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.739 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.740 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.740 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.740 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.740 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.741 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.741 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.741 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.741 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.742 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.742 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.742 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.742 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.742 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.743 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.743 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.743 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.743 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.743 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.744 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.744 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.744 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.744 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.745 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.745 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.745 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.745 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.745 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.746 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.746 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.746 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.746 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.746 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.747 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.747 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.747 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.747 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.747 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.748 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.748 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.748 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.748 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.749 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.749 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.749 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.749 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.749 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.749 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.750 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.750 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.750 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.750 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.750 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.751 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.751 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.751 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.751 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.751 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.752 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.752 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.752 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.752 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.753 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.753 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.753 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.753 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.754 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.754 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.754 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.755 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.755 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.755 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.755 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.755 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.756 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.756 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.756 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.756 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.756 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.757 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.757 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.757 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.758 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.758 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.758 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.759 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.759 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.759 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.759 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.760 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.760 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.760 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.760 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.761 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.761 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.761 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.761 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.762 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.762 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.762 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.762 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.762 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.763 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.763 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.763 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.763 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.763 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.764 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.764 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.764 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.764 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.764 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.765 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.765 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.765 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.765 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.765 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.766 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.766 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.766 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.766 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.766 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.767 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.767 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.767 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.767 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.767 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.768 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.768 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.768 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.768 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.768 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.769 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.769 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.769 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.769 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.769 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.770 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.770 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.770 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.770 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.771 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.771 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.771 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.771 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.771 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.772 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.772 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.772 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.773 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.773 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.773 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.773 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.773 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.774 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.774 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.774 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.775 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.775 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.775 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.775 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.775 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.776 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.776 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.776 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.776 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.776 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.777 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.777 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.777 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.777 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.777 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.778 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.778 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.778 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.778 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.779 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.779 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.779 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.779 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.779 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.780 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.780 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.780 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.780 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.780 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.781 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.781 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.781 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.781 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.781 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.782 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.782 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.782 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.782 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.782 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.783 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.783 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.783 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.783 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.783 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.784 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.784 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.784 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.784 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.785 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.785 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.785 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.785 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.785 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.786 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.786 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.786 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.786 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.787 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.787 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.787 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.787 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.787 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.787 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.788 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.788 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.788 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.788 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.788 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.788 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.788 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.789 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.789 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.789 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.789 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.789 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.789 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.789 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.790 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.790 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.790 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.790 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.790 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.790 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.791 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.791 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.791 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.791 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.791 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.791 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.791 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.792 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.792 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.792 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.792 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.792 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.792 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.792 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.793 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.793 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.793 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.793 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.793 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.793 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.794 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.794 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.794 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.794 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.794 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.795 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.795 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.795 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.795 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.795 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.795 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.796 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.796 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.796 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.796 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.796 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.797 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.797 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.797 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.797 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.797 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.797 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.798 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.798 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.798 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.798 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.798 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.799 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.799 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.799 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.799 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.799 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.799 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.800 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.800 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.800 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.800 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.800 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.800 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.801 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.801 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.801 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.801 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.801 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.801 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.802 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.802 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.802 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.802 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.802 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.802 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.802 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.803 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.803 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.803 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.803 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.803 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.803 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.804 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.804 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.804 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.804 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.804 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.804 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.805 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.805 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.805 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.805 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.805 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.805 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.805 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.805 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.806 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.806 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.806 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.806 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.806 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.806 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.806 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.807 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.807 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.807 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.807 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.807 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.807 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.807 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.808 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.808 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.808 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.808 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.808 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.808 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.808 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.809 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.809 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.809 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.809 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.809 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.809 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.809 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.810 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.810 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.810 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.810 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.810 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.810 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.810 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.811 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.811 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.811 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.811 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.811 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.812 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.812 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.812 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.812 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.812 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.812 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.813 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.813 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.813 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.813 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.813 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.813 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.813 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.814 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.814 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.814 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.814 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.814 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.815 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.815 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.815 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.815 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.815 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.815 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.815 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.816 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.816 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.816 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.816 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.816 239060 WARNING oslo_config.cfg [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 30 23:35:57 np0005603435 nova_compute[239056]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 30 23:35:57 np0005603435 nova_compute[239056]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 30 23:35:57 np0005603435 nova_compute[239056]: and ``live_migration_inbound_addr`` respectively.
Jan 30 23:35:57 np0005603435 nova_compute[239056]: ).  Its value may be silently ignored in the future.#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.816 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.817 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.817 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.817 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.817 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.817 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.817 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.817 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.818 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.818 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.818 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.818 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.818 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.818 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.819 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.819 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.819 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.819 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.819 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.rbd_secret_uuid        = 95d2f419-0dd0-56f2-a094-353f8c7597ed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.819 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.819 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.820 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.820 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.820 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.820 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.820 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.820 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.821 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.821 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.821 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.821 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.821 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.821 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.822 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.822 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.822 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.822 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.822 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.822 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.823 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.823 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.823 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.823 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.823 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.823 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.823 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.824 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.824 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.824 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.824 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.824 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.824 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.824 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.825 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.825 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.825 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.825 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.825 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.825 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.825 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.826 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.826 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.826 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.826 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.826 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.826 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.827 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.827 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.827 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.827 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.827 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.827 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.828 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.828 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.828 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.828 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.828 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.828 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.828 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.829 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.829 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.829 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.829 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.829 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.829 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.829 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.830 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.830 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.830 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.830 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.830 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.830 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.831 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.831 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.831 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.831 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.831 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.831 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.831 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.832 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.832 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.832 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.832 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.832 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.832 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.832 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.833 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.833 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.833 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.833 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.833 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.833 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.833 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.834 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.834 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.834 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.834 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.834 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.834 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.834 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.835 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.835 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.835 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.835 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.835 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.835 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.835 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.836 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.836 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.836 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.836 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.836 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.836 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.837 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.837 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.837 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.837 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.837 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.837 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.838 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.838 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.838 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.838 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.838 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.838 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.839 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.839 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.839 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.839 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.839 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.839 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.839 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.840 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.840 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.840 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.840 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.840 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.840 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.840 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.841 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.841 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.841 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.841 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.841 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.841 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.841 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.842 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.842 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.842 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.842 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.842 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.842 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.842 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.843 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.843 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.843 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.843 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.843 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.843 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.844 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.844 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.844 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.844 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.844 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.844 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.844 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.845 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.845 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.845 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.845 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.845 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.845 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.845 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.846 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.846 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.846 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.846 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.846 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.846 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.847 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.847 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.847 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.847 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.847 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.847 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.847 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.848 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.848 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.848 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.848 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.848 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.848 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.848 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.849 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.849 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.849 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.849 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.849 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.849 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.849 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.850 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.850 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.850 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.850 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.850 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.850 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.851 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.851 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.851 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.851 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.851 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.851 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.852 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.852 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.852 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.852 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.852 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.852 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.853 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.853 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.853 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.853 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.853 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.853 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.853 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.854 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.854 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.854 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.854 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.854 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.855 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.855 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.855 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.855 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.855 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.856 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.856 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.856 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.856 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.856 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.856 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.856 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.857 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.857 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.857 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.857 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.857 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.857 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.857 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.858 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.858 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.858 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.858 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.858 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.859 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.859 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.859 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.859 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.859 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.859 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.859 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.860 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.860 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.860 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.860 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.860 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.860 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.860 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.861 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.861 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.861 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.861 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.861 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.861 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.861 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.862 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.862 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.862 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.862 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.862 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.862 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.863 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.863 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.863 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.863 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.863 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.863 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.863 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.864 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.864 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.864 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.864 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.864 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.864 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.864 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.865 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.865 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.865 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.865 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.865 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.865 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.865 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.866 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.866 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.866 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.866 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.866 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.866 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.866 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.867 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.867 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.867 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.867 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.867 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.867 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.868 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.868 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.868 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.868 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.868 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.868 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.869 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.869 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.869 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.869 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.869 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.869 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.869 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.870 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.870 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.870 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.870 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.870 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.870 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.870 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.871 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.871 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.871 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.871 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.871 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.871 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.871 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.872 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.872 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.872 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.872 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.872 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.872 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.872 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.873 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.873 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.873 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.873 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.873 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.873 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.873 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.874 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.874 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.874 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.874 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.874 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.874 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.875 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.875 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.875 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.875 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.875 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.875 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.876 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.876 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.876 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.876 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.876 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.876 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.876 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.877 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.877 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.877 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.877 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.877 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.877 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.877 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.878 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.878 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.878 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.878 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.878 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.878 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.879 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.879 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.879 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.879 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.879 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.879 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.879 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.880 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.880 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.880 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.880 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.880 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.880 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.881 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.881 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.881 239060 DEBUG oslo_service.service [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.882 239060 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.884 239060 DEBUG oslo_concurrency.lockutils [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.884 239060 DEBUG oslo_concurrency.lockutils [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 30 23:35:57 np0005603435 nova_compute[239056]: 2026-01-31 04:35:57.884 239060 DEBUG oslo_concurrency.lockutils [None req-01eac034-8b3f-4f1c-b1c3-54e93bc44328 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 30 23:35:58 np0005603435 systemd[1]: libpod-3f0bfdaf6456e85b896372844814bf2e5b621e3173eb63d1bc6b5dc4dc56610f.scope: Deactivated successfully.
Jan 30 23:35:58 np0005603435 systemd[1]: libpod-3f0bfdaf6456e85b896372844814bf2e5b621e3173eb63d1bc6b5dc4dc56610f.scope: Consumed 3.201s CPU time.
Jan 30 23:35:58 np0005603435 podman[239880]: 2026-01-31 04:35:58.221052061 +0000 UTC m=+0.510832120 container died 3f0bfdaf6456e85b896372844814bf2e5b621e3173eb63d1bc6b5dc4dc56610f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 30 23:35:58 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f0bfdaf6456e85b896372844814bf2e5b621e3173eb63d1bc6b5dc4dc56610f-userdata-shm.mount: Deactivated successfully.
Jan 30 23:35:58 np0005603435 systemd[1]: var-lib-containers-storage-overlay-4088dacf289052c96e334fd7eba3d3fd6616518af2bb81786b58d028cdf85df8-merged.mount: Deactivated successfully.
Jan 30 23:35:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:35:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:35:59 np0005603435 podman[239880]: 2026-01-31 04:35:59.582304217 +0000 UTC m=+1.872084286 container cleanup 3f0bfdaf6456e85b896372844814bf2e5b621e3173eb63d1bc6b5dc4dc56610f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:35:59 np0005603435 podman[239880]: nova_compute
Jan 30 23:35:59 np0005603435 podman[239910]: nova_compute
Jan 30 23:35:59 np0005603435 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 30 23:35:59 np0005603435 systemd[1]: Stopped nova_compute container.
Jan 30 23:35:59 np0005603435 systemd[1]: Starting nova_compute container...
Jan 30 23:35:59 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:35:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4088dacf289052c96e334fd7eba3d3fd6616518af2bb81786b58d028cdf85df8/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4088dacf289052c96e334fd7eba3d3fd6616518af2bb81786b58d028cdf85df8/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4088dacf289052c96e334fd7eba3d3fd6616518af2bb81786b58d028cdf85df8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4088dacf289052c96e334fd7eba3d3fd6616518af2bb81786b58d028cdf85df8/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4088dacf289052c96e334fd7eba3d3fd6616518af2bb81786b58d028cdf85df8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 30 23:35:59 np0005603435 podman[239923]: 2026-01-31 04:35:59.862566732 +0000 UTC m=+0.174003732 container init 3f0bfdaf6456e85b896372844814bf2e5b621e3173eb63d1bc6b5dc4dc56610f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20260127)
Jan 30 23:35:59 np0005603435 podman[239923]: 2026-01-31 04:35:59.870265187 +0000 UTC m=+0.181702137 container start 3f0bfdaf6456e85b896372844814bf2e5b621e3173eb63d1bc6b5dc4dc56610f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:35:59 np0005603435 nova_compute[239938]: + sudo -E kolla_set_configs
Jan 30 23:35:59 np0005603435 podman[239923]: nova_compute
Jan 30 23:35:59 np0005603435 systemd[1]: Started nova_compute container.
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Validating config file
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Copying service configuration files
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Deleting /etc/ceph
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Creating directory /etc/ceph
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /etc/ceph
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Writing out command to execute
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 30 23:35:59 np0005603435 nova_compute[239938]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 30 23:36:00 np0005603435 nova_compute[239938]: ++ cat /run_command
Jan 30 23:36:00 np0005603435 nova_compute[239938]: + CMD=nova-compute
Jan 30 23:36:00 np0005603435 nova_compute[239938]: + ARGS=
Jan 30 23:36:00 np0005603435 nova_compute[239938]: + sudo kolla_copy_cacerts
Jan 30 23:36:00 np0005603435 nova_compute[239938]: + [[ ! -n '' ]]
Jan 30 23:36:00 np0005603435 nova_compute[239938]: + . kolla_extend_start
Jan 30 23:36:00 np0005603435 nova_compute[239938]: Running command: 'nova-compute'
Jan 30 23:36:00 np0005603435 nova_compute[239938]: + echo 'Running command: '\''nova-compute'\'''
Jan 30 23:36:00 np0005603435 nova_compute[239938]: + umask 0022
Jan 30 23:36:00 np0005603435 nova_compute[239938]: + exec nova-compute
Jan 30 23:36:00 np0005603435 podman[239971]: 2026-01-31 04:36:00.092277135 +0000 UTC m=+0.060808123 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:36:00 np0005603435 python3.9[240120]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 30 23:36:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:00 np0005603435 systemd[1]: Started libpod-conmon-aae8ca37af335fecc0012f5269b39055952d00b53e552a7629b436221b5f9d94.scope.
Jan 30 23:36:01 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:36:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e3db9deb20f04213c9a54f81c8d87f190164d0d97e454ad419d68fc6a354b3/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 30 23:36:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e3db9deb20f04213c9a54f81c8d87f190164d0d97e454ad419d68fc6a354b3/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 30 23:36:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e3db9deb20f04213c9a54f81c8d87f190164d0d97e454ad419d68fc6a354b3/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 30 23:36:01 np0005603435 podman[240144]: 2026-01-31 04:36:01.191877569 +0000 UTC m=+0.360237173 container init aae8ca37af335fecc0012f5269b39055952d00b53e552a7629b436221b5f9d94 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 30 23:36:01 np0005603435 podman[240144]: 2026-01-31 04:36:01.200512068 +0000 UTC m=+0.368871612 container start aae8ca37af335fecc0012f5269b39055952d00b53e552a7629b436221b5f9d94 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init)
Jan 30 23:36:01 np0005603435 nova_compute_init[240165]: INFO:nova_statedir:Applying nova statedir ownership
Jan 30 23:36:01 np0005603435 nova_compute_init[240165]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 30 23:36:01 np0005603435 nova_compute_init[240165]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 30 23:36:01 np0005603435 nova_compute_init[240165]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 30 23:36:01 np0005603435 nova_compute_init[240165]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 30 23:36:01 np0005603435 nova_compute_init[240165]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 30 23:36:01 np0005603435 nova_compute_init[240165]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 30 23:36:01 np0005603435 nova_compute_init[240165]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 30 23:36:01 np0005603435 nova_compute_init[240165]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 30 23:36:01 np0005603435 nova_compute_init[240165]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 30 23:36:01 np0005603435 nova_compute_init[240165]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 30 23:36:01 np0005603435 nova_compute_init[240165]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 30 23:36:01 np0005603435 nova_compute_init[240165]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 30 23:36:01 np0005603435 nova_compute_init[240165]: INFO:nova_statedir:Nova statedir ownership complete
Jan 30 23:36:01 np0005603435 systemd[1]: libpod-aae8ca37af335fecc0012f5269b39055952d00b53e552a7629b436221b5f9d94.scope: Deactivated successfully.
Jan 30 23:36:01 np0005603435 python3.9[240120]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 30 23:36:01 np0005603435 podman[240166]: 2026-01-31 04:36:01.352198943 +0000 UTC m=+0.067746248 container died aae8ca37af335fecc0012f5269b39055952d00b53e552a7629b436221b5f9d94 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=nova_compute_init)
Jan 30 23:36:01 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aae8ca37af335fecc0012f5269b39055952d00b53e552a7629b436221b5f9d94-userdata-shm.mount: Deactivated successfully.
Jan 30 23:36:01 np0005603435 systemd[1]: var-lib-containers-storage-overlay-e6e3db9deb20f04213c9a54f81c8d87f190164d0d97e454ad419d68fc6a354b3-merged.mount: Deactivated successfully.
Jan 30 23:36:01 np0005603435 nova_compute[239938]: 2026-01-31 04:36:01.732 239942 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 30 23:36:01 np0005603435 nova_compute[239938]: 2026-01-31 04:36:01.732 239942 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 30 23:36:01 np0005603435 nova_compute[239938]: 2026-01-31 04:36:01.732 239942 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 30 23:36:01 np0005603435 nova_compute[239938]: 2026-01-31 04:36:01.733 239942 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 30 23:36:01 np0005603435 podman[240166]: 2026-01-31 04:36:01.835500445 +0000 UTC m=+0.551047700 container cleanup aae8ca37af335fecc0012f5269b39055952d00b53e552a7629b436221b5f9d94 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:36:01 np0005603435 systemd[1]: libpod-conmon-aae8ca37af335fecc0012f5269b39055952d00b53e552a7629b436221b5f9d94.scope: Deactivated successfully.
Jan 30 23:36:01 np0005603435 nova_compute[239938]: 2026-01-31 04:36:01.855 239942 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:36:01 np0005603435 nova_compute[239938]: 2026-01-31 04:36:01.877 239942 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:36:01 np0005603435 nova_compute[239938]: 2026-01-31 04:36:01.878 239942 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.431 239942 INFO nova.virt.driver [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 30 23:36:02 np0005603435 systemd[1]: session-50.scope: Deactivated successfully.
Jan 30 23:36:02 np0005603435 systemd[1]: session-50.scope: Consumed 1min 55.968s CPU time.
Jan 30 23:36:02 np0005603435 systemd-logind[816]: Session 50 logged out. Waiting for processes to exit.
Jan 30 23:36:02 np0005603435 systemd-logind[816]: Removed session 50.
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.558 239942 INFO nova.compute.provider_config [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.573 239942 DEBUG oslo_concurrency.lockutils [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.573 239942 DEBUG oslo_concurrency.lockutils [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.573 239942 DEBUG oslo_concurrency.lockutils [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.574 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.574 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.574 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.574 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.574 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.574 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.574 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.575 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.575 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.575 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.575 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.575 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.575 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.575 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.575 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.576 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.576 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.576 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.576 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.576 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.576 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.576 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.577 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.577 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.577 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.577 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.577 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.577 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.577 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.578 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.578 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.578 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.578 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.578 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.578 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.579 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.579 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.579 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.579 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.579 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.579 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.579 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.580 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.580 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.580 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.580 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.580 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.580 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.580 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.581 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.581 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.581 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.581 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.581 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.581 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.581 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.582 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.582 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.582 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.582 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.582 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.582 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.582 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.582 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.583 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.583 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.583 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.583 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.583 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.583 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.583 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.584 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.584 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.584 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.584 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.584 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.584 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.585 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.585 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.585 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.585 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.585 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.585 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.585 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.585 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.586 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.586 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.586 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.586 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.586 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.586 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.586 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.587 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.587 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.587 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.587 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.587 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.587 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.587 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.588 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.588 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.588 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.588 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.588 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.588 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.588 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.588 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.589 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.589 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.589 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.589 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.589 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.589 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.589 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.589 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.590 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.590 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.590 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.590 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.590 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.590 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.590 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.591 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.591 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.591 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.591 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.591 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.591 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.591 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.591 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.592 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.592 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.592 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.592 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.592 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.592 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.592 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.593 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.593 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.593 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.593 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.593 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.593 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.593 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.593 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.594 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.594 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.594 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.594 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.594 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.594 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.594 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.595 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.595 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.595 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.595 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.595 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.595 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.596 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.596 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.596 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.596 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.596 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.596 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.597 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.597 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.597 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.597 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.597 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.597 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.597 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.598 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.598 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.598 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.598 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.598 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.598 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.598 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.598 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.599 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.599 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.599 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.599 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.599 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.599 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.599 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.600 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.600 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.600 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.600 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.600 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.600 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.601 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.601 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.601 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.601 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.601 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.601 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.601 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.602 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.602 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.602 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.602 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.602 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.602 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.602 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.603 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.603 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.603 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.603 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.603 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.603 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.603 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.604 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.604 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.604 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.604 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.604 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.604 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.604 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.605 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.605 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.605 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.605 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.605 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.605 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.605 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.606 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.606 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.606 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.606 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.606 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.606 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.606 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.606 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.607 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.607 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.607 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.607 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.607 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.607 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.607 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.608 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.608 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.608 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.608 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.608 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.608 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.608 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.609 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.609 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.609 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.609 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.609 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.609 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.609 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.609 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.610 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.610 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.610 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.610 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.610 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.610 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.610 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.610 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.611 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.611 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.611 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.611 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.611 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.611 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.611 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.612 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.612 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.612 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.612 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.612 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.612 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.612 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.613 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.613 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.613 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.613 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.613 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.613 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.613 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.613 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.614 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.614 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.614 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.614 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.614 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.614 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.614 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.615 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.615 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.615 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.615 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.615 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.615 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.615 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.616 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.616 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.616 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.616 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.616 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.616 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.616 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.616 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.617 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.617 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.617 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.617 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.617 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.617 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.617 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.618 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.618 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.618 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.618 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.618 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.618 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.618 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.618 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.619 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.619 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.619 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.619 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.619 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.619 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.619 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.620 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.620 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.620 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.620 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.620 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.620 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.620 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.621 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.621 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.621 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.621 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.621 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.621 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.621 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.622 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.622 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.622 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.622 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.622 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.622 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.623 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.623 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.623 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.623 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.623 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.624 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.624 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.624 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.624 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.624 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.624 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.625 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.625 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.625 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.625 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.625 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.625 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.625 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.625 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.626 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.626 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.626 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.626 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.626 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.626 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.626 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.626 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.627 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.627 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.627 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.627 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.627 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.627 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.627 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.628 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.628 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.628 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.628 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.628 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.628 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.628 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.629 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.629 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.629 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.629 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.629 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.629 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.629 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.630 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.630 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.630 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.630 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.630 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.630 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.630 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.630 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.631 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.631 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.631 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.631 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.631 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.631 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.631 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.632 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.632 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.632 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.632 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.632 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.632 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.632 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.633 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.633 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.633 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.633 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.633 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.633 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.633 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.634 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.634 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.634 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.634 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.634 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.634 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.634 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.634 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.635 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.635 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.635 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.635 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.635 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.635 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.636 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.636 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.636 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.636 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.636 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.636 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.636 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.637 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.637 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.637 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.637 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.637 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.637 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.637 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.637 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.638 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.638 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.638 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.638 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.638 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.638 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.638 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.638 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.639 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.639 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.639 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.639 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.639 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.639 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.639 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.640 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.640 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.640 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.640 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.640 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.640 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.641 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.641 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.641 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.641 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.641 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.641 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.641 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.641 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.642 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.642 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.642 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.642 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.642 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.642 239942 WARNING oslo_config.cfg [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 30 23:36:02 np0005603435 nova_compute[239938]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 30 23:36:02 np0005603435 nova_compute[239938]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 30 23:36:02 np0005603435 nova_compute[239938]: and ``live_migration_inbound_addr`` respectively.
Jan 30 23:36:02 np0005603435 nova_compute[239938]: ).  Its value may be silently ignored in the future.#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.643 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.643 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.643 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.643 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.643 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.643 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.644 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.644 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.644 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.644 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.644 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.644 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.644 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.644 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.645 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.645 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.645 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.645 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.645 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.rbd_secret_uuid        = 95d2f419-0dd0-56f2-a094-353f8c7597ed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.645 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.645 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.646 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.646 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.646 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.646 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.646 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.646 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.646 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.647 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.647 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.647 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.647 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.647 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.647 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.647 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.648 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.648 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.648 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.648 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.648 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.648 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.648 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.649 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.649 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.649 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.649 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.649 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.649 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.650 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.650 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.650 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.650 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.650 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.650 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.650 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.651 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.651 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.651 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.651 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.651 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.651 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.651 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.651 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.652 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.652 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.652 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.652 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.652 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.652 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.652 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.652 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.653 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.653 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.653 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.653 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.653 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.653 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.653 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.654 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.654 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.654 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.654 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.654 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.654 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.654 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.654 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.655 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.655 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.655 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.655 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.655 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.655 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.655 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.656 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.656 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.656 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.656 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.656 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.656 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.656 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.657 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.657 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.657 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.657 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.657 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.657 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.657 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.658 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.658 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.658 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.658 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.658 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.659 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.659 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.659 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.659 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.659 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.659 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.659 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.659 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.660 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.660 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.660 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.660 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.660 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.660 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.660 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.661 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.661 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.661 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.661 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.661 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.661 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.661 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.662 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.662 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.662 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.662 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.662 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.662 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.663 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.663 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.663 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.663 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.663 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.663 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.663 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.663 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.664 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.664 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.664 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.664 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.664 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.664 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.665 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.665 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.665 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.665 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.665 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.665 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.665 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.666 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.666 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.666 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.666 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.666 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.666 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.666 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.667 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.667 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.667 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.667 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.667 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.667 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.667 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.668 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.668 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.668 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.668 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.668 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.669 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.669 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.669 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.669 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.669 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.669 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.669 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.669 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.670 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.670 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.670 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.670 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.670 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.670 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.670 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.670 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.671 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.671 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.671 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.671 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.671 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.671 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.672 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.672 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.672 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.672 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.672 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.672 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.672 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.673 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.673 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.673 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.673 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.673 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.673 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.673 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.673 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.674 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.674 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.674 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.674 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.674 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.674 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.674 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.675 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.675 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.675 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.675 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.675 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.675 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.675 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.676 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.676 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.676 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.676 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.676 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.676 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.676 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.676 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.677 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.677 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.677 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.677 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.677 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.677 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.678 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.678 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.678 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.678 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.678 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.678 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.678 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.679 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.679 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.679 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.679 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.679 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.679 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.679 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.680 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.680 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.680 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.680 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.680 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.680 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.680 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.680 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.681 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.681 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.681 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.681 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.681 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.681 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.681 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.681 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.682 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.682 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.682 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.682 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.682 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.682 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.682 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.683 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.683 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.683 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.683 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.683 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.683 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.683 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.684 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.684 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.684 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.684 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.684 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.684 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.685 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.685 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.685 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.685 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.685 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.685 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.685 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.686 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.686 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.686 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.686 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.686 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.686 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.686 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.687 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.687 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.687 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.687 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.687 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.687 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.687 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.687 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.688 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.688 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.688 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.688 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.688 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.688 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.688 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.689 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.689 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.689 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.689 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.689 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.689 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.689 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.690 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.690 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.690 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.690 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.690 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.690 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.690 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.691 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.691 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.691 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.691 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.691 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.691 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.692 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.692 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.692 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.692 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.692 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.692 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.693 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.693 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.693 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.693 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.693 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.693 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.693 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.694 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.694 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.694 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.694 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.694 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.694 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.694 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.694 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.695 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.695 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.695 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.695 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.695 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.695 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.695 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.696 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.696 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.696 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.696 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.696 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.696 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.697 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.697 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.697 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.697 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.697 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.697 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.697 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.698 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.698 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.698 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.698 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.698 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.698 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.698 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.699 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.699 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.699 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.699 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.699 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.699 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.699 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.700 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.700 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.700 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.700 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.700 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.700 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.700 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.701 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.701 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.701 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.701 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.701 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.701 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.701 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.701 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.702 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.702 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.702 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.702 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.702 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.702 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.702 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.703 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.703 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.703 239942 DEBUG oslo_service.service [None req-90cf0a2d-ba58-447b-9f25-18d04e855d42 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.704 239942 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.721 239942 DEBUG nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.722 239942 DEBUG nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.722 239942 DEBUG nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.722 239942 DEBUG nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 30 23:36:02 np0005603435 systemd[1]: Starting libvirt QEMU daemon...
Jan 30 23:36:02 np0005603435 systemd[1]: Started libvirt QEMU daemon.
Jan 30 23:36:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.803 239942 DEBUG nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4c35ba3e50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.805 239942 DEBUG nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4c35ba3e50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.806 239942 INFO nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.827 239942 WARNING nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 30 23:36:02 np0005603435 nova_compute[239938]: 2026-01-31 04:36:02.827 239942 DEBUG nova.virt.libvirt.volume.mount [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.109 239942 INFO nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Libvirt host capabilities <capabilities>
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <host>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <uuid>e56e1981-badb-4c56-a12d-c458e4e6bca8</uuid>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <cpu>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <arch>x86_64</arch>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model>EPYC-Rome-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <vendor>AMD</vendor>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <microcode version='16777317'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <signature family='23' model='49' stepping='0'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='x2apic'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='tsc-deadline'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='osxsave'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='hypervisor'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='tsc_adjust'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='spec-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='stibp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='arch-capabilities'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='ssbd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='cmp_legacy'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='topoext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='virt-ssbd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='lbrv'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='tsc-scale'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='vmcb-clean'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='pause-filter'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='pfthreshold'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='svme-addr-chk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='rdctl-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='skip-l1dfl-vmentry'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='mds-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature name='pschange-mc-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <pages unit='KiB' size='4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <pages unit='KiB' size='2048'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <pages unit='KiB' size='1048576'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </cpu>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <power_management>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <suspend_mem/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </power_management>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <iommu support='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <migration_features>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <live/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <uri_transports>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <uri_transport>tcp</uri_transport>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <uri_transport>rdma</uri_transport>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </uri_transports>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </migration_features>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <topology>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <cells num='1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <cell id='0'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:          <memory unit='KiB'>7864292</memory>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:          <pages unit='KiB' size='4'>1966073</pages>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:          <pages unit='KiB' size='2048'>0</pages>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:          <distances>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:            <sibling id='0' value='10'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:          </distances>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:          <cpus num='8'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:          </cpus>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        </cell>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </cells>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </topology>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <cache>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </cache>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <secmodel>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model>selinux</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <doi>0</doi>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </secmodel>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <secmodel>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model>dac</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <doi>0</doi>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </secmodel>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </host>
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <guest>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <os_type>hvm</os_type>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <arch name='i686'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <wordsize>32</wordsize>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <domain type='qemu'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <domain type='kvm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </arch>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <features>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <pae/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <nonpae/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <acpi default='on' toggle='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <apic default='on' toggle='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <cpuselection/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <deviceboot/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <disksnapshot default='on' toggle='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <externalSnapshot/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </features>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </guest>
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <guest>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <os_type>hvm</os_type>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <arch name='x86_64'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <wordsize>64</wordsize>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <domain type='qemu'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <domain type='kvm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </arch>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <features>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <acpi default='on' toggle='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <apic default='on' toggle='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <cpuselection/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <deviceboot/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <disksnapshot default='on' toggle='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <externalSnapshot/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </features>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </guest>
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 
Jan 30 23:36:04 np0005603435 nova_compute[239938]: </capabilities>
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.117 239942 DEBUG nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.158 239942 DEBUG nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 30 23:36:04 np0005603435 nova_compute[239938]: <domainCapabilities>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <path>/usr/libexec/qemu-kvm</path>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <domain>kvm</domain>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <arch>i686</arch>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <vcpu max='4096'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <iothreads supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <os supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <enum name='firmware'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <loader supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>rom</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pflash</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='readonly'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>yes</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>no</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='secure'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>no</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </loader>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <cpu>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <mode name='host-passthrough' supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='hostPassthroughMigratable'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>on</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>off</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </mode>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <mode name='maximum' supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='maximumMigratable'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>on</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>off</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </mode>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <mode name='host-model' supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <vendor>AMD</vendor>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='x2apic'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='tsc-deadline'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='hypervisor'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='tsc_adjust'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='spec-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='stibp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='ssbd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='cmp_legacy'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='overflow-recov'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='succor'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='amd-ssbd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='virt-ssbd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='lbrv'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='tsc-scale'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='vmcb-clean'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='flushbyasid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='pause-filter'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='pfthreshold'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='svme-addr-chk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='disable' name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </mode>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <mode name='custom' supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-noTSX'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='ClearwaterForest'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ddpd-u'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='intel-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ipred-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='lam'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rrsba-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sha512'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sm3'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sm4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='ClearwaterForest-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ddpd-u'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='intel-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ipred-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='lam'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rrsba-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sha512'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sm3'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sm4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cooperlake'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cooperlake-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cooperlake-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Denverton'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mpx'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Denverton-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mpx'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Denverton-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Denverton-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Dhyana-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Genoa'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Genoa-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Genoa-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fs-gs-base-ns'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='perfmon-v2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Milan'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Milan-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Milan-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Milan-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Rome'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Rome-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Rome-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Rome-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Turin'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vp2intersect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fs-gs-base-ns'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibpb-brtype'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='perfmon-v2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbpb'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='srso-user-kernel-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Turin-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vp2intersect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fs-gs-base-ns'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibpb-brtype'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='perfmon-v2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbpb'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='srso-user-kernel-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-v5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='GraniteRapids'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='GraniteRapids-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='GraniteRapids-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-128'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-256'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-512'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='GraniteRapids-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-128'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-256'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-512'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-noTSX'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-noTSX'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v6'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v7'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='IvyBridge'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='IvyBridge-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='IvyBridge-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='IvyBridge-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='KnightsMill'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-4fmaps'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-4vnniw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512er'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512pf'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='KnightsMill-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-4fmaps'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-4vnniw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512er'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512pf'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Opteron_G4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fma4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xop'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Opteron_G4-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fma4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xop'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Opteron_G5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fma4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tbm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xop'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Opteron_G5-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fma4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tbm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xop'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SierraForest'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SierraForest-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SierraForest-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='intel-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ipred-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='lam'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rrsba-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SierraForest-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='intel-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ipred-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='lam'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rrsba-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='core-capability'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mpx'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='split-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='core-capability'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mpx'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='split-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='core-capability'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='split-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='core-capability'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='split-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='athlon'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnow'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnowext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='athlon-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnow'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnowext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='core2duo'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='core2duo-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='coreduo'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='coreduo-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='n270'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='n270-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='phenom'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnow'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnowext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='phenom-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnow'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnowext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </mode>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <memoryBacking supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <enum name='sourceType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>file</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>anonymous</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>memfd</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </memoryBacking>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <disk supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='diskDevice'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>disk</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>cdrom</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>floppy</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>lun</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='bus'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>fdc</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>scsi</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>usb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>sata</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio-transitional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio-non-transitional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <graphics supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vnc</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>egl-headless</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>dbus</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </graphics>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <video supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='modelType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vga</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>cirrus</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>none</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>bochs</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>ramfb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <hostdev supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='mode'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>subsystem</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='startupPolicy'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>default</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>mandatory</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>requisite</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>optional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='subsysType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>usb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pci</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>scsi</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='capsType'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='pciBackend'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </hostdev>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <rng supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio-transitional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio-non-transitional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendModel'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>random</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>egd</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>builtin</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <filesystem supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='driverType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>path</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>handle</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtiofs</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </filesystem>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <tpm supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>tpm-tis</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>tpm-crb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendModel'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>emulator</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>external</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendVersion'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>2.0</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </tpm>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <redirdev supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='bus'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>usb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </redirdev>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <channel supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pty</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>unix</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </channel>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <crypto supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>qemu</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendModel'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>builtin</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </crypto>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <interface supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>default</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>passt</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <panic supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>isa</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>hyperv</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </panic>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <console supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>null</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vc</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pty</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>dev</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>file</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pipe</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>stdio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>udp</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>tcp</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>unix</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>qemu-vdagent</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>dbus</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </console>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <gic supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <vmcoreinfo supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <genid supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <backingStoreInput supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <backup supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <async-teardown supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <s390-pv supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <ps2 supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <tdx supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <sev supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <sgx supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <hyperv supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='features'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>relaxed</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vapic</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>spinlocks</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vpindex</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>runtime</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>synic</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>stimer</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>reset</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vendor_id</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>frequencies</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>reenlightenment</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>tlbflush</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>ipi</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>avic</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>emsr_bitmap</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>xmm_input</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <defaults>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <spinlocks>4095</spinlocks>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <stimer_direct>on</stimer_direct>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <tlbflush_direct>on</tlbflush_direct>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <tlbflush_extended>on</tlbflush_extended>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </defaults>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </hyperv>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <launchSecurity supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:36:04 np0005603435 nova_compute[239938]: </domainCapabilities>
Jan 30 23:36:04 np0005603435 nova_compute[239938]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.167 239942 DEBUG nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 30 23:36:04 np0005603435 nova_compute[239938]: <domainCapabilities>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <path>/usr/libexec/qemu-kvm</path>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <domain>kvm</domain>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <arch>i686</arch>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <vcpu max='240'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <iothreads supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <os supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <enum name='firmware'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <loader supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>rom</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pflash</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='readonly'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>yes</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>no</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='secure'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>no</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </loader>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <cpu>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <mode name='host-passthrough' supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='hostPassthroughMigratable'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>on</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>off</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </mode>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <mode name='maximum' supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='maximumMigratable'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>on</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>off</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </mode>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <mode name='host-model' supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <vendor>AMD</vendor>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='x2apic'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='tsc-deadline'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='hypervisor'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='tsc_adjust'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='spec-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='stibp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='ssbd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='cmp_legacy'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='overflow-recov'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='succor'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='amd-ssbd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='virt-ssbd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='lbrv'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='tsc-scale'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='vmcb-clean'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='flushbyasid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='pause-filter'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='pfthreshold'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='svme-addr-chk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='disable' name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </mode>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <mode name='custom' supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-noTSX'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='ClearwaterForest'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ddpd-u'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='intel-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ipred-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='lam'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rrsba-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sha512'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sm3'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sm4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='ClearwaterForest-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ddpd-u'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='intel-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ipred-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='lam'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rrsba-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sha512'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sm3'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sm4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cooperlake'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cooperlake-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cooperlake-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Denverton'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mpx'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Denverton-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mpx'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Denverton-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Denverton-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Dhyana-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Genoa'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Genoa-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Genoa-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fs-gs-base-ns'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='perfmon-v2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Milan'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Milan-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Milan-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Milan-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Rome'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Rome-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Rome-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Rome-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Turin'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vp2intersect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fs-gs-base-ns'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibpb-brtype'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='perfmon-v2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbpb'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='srso-user-kernel-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Turin-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vp2intersect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fs-gs-base-ns'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibpb-brtype'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='perfmon-v2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbpb'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='srso-user-kernel-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-v5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='GraniteRapids'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='GraniteRapids-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='GraniteRapids-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-128'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-256'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-512'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='GraniteRapids-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-128'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-256'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-512'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-noTSX'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-noTSX'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v6'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v7'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='IvyBridge'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='IvyBridge-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='IvyBridge-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='IvyBridge-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='KnightsMill'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-4fmaps'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-4vnniw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512er'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512pf'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='KnightsMill-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-4fmaps'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-4vnniw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512er'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512pf'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Opteron_G4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fma4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xop'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Opteron_G4-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fma4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xop'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Opteron_G5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fma4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tbm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xop'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Opteron_G5-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fma4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tbm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xop'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SierraForest'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SierraForest-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SierraForest-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='intel-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ipred-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='lam'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rrsba-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SierraForest-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='intel-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ipred-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='lam'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rrsba-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='core-capability'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mpx'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='split-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='core-capability'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mpx'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='split-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='core-capability'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='split-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='core-capability'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='split-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='athlon'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnow'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnowext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='athlon-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnow'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnowext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='core2duo'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='core2duo-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='coreduo'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='coreduo-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='n270'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='n270-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='phenom'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnow'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnowext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='phenom-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnow'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnowext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </mode>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <memoryBacking supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <enum name='sourceType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>file</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>anonymous</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>memfd</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </memoryBacking>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <disk supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='diskDevice'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>disk</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>cdrom</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>floppy</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>lun</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='bus'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>ide</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>fdc</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>scsi</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>usb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>sata</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio-transitional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio-non-transitional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <graphics supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vnc</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>egl-headless</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>dbus</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </graphics>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <video supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='modelType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vga</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>cirrus</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>none</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>bochs</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>ramfb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <hostdev supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='mode'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>subsystem</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='startupPolicy'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>default</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>mandatory</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>requisite</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>optional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='subsysType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>usb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pci</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>scsi</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='capsType'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='pciBackend'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </hostdev>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <rng supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio-transitional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio-non-transitional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendModel'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>random</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>egd</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>builtin</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <filesystem supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='driverType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>path</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>handle</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtiofs</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </filesystem>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <tpm supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>tpm-tis</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>tpm-crb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendModel'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>emulator</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>external</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendVersion'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>2.0</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </tpm>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <redirdev supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='bus'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>usb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </redirdev>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <channel supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pty</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>unix</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </channel>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <crypto supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>qemu</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendModel'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>builtin</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </crypto>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <interface supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>default</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>passt</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <panic supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>isa</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>hyperv</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </panic>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <console supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>null</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vc</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pty</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>dev</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>file</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pipe</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>stdio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>udp</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>tcp</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>unix</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>qemu-vdagent</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>dbus</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </console>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <gic supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <vmcoreinfo supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <genid supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <backingStoreInput supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <backup supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <async-teardown supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <s390-pv supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <ps2 supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <tdx supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <sev supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <sgx supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <hyperv supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='features'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>relaxed</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vapic</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>spinlocks</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vpindex</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>runtime</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>synic</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>stimer</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>reset</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vendor_id</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>frequencies</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>reenlightenment</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>tlbflush</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>ipi</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>avic</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>emsr_bitmap</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>xmm_input</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <defaults>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <spinlocks>4095</spinlocks>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <stimer_direct>on</stimer_direct>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <tlbflush_direct>on</tlbflush_direct>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <tlbflush_extended>on</tlbflush_extended>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </defaults>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </hyperv>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <launchSecurity supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:36:04 np0005603435 nova_compute[239938]: </domainCapabilities>
Jan 30 23:36:04 np0005603435 nova_compute[239938]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.244 239942 DEBUG nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.251 239942 DEBUG nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 30 23:36:04 np0005603435 nova_compute[239938]: <domainCapabilities>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <path>/usr/libexec/qemu-kvm</path>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <domain>kvm</domain>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <arch>x86_64</arch>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <vcpu max='4096'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <iothreads supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <os supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <enum name='firmware'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>efi</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <loader supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>rom</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pflash</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='readonly'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>yes</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>no</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='secure'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>yes</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>no</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </loader>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <cpu>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <mode name='host-passthrough' supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='hostPassthroughMigratable'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>on</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>off</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </mode>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <mode name='maximum' supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='maximumMigratable'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>on</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>off</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </mode>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <mode name='host-model' supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <vendor>AMD</vendor>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='x2apic'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='tsc-deadline'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='hypervisor'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='tsc_adjust'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='spec-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='stibp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='ssbd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='cmp_legacy'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='overflow-recov'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='succor'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='amd-ssbd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='virt-ssbd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='lbrv'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='tsc-scale'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='vmcb-clean'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='flushbyasid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='pause-filter'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='pfthreshold'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='svme-addr-chk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='disable' name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </mode>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <mode name='custom' supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-noTSX'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='ClearwaterForest'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ddpd-u'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='intel-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ipred-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='lam'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rrsba-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sha512'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sm3'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sm4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='ClearwaterForest-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ddpd-u'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='intel-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ipred-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='lam'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rrsba-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sha512'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sm3'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sm4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cooperlake'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cooperlake-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cooperlake-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Denverton'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mpx'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Denverton-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mpx'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Denverton-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Denverton-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Dhyana-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Genoa'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Genoa-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Genoa-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fs-gs-base-ns'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='perfmon-v2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Milan'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Milan-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Milan-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Milan-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Rome'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Rome-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Rome-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Rome-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Turin'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vp2intersect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fs-gs-base-ns'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibpb-brtype'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='perfmon-v2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbpb'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='srso-user-kernel-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Turin-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vp2intersect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fs-gs-base-ns'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibpb-brtype'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='perfmon-v2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbpb'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='srso-user-kernel-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-v5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='GraniteRapids'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='GraniteRapids-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='GraniteRapids-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-128'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-256'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-512'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='GraniteRapids-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-128'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-256'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-512'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-noTSX'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-noTSX'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v6'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v7'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='IvyBridge'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='IvyBridge-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='IvyBridge-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='IvyBridge-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='KnightsMill'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-4fmaps'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-4vnniw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512er'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512pf'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='KnightsMill-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-4fmaps'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-4vnniw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512er'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512pf'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Opteron_G4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fma4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xop'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Opteron_G4-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fma4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xop'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Opteron_G5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fma4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tbm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xop'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Opteron_G5-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fma4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tbm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xop'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SierraForest'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SierraForest-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SierraForest-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='intel-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ipred-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='lam'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rrsba-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SierraForest-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='intel-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ipred-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='lam'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rrsba-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='core-capability'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mpx'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='split-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='core-capability'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mpx'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='split-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='core-capability'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='split-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='core-capability'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='split-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='athlon'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnow'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnowext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='athlon-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnow'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnowext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='core2duo'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='core2duo-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='coreduo'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='coreduo-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='n270'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='n270-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='phenom'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnow'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnowext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='phenom-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnow'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnowext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </mode>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <memoryBacking supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <enum name='sourceType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>file</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>anonymous</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>memfd</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </memoryBacking>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <disk supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='diskDevice'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>disk</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>cdrom</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>floppy</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>lun</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='bus'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>fdc</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>scsi</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>usb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>sata</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio-transitional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio-non-transitional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <graphics supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vnc</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>egl-headless</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>dbus</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </graphics>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <video supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='modelType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vga</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>cirrus</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>none</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>bochs</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>ramfb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <hostdev supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='mode'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>subsystem</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='startupPolicy'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>default</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>mandatory</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>requisite</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>optional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='subsysType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>usb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pci</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>scsi</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='capsType'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='pciBackend'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </hostdev>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <rng supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio-transitional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio-non-transitional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendModel'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>random</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>egd</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>builtin</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <filesystem supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='driverType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>path</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>handle</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtiofs</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </filesystem>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <tpm supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>tpm-tis</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>tpm-crb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendModel'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>emulator</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>external</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendVersion'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>2.0</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </tpm>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <redirdev supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='bus'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>usb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </redirdev>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <channel supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pty</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>unix</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </channel>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <crypto supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>qemu</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendModel'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>builtin</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </crypto>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <interface supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>default</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>passt</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <panic supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>isa</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>hyperv</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </panic>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <console supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>null</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vc</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pty</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>dev</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>file</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pipe</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>stdio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>udp</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>tcp</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>unix</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>qemu-vdagent</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>dbus</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </console>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <gic supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <vmcoreinfo supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <genid supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <backingStoreInput supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <backup supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <async-teardown supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <s390-pv supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <ps2 supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <tdx supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <sev supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <sgx supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <hyperv supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='features'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>relaxed</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vapic</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>spinlocks</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vpindex</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>runtime</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>synic</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>stimer</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>reset</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vendor_id</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>frequencies</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>reenlightenment</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>tlbflush</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>ipi</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>avic</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>emsr_bitmap</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>xmm_input</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <defaults>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <spinlocks>4095</spinlocks>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <stimer_direct>on</stimer_direct>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <tlbflush_direct>on</tlbflush_direct>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <tlbflush_extended>on</tlbflush_extended>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </defaults>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </hyperv>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <launchSecurity supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:36:04 np0005603435 nova_compute[239938]: </domainCapabilities>
Jan 30 23:36:04 np0005603435 nova_compute[239938]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.313 239942 DEBUG nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 30 23:36:04 np0005603435 nova_compute[239938]: <domainCapabilities>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <path>/usr/libexec/qemu-kvm</path>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <domain>kvm</domain>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <arch>x86_64</arch>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <vcpu max='240'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <iothreads supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <os supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <enum name='firmware'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <loader supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>rom</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pflash</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='readonly'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>yes</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>no</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='secure'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>no</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </loader>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <cpu>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <mode name='host-passthrough' supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='hostPassthroughMigratable'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>on</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>off</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </mode>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <mode name='maximum' supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='maximumMigratable'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>on</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>off</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </mode>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <mode name='host-model' supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <vendor>AMD</vendor>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='x2apic'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='tsc-deadline'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='hypervisor'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='tsc_adjust'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='spec-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='stibp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='ssbd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='cmp_legacy'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='overflow-recov'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='succor'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='amd-ssbd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='virt-ssbd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='lbrv'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='tsc-scale'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='vmcb-clean'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='flushbyasid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='pause-filter'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='pfthreshold'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='svme-addr-chk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <feature policy='disable' name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </mode>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <mode name='custom' supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-noTSX'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Broadwell-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cascadelake-Server-v5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='ClearwaterForest'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ddpd-u'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='intel-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ipred-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='lam'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rrsba-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sha512'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sm3'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sm4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='ClearwaterForest-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ddpd-u'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='intel-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ipred-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='lam'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rrsba-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sha512'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sm3'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sm4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cooperlake'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cooperlake-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Cooperlake-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Denverton'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mpx'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Denverton-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mpx'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Denverton-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Denverton-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Dhyana-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Genoa'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Genoa-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Genoa-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fs-gs-base-ns'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='perfmon-v2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Milan'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Milan-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Milan-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Milan-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Rome'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Rome-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Rome-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Rome-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Turin'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vp2intersect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fs-gs-base-ns'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibpb-brtype'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='perfmon-v2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbpb'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='srso-user-kernel-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-Turin-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amd-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='auto-ibrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vp2intersect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fs-gs-base-ns'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibpb-brtype'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='no-nested-data-bp'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='null-sel-clr-base'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='perfmon-v2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbpb'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='srso-user-kernel-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='stibp-always-on'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='EPYC-v5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='GraniteRapids'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='GraniteRapids-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='GraniteRapids-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-128'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-256'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-512'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='GraniteRapids-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-128'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-256'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx10-512'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='prefetchiti'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-noTSX'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Haswell-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-noTSX'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v6'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Icelake-Server-v7'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='IvyBridge'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='IvyBridge-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='IvyBridge-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='IvyBridge-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='KnightsMill'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-4fmaps'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-4vnniw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512er'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512pf'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='KnightsMill-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-4fmaps'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-4vnniw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512er'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512pf'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Opteron_G4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fma4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xop'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Opteron_G4-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fma4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xop'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Opteron_G5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fma4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tbm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xop'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Opteron_G5-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fma4'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tbm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xop'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SapphireRapids-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='amx-tile'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-bf16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-fp16'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512-vpopcntdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bitalg'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vbmi2'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrc'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fzrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='la57'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='taa-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='tsx-ldtrk'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SierraForest'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SierraForest-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SierraForest-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='intel-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ipred-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='lam'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rrsba-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='SierraForest-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ifma'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-ne-convert'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx-vnni-int8'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bhi-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='bus-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cmpccxadd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fbsdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='fsrs'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ibrs-all'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='intel-psfd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ipred-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='lam'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mcdt-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pbrsb-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='psdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rrsba-ctrl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='sbdr-ssdp-no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='serialize'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vaes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='vpclmulqdq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Client-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='hle'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='rtm'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Skylake-Server-v5'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512bw'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512cd'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512dq'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512f'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='avx512vl'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='invpcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pcid'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='pku'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='core-capability'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mpx'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='split-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='core-capability'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='mpx'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='split-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge-v2'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='core-capability'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='split-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge-v3'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='core-capability'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='split-lock-detect'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='Snowridge-v4'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='cldemote'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='erms'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='gfni'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdir64b'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='movdiri'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='xsaves'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='athlon'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnow'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnowext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='athlon-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnow'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnowext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='core2duo'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='core2duo-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='coreduo'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='coreduo-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='n270'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='n270-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='ss'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='phenom'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnow'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnowext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <blockers model='phenom-v1'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnow'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <feature name='3dnowext'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </blockers>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </mode>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <memoryBacking supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <enum name='sourceType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>file</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>anonymous</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <value>memfd</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </memoryBacking>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <disk supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='diskDevice'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>disk</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>cdrom</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>floppy</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>lun</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='bus'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>ide</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>fdc</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>scsi</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>usb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>sata</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio-transitional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio-non-transitional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <graphics supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vnc</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>egl-headless</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>dbus</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </graphics>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <video supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='modelType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vga</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>cirrus</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>none</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>bochs</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>ramfb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <hostdev supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='mode'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>subsystem</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='startupPolicy'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>default</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>mandatory</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>requisite</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>optional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='subsysType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>usb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pci</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>scsi</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='capsType'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='pciBackend'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </hostdev>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <rng supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio-transitional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtio-non-transitional</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendModel'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>random</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>egd</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>builtin</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <filesystem supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='driverType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>path</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>handle</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>virtiofs</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </filesystem>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <tpm supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>tpm-tis</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>tpm-crb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendModel'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>emulator</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>external</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendVersion'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>2.0</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </tpm>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <redirdev supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='bus'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>usb</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </redirdev>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <channel supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pty</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>unix</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </channel>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <crypto supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>qemu</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendModel'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>builtin</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </crypto>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <interface supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='backendType'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>default</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>passt</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <panic supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='model'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>isa</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>hyperv</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </panic>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <console supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='type'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>null</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vc</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pty</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>dev</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>file</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>pipe</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>stdio</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>udp</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>tcp</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>unix</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>qemu-vdagent</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>dbus</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </console>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <gic supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <vmcoreinfo supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <genid supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <backingStoreInput supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <backup supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <async-teardown supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <s390-pv supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <ps2 supported='yes'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <tdx supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <sev supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <sgx supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <hyperv supported='yes'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <enum name='features'>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>relaxed</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vapic</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>spinlocks</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vpindex</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>runtime</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>synic</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>stimer</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>reset</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>vendor_id</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>frequencies</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>reenlightenment</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>tlbflush</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>ipi</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>avic</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>emsr_bitmap</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <value>xmm_input</value>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </enum>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      <defaults>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <spinlocks>4095</spinlocks>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <stimer_direct>on</stimer_direct>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <tlbflush_direct>on</tlbflush_direct>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <tlbflush_extended>on</tlbflush_extended>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:      </defaults>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    </hyperv>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:    <launchSecurity supported='no'/>
Jan 30 23:36:04 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:36:04 np0005603435 nova_compute[239938]: </domainCapabilities>
Jan 30 23:36:04 np0005603435 nova_compute[239938]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.376 239942 DEBUG nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.376 239942 INFO nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Secure Boot support detected
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.378 239942 INFO nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.388 239942 DEBUG nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.429 239942 INFO nova.virt.node [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Determined node identity 4d0a6937-09c9-4e01-94bd-2812940db2bc from /var/lib/nova/compute_id
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.504 239942 WARNING nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Compute nodes ['4d0a6937-09c9-4e01-94bd-2812940db2bc'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.709 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.744 239942 WARNING nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.744 239942 DEBUG oslo_concurrency.lockutils [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.745 239942 DEBUG oslo_concurrency.lockutils [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.745 239942 DEBUG oslo_concurrency.lockutils [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.745 239942 DEBUG nova.compute.resource_tracker [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:36:04 np0005603435 nova_compute[239938]: 2026-01-31 04:36:04.746 239942 DEBUG oslo_concurrency.processutils [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:36:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:36:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1275063703' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:36:05 np0005603435 nova_compute[239938]: 2026-01-31 04:36:05.301 239942 DEBUG oslo_concurrency.processutils [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:36:05 np0005603435 systemd[1]: Starting libvirt nodedev daemon...
Jan 30 23:36:05 np0005603435 systemd[1]: Started libvirt nodedev daemon.
Jan 30 23:36:05 np0005603435 nova_compute[239938]: 2026-01-31 04:36:05.648 239942 WARNING nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:36:05 np0005603435 nova_compute[239938]: 2026-01-31 04:36:05.650 239942 DEBUG nova.compute.resource_tracker [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5047MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:36:05 np0005603435 nova_compute[239938]: 2026-01-31 04:36:05.650 239942 DEBUG oslo_concurrency.lockutils [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:36:05 np0005603435 nova_compute[239938]: 2026-01-31 04:36:05.650 239942 DEBUG oslo_concurrency.lockutils [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:36:05 np0005603435 nova_compute[239938]: 2026-01-31 04:36:05.665 239942 WARNING nova.compute.resource_tracker [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] No compute node record for compute-0.ctlplane.example.com:4d0a6937-09c9-4e01-94bd-2812940db2bc: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 4d0a6937-09c9-4e01-94bd-2812940db2bc could not be found.#033[00m
Jan 30 23:36:05 np0005603435 nova_compute[239938]: 2026-01-31 04:36:05.683 239942 INFO nova.compute.resource_tracker [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 4d0a6937-09c9-4e01-94bd-2812940db2bc#033[00m
Jan 30 23:36:05 np0005603435 nova_compute[239938]: 2026-01-31 04:36:05.803 239942 DEBUG nova.compute.resource_tracker [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:36:05 np0005603435 nova_compute[239938]: 2026-01-31 04:36:05.804 239942 DEBUG nova.compute.resource_tracker [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:36:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:36:06
Jan 30 23:36:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:36:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:36:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', '.mgr', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'volumes']
Jan 30 23:36:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:36:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:06 np0005603435 nova_compute[239938]: 2026-01-31 04:36:06.823 239942 INFO nova.scheduler.client.report [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [req-2add05d1-c434-4b82-b3ac-11ade161e5c4] Created resource provider record via placement API for resource provider with UUID 4d0a6937-09c9-4e01-94bd-2812940db2bc and name compute-0.ctlplane.example.com.#033[00m
Jan 30 23:36:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:36:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:36:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:36:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:36:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:36:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:36:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:36:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:36:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:36:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:36:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:36:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:36:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:36:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:36:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:36:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:36:07 np0005603435 nova_compute[239938]: 2026-01-31 04:36:07.244 239942 DEBUG oslo_concurrency.processutils [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:36:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:36:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3637800550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:36:07 np0005603435 nova_compute[239938]: 2026-01-31 04:36:07.792 239942 DEBUG oslo_concurrency.processutils [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:36:07 np0005603435 nova_compute[239938]: 2026-01-31 04:36:07.799 239942 DEBUG nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 30 23:36:07 np0005603435 nova_compute[239938]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Jan 30 23:36:07 np0005603435 nova_compute[239938]: 2026-01-31 04:36:07.800 239942 INFO nova.virt.libvirt.host [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] kernel doesn't support AMD SEV#033[00m
Jan 30 23:36:07 np0005603435 nova_compute[239938]: 2026-01-31 04:36:07.802 239942 DEBUG nova.compute.provider_tree [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Updating inventory in ProviderTree for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 30 23:36:07 np0005603435 nova_compute[239938]: 2026-01-31 04:36:07.803 239942 DEBUG nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:36:07 np0005603435 nova_compute[239938]: 2026-01-31 04:36:07.879 239942 DEBUG nova.scheduler.client.report [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Updated inventory for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Jan 30 23:36:07 np0005603435 nova_compute[239938]: 2026-01-31 04:36:07.880 239942 DEBUG nova.compute.provider_tree [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Updating resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 30 23:36:07 np0005603435 nova_compute[239938]: 2026-01-31 04:36:07.881 239942 DEBUG nova.compute.provider_tree [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Updating inventory in ProviderTree for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 30 23:36:07 np0005603435 nova_compute[239938]: 2026-01-31 04:36:07.993 239942 DEBUG nova.compute.provider_tree [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Updating resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 30 23:36:08 np0005603435 nova_compute[239938]: 2026-01-31 04:36:08.021 239942 DEBUG nova.compute.resource_tracker [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:36:08 np0005603435 nova_compute[239938]: 2026-01-31 04:36:08.021 239942 DEBUG oslo_concurrency.lockutils [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.371s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:36:08 np0005603435 nova_compute[239938]: 2026-01-31 04:36:08.021 239942 DEBUG nova.service [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Jan 30 23:36:08 np0005603435 nova_compute[239938]: 2026-01-31 04:36:08.118 239942 DEBUG nova.service [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Jan 30 23:36:08 np0005603435 nova_compute[239938]: 2026-01-31 04:36:08.119 239942 DEBUG nova.servicegroup.drivers.db [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Jan 30 23:36:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:36:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:36:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:36:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:36:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:36:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:23 np0005603435 nova_compute[239938]: 2026-01-31 04:36:23.120 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:36:23 np0005603435 nova_compute[239938]: 2026-01-31 04:36:23.204 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:36:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:36:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:36:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3631831437' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:36:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:36:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3631831437' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:36:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:36:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3561009215' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:36:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:36:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3561009215' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:36:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:36:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1712845199' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:36:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:36:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1712845199' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:36:26 np0005603435 podman[240366]: 2026-01-31 04:36:26.143087124 +0000 UTC m=+0.111258641 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:36:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:36:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:36:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:36:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:36:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:36:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:36:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:36:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:36:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:36:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:36:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:36:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:36:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:36:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:36:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:36:29 np0005603435 podman[240535]: 2026-01-31 04:36:29.332566406 +0000 UTC m=+0.060740671 container create b372c94572cd391c1ffbcd6fe26a410a11a5cdd9b613496b9575e82089f97580 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 30 23:36:29 np0005603435 systemd[1]: Started libpod-conmon-b372c94572cd391c1ffbcd6fe26a410a11a5cdd9b613496b9575e82089f97580.scope.
Jan 30 23:36:29 np0005603435 podman[240535]: 2026-01-31 04:36:29.307500231 +0000 UTC m=+0.035674546 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:36:29 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:36:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:36:29 np0005603435 podman[240535]: 2026-01-31 04:36:29.430670253 +0000 UTC m=+0.158844568 container init b372c94572cd391c1ffbcd6fe26a410a11a5cdd9b613496b9575e82089f97580 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:36:29 np0005603435 podman[240535]: 2026-01-31 04:36:29.441955159 +0000 UTC m=+0.170129424 container start b372c94572cd391c1ffbcd6fe26a410a11a5cdd9b613496b9575e82089f97580 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 30 23:36:29 np0005603435 podman[240535]: 2026-01-31 04:36:29.446156636 +0000 UTC m=+0.174330961 container attach b372c94572cd391c1ffbcd6fe26a410a11a5cdd9b613496b9575e82089f97580 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 30 23:36:29 np0005603435 frosty_edison[240551]: 167 167
Jan 30 23:36:29 np0005603435 systemd[1]: libpod-b372c94572cd391c1ffbcd6fe26a410a11a5cdd9b613496b9575e82089f97580.scope: Deactivated successfully.
Jan 30 23:36:29 np0005603435 podman[240535]: 2026-01-31 04:36:29.451887231 +0000 UTC m=+0.180061506 container died b372c94572cd391c1ffbcd6fe26a410a11a5cdd9b613496b9575e82089f97580 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 30 23:36:29 np0005603435 systemd[1]: var-lib-containers-storage-overlay-9b5741d02be2f9ff92bad7e3977e8418b3a92a903df4ba046828bfd661538dab-merged.mount: Deactivated successfully.
Jan 30 23:36:29 np0005603435 podman[240535]: 2026-01-31 04:36:29.522598633 +0000 UTC m=+0.250772898 container remove b372c94572cd391c1ffbcd6fe26a410a11a5cdd9b613496b9575e82089f97580 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 30 23:36:29 np0005603435 systemd[1]: libpod-conmon-b372c94572cd391c1ffbcd6fe26a410a11a5cdd9b613496b9575e82089f97580.scope: Deactivated successfully.
Jan 30 23:36:29 np0005603435 podman[240577]: 2026-01-31 04:36:29.705562511 +0000 UTC m=+0.063210193 container create 02c90408f1db4e243d03d64e21c05e1ed056d907cfa5de23f9354b81acf9aee6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:36:29 np0005603435 systemd[1]: Started libpod-conmon-02c90408f1db4e243d03d64e21c05e1ed056d907cfa5de23f9354b81acf9aee6.scope.
Jan 30 23:36:29 np0005603435 podman[240577]: 2026-01-31 04:36:29.679179863 +0000 UTC m=+0.036827585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:36:29 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:36:29 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4833fba02e49059799236f9df3e3b7629725559910712e0dba6666e417eeee95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:36:29 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4833fba02e49059799236f9df3e3b7629725559910712e0dba6666e417eeee95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:36:29 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4833fba02e49059799236f9df3e3b7629725559910712e0dba6666e417eeee95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:36:29 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4833fba02e49059799236f9df3e3b7629725559910712e0dba6666e417eeee95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:36:29 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4833fba02e49059799236f9df3e3b7629725559910712e0dba6666e417eeee95/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:36:29 np0005603435 podman[240577]: 2026-01-31 04:36:29.814136344 +0000 UTC m=+0.171784066 container init 02c90408f1db4e243d03d64e21c05e1ed056d907cfa5de23f9354b81acf9aee6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:36:29 np0005603435 podman[240577]: 2026-01-31 04:36:29.830065608 +0000 UTC m=+0.187713280 container start 02c90408f1db4e243d03d64e21c05e1ed056d907cfa5de23f9354b81acf9aee6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 30 23:36:29 np0005603435 podman[240577]: 2026-01-31 04:36:29.83409065 +0000 UTC m=+0.191738332 container attach 02c90408f1db4e243d03d64e21c05e1ed056d907cfa5de23f9354b81acf9aee6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 30 23:36:30 np0005603435 musing_engelbart[240593]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:36:30 np0005603435 musing_engelbart[240593]: --> All data devices are unavailable
Jan 30 23:36:30 np0005603435 systemd[1]: libpod-02c90408f1db4e243d03d64e21c05e1ed056d907cfa5de23f9354b81acf9aee6.scope: Deactivated successfully.
Jan 30 23:36:30 np0005603435 podman[240577]: 2026-01-31 04:36:30.371208985 +0000 UTC m=+0.728856637 container died 02c90408f1db4e243d03d64e21c05e1ed056d907cfa5de23f9354b81acf9aee6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Jan 30 23:36:30 np0005603435 systemd[1]: var-lib-containers-storage-overlay-4833fba02e49059799236f9df3e3b7629725559910712e0dba6666e417eeee95-merged.mount: Deactivated successfully.
Jan 30 23:36:30 np0005603435 podman[240577]: 2026-01-31 04:36:30.429493443 +0000 UTC m=+0.787141095 container remove 02c90408f1db4e243d03d64e21c05e1ed056d907cfa5de23f9354b81acf9aee6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:36:30 np0005603435 systemd[1]: libpod-conmon-02c90408f1db4e243d03d64e21c05e1ed056d907cfa5de23f9354b81acf9aee6.scope: Deactivated successfully.
Jan 30 23:36:30 np0005603435 podman[240614]: 2026-01-31 04:36:30.503184871 +0000 UTC m=+0.099638367 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 30 23:36:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:30 np0005603435 podman[240705]: 2026-01-31 04:36:30.871333694 +0000 UTC m=+0.052315957 container create 785276ada456c08802364559a3c031c7ec040855de9de7828ef7310699d0c440 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 30 23:36:30 np0005603435 systemd[1]: Started libpod-conmon-785276ada456c08802364559a3c031c7ec040855de9de7828ef7310699d0c440.scope.
Jan 30 23:36:30 np0005603435 podman[240705]: 2026-01-31 04:36:30.841743754 +0000 UTC m=+0.022726037 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:36:30 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:36:30 np0005603435 podman[240705]: 2026-01-31 04:36:30.956243586 +0000 UTC m=+0.137225849 container init 785276ada456c08802364559a3c031c7ec040855de9de7828ef7310699d0c440 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:36:30 np0005603435 podman[240705]: 2026-01-31 04:36:30.960274968 +0000 UTC m=+0.141257201 container start 785276ada456c08802364559a3c031c7ec040855de9de7828ef7310699d0c440 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_thompson, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 30 23:36:30 np0005603435 podman[240705]: 2026-01-31 04:36:30.96311881 +0000 UTC m=+0.144101043 container attach 785276ada456c08802364559a3c031c7ec040855de9de7828ef7310699d0c440 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_thompson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:36:30 np0005603435 systemd[1]: libpod-785276ada456c08802364559a3c031c7ec040855de9de7828ef7310699d0c440.scope: Deactivated successfully.
Jan 30 23:36:30 np0005603435 distracted_thompson[240721]: 167 167
Jan 30 23:36:30 np0005603435 conmon[240721]: conmon 785276ada456c0880236 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-785276ada456c08802364559a3c031c7ec040855de9de7828ef7310699d0c440.scope/container/memory.events
Jan 30 23:36:30 np0005603435 podman[240705]: 2026-01-31 04:36:30.965158182 +0000 UTC m=+0.146140405 container died 785276ada456c08802364559a3c031c7ec040855de9de7828ef7310699d0c440 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 30 23:36:30 np0005603435 systemd[1]: var-lib-containers-storage-overlay-10550fd9338c3681fbe1f9adaa20eca681daaa09784fb57b15b77cbbadd6b2ac-merged.mount: Deactivated successfully.
Jan 30 23:36:31 np0005603435 podman[240705]: 2026-01-31 04:36:30.999606165 +0000 UTC m=+0.180588398 container remove 785276ada456c08802364559a3c031c7ec040855de9de7828ef7310699d0c440 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_thompson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 30 23:36:31 np0005603435 systemd[1]: libpod-conmon-785276ada456c08802364559a3c031c7ec040855de9de7828ef7310699d0c440.scope: Deactivated successfully.
Jan 30 23:36:31 np0005603435 podman[240744]: 2026-01-31 04:36:31.176821997 +0000 UTC m=+0.060070663 container create 24ade584479da706a51e9ebb742f6e9d220b69b366c32ee9bd6677875150136a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_knuth, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:36:31 np0005603435 systemd[1]: Started libpod-conmon-24ade584479da706a51e9ebb742f6e9d220b69b366c32ee9bd6677875150136a.scope.
Jan 30 23:36:31 np0005603435 podman[240744]: 2026-01-31 04:36:31.151050004 +0000 UTC m=+0.034298670 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:36:31 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:36:31 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e45dd021a96eceb308466353594297bcfc6d2848fdf21715e8b2f7258a0824e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:36:31 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e45dd021a96eceb308466353594297bcfc6d2848fdf21715e8b2f7258a0824e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:36:31 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e45dd021a96eceb308466353594297bcfc6d2848fdf21715e8b2f7258a0824e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:36:31 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e45dd021a96eceb308466353594297bcfc6d2848fdf21715e8b2f7258a0824e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:36:31 np0005603435 podman[240744]: 2026-01-31 04:36:31.283795659 +0000 UTC m=+0.167044315 container init 24ade584479da706a51e9ebb742f6e9d220b69b366c32ee9bd6677875150136a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_knuth, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:36:31 np0005603435 podman[240744]: 2026-01-31 04:36:31.297482626 +0000 UTC m=+0.180731282 container start 24ade584479da706a51e9ebb742f6e9d220b69b366c32ee9bd6677875150136a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_knuth, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:36:31 np0005603435 podman[240744]: 2026-01-31 04:36:31.302409561 +0000 UTC m=+0.185658217 container attach 24ade584479da706a51e9ebb742f6e9d220b69b366c32ee9bd6677875150136a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]: {
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:    "0": [
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:        {
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "devices": [
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "/dev/loop3"
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            ],
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "lv_name": "ceph_lv0",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "lv_size": "21470642176",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "name": "ceph_lv0",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "tags": {
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.cluster_name": "ceph",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.crush_device_class": "",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.encrypted": "0",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.objectstore": "bluestore",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.osd_id": "0",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.type": "block",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.vdo": "0",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.with_tpm": "0"
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            },
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "type": "block",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "vg_name": "ceph_vg0"
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:        }
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:    ],
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:    "1": [
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:        {
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "devices": [
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "/dev/loop4"
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            ],
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "lv_name": "ceph_lv1",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "lv_size": "21470642176",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "name": "ceph_lv1",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "tags": {
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.cluster_name": "ceph",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.crush_device_class": "",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.encrypted": "0",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.objectstore": "bluestore",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.osd_id": "1",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.type": "block",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.vdo": "0",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.with_tpm": "0"
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            },
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "type": "block",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "vg_name": "ceph_vg1"
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:        }
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:    ],
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:    "2": [
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:        {
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "devices": [
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "/dev/loop5"
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            ],
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "lv_name": "ceph_lv2",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "lv_size": "21470642176",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "name": "ceph_lv2",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "tags": {
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.cluster_name": "ceph",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.crush_device_class": "",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.encrypted": "0",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.objectstore": "bluestore",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.osd_id": "2",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.type": "block",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.vdo": "0",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:                "ceph.with_tpm": "0"
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            },
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "type": "block",
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:            "vg_name": "ceph_vg2"
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:        }
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]:    ]
Jan 30 23:36:31 np0005603435 dreamy_knuth[240761]: }
Jan 30 23:36:31 np0005603435 systemd[1]: libpod-24ade584479da706a51e9ebb742f6e9d220b69b366c32ee9bd6677875150136a.scope: Deactivated successfully.
Jan 30 23:36:31 np0005603435 podman[240744]: 2026-01-31 04:36:31.623991042 +0000 UTC m=+0.507239688 container died 24ade584479da706a51e9ebb742f6e9d220b69b366c32ee9bd6677875150136a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:36:31 np0005603435 systemd[1]: var-lib-containers-storage-overlay-3e45dd021a96eceb308466353594297bcfc6d2848fdf21715e8b2f7258a0824e-merged.mount: Deactivated successfully.
Jan 30 23:36:31 np0005603435 podman[240744]: 2026-01-31 04:36:31.67517586 +0000 UTC m=+0.558424516 container remove 24ade584479da706a51e9ebb742f6e9d220b69b366c32ee9bd6677875150136a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:36:31 np0005603435 systemd[1]: libpod-conmon-24ade584479da706a51e9ebb742f6e9d220b69b366c32ee9bd6677875150136a.scope: Deactivated successfully.
Jan 30 23:36:32 np0005603435 podman[240844]: 2026-01-31 04:36:32.159907968 +0000 UTC m=+0.059984922 container create 352c621d8b63fcbabb725f669068f10d2f916957004e8175778218491e0115ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 30 23:36:32 np0005603435 systemd[1]: Started libpod-conmon-352c621d8b63fcbabb725f669068f10d2f916957004e8175778218491e0115ed.scope.
Jan 30 23:36:32 np0005603435 podman[240844]: 2026-01-31 04:36:32.136030572 +0000 UTC m=+0.036107536 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:36:32 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:36:32 np0005603435 podman[240844]: 2026-01-31 04:36:32.250620997 +0000 UTC m=+0.150697911 container init 352c621d8b63fcbabb725f669068f10d2f916957004e8175778218491e0115ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Jan 30 23:36:32 np0005603435 podman[240844]: 2026-01-31 04:36:32.256818904 +0000 UTC m=+0.156895818 container start 352c621d8b63fcbabb725f669068f10d2f916957004e8175778218491e0115ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lamarr, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:36:32 np0005603435 podman[240844]: 2026-01-31 04:36:32.259660076 +0000 UTC m=+0.159736990 container attach 352c621d8b63fcbabb725f669068f10d2f916957004e8175778218491e0115ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lamarr, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:36:32 np0005603435 hardcore_lamarr[240860]: 167 167
Jan 30 23:36:32 np0005603435 systemd[1]: libpod-352c621d8b63fcbabb725f669068f10d2f916957004e8175778218491e0115ed.scope: Deactivated successfully.
Jan 30 23:36:32 np0005603435 podman[240844]: 2026-01-31 04:36:32.262757345 +0000 UTC m=+0.162834299 container died 352c621d8b63fcbabb725f669068f10d2f916957004e8175778218491e0115ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lamarr, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:36:32 np0005603435 systemd[1]: var-lib-containers-storage-overlay-57a63c364b4aa40878f743d3b3ab31b0a9db57ea0e09a81e5fc98a9fa5cd04f5-merged.mount: Deactivated successfully.
Jan 30 23:36:32 np0005603435 podman[240844]: 2026-01-31 04:36:32.29926916 +0000 UTC m=+0.199346074 container remove 352c621d8b63fcbabb725f669068f10d2f916957004e8175778218491e0115ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_lamarr, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 30 23:36:32 np0005603435 systemd[1]: libpod-conmon-352c621d8b63fcbabb725f669068f10d2f916957004e8175778218491e0115ed.scope: Deactivated successfully.
Jan 30 23:36:32 np0005603435 podman[240885]: 2026-01-31 04:36:32.481598372 +0000 UTC m=+0.058724399 container create 72e3228a05a8f9e88de9acf87c0b73dfd09783ce60a821e7dd8ae9202715fb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wright, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:36:32 np0005603435 systemd[1]: Started libpod-conmon-72e3228a05a8f9e88de9acf87c0b73dfd09783ce60a821e7dd8ae9202715fb6a.scope.
Jan 30 23:36:32 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:36:32 np0005603435 podman[240885]: 2026-01-31 04:36:32.456337402 +0000 UTC m=+0.033463479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:36:32 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acefc5a077ac7531782cb0810bafac11546ccec51fcc10ea8ef5d4da463b052d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:36:32 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acefc5a077ac7531782cb0810bafac11546ccec51fcc10ea8ef5d4da463b052d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:36:32 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acefc5a077ac7531782cb0810bafac11546ccec51fcc10ea8ef5d4da463b052d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:36:32 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acefc5a077ac7531782cb0810bafac11546ccec51fcc10ea8ef5d4da463b052d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:36:32 np0005603435 podman[240885]: 2026-01-31 04:36:32.574677932 +0000 UTC m=+0.151803959 container init 72e3228a05a8f9e88de9acf87c0b73dfd09783ce60a821e7dd8ae9202715fb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 30 23:36:32 np0005603435 podman[240885]: 2026-01-31 04:36:32.588597855 +0000 UTC m=+0.165723882 container start 72e3228a05a8f9e88de9acf87c0b73dfd09783ce60a821e7dd8ae9202715fb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 30 23:36:32 np0005603435 podman[240885]: 2026-01-31 04:36:32.592032482 +0000 UTC m=+0.169158559 container attach 72e3228a05a8f9e88de9acf87c0b73dfd09783ce60a821e7dd8ae9202715fb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 30 23:36:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:33 np0005603435 lvm[240981]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:36:33 np0005603435 lvm[240981]: VG ceph_vg1 finished
Jan 30 23:36:33 np0005603435 lvm[240980]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:36:33 np0005603435 lvm[240980]: VG ceph_vg0 finished
Jan 30 23:36:33 np0005603435 lvm[240983]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:36:33 np0005603435 lvm[240983]: VG ceph_vg2 finished
Jan 30 23:36:33 np0005603435 suspicious_wright[240902]: {}
Jan 30 23:36:33 np0005603435 systemd[1]: libpod-72e3228a05a8f9e88de9acf87c0b73dfd09783ce60a821e7dd8ae9202715fb6a.scope: Deactivated successfully.
Jan 30 23:36:33 np0005603435 systemd[1]: libpod-72e3228a05a8f9e88de9acf87c0b73dfd09783ce60a821e7dd8ae9202715fb6a.scope: Consumed 1.041s CPU time.
Jan 30 23:36:33 np0005603435 podman[240885]: 2026-01-31 04:36:33.320644462 +0000 UTC m=+0.897770489 container died 72e3228a05a8f9e88de9acf87c0b73dfd09783ce60a821e7dd8ae9202715fb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:36:33 np0005603435 systemd[1]: var-lib-containers-storage-overlay-acefc5a077ac7531782cb0810bafac11546ccec51fcc10ea8ef5d4da463b052d-merged.mount: Deactivated successfully.
Jan 30 23:36:33 np0005603435 podman[240885]: 2026-01-31 04:36:33.368432933 +0000 UTC m=+0.945558960 container remove 72e3228a05a8f9e88de9acf87c0b73dfd09783ce60a821e7dd8ae9202715fb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:36:33 np0005603435 systemd[1]: libpod-conmon-72e3228a05a8f9e88de9acf87c0b73dfd09783ce60a821e7dd8ae9202715fb6a.scope: Deactivated successfully.
Jan 30 23:36:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:36:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:36:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:36:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:36:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:36:34 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:36:34 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:36:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:36:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:36:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:36:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:36:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:36:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:36:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:36:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:36:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:36:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:36:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:36:55.901 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:36:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:36:55.902 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:36:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:36:55.903 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:36:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:57 np0005603435 podman[241021]: 2026-01-31 04:36:57.166593895 +0000 UTC m=+0.129416993 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 30 23:36:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:36:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:37:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:01 np0005603435 podman[241048]: 2026-01-31 04:37:01.097214923 +0000 UTC m=+0.065754878 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:37:01 np0005603435 nova_compute[239938]: 2026-01-31 04:37:01.890 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:37:01 np0005603435 nova_compute[239938]: 2026-01-31 04:37:01.891 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:37:01 np0005603435 nova_compute[239938]: 2026-01-31 04:37:01.891 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:37:01 np0005603435 nova_compute[239938]: 2026-01-31 04:37:01.891 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:37:01 np0005603435 nova_compute[239938]: 2026-01-31 04:37:01.990 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 30 23:37:01 np0005603435 nova_compute[239938]: 2026-01-31 04:37:01.990 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:37:01 np0005603435 nova_compute[239938]: 2026-01-31 04:37:01.991 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:37:01 np0005603435 nova_compute[239938]: 2026-01-31 04:37:01.992 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:37:01 np0005603435 nova_compute[239938]: 2026-01-31 04:37:01.992 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:37:01 np0005603435 nova_compute[239938]: 2026-01-31 04:37:01.992 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:37:01 np0005603435 nova_compute[239938]: 2026-01-31 04:37:01.993 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:37:01 np0005603435 nova_compute[239938]: 2026-01-31 04:37:01.993 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:37:01 np0005603435 nova_compute[239938]: 2026-01-31 04:37:01.994 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:37:02 np0005603435 nova_compute[239938]: 2026-01-31 04:37:02.058 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:37:02 np0005603435 nova_compute[239938]: 2026-01-31 04:37:02.058 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:37:02 np0005603435 nova_compute[239938]: 2026-01-31 04:37:02.059 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:37:02 np0005603435 nova_compute[239938]: 2026-01-31 04:37:02.059 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:37:02 np0005603435 nova_compute[239938]: 2026-01-31 04:37:02.060 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:37:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:37:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2755552924' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:37:02 np0005603435 nova_compute[239938]: 2026-01-31 04:37:02.610 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:37:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:02 np0005603435 nova_compute[239938]: 2026-01-31 04:37:02.828 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:37:02 np0005603435 nova_compute[239938]: 2026-01-31 04:37:02.830 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5129MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:37:02 np0005603435 nova_compute[239938]: 2026-01-31 04:37:02.831 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:37:02 np0005603435 nova_compute[239938]: 2026-01-31 04:37:02.831 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:37:02 np0005603435 nova_compute[239938]: 2026-01-31 04:37:02.931 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:37:02 np0005603435 nova_compute[239938]: 2026-01-31 04:37:02.932 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:37:02 np0005603435 nova_compute[239938]: 2026-01-31 04:37:02.949 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:37:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:37:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2527632684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:37:03 np0005603435 nova_compute[239938]: 2026-01-31 04:37:03.499 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:37:03 np0005603435 nova_compute[239938]: 2026-01-31 04:37:03.504 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:37:03 np0005603435 nova_compute[239938]: 2026-01-31 04:37:03.519 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:37:03 np0005603435 nova_compute[239938]: 2026-01-31 04:37:03.542 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:37:03 np0005603435 nova_compute[239938]: 2026-01-31 04:37:03.543 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:37:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:37:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 30 23:37:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1884147971' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 30 23:37:04 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14340 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 30 23:37:04 np0005603435 ceph-mgr[75599]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 30 23:37:04 np0005603435 ceph-mgr[75599]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 30 23:37:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:37:06
Jan 30 23:37:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:37:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:37:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'default.rgw.log', 'vms', 'backups', '.mgr']
Jan 30 23:37:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:37:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:37:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:37:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:37:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:37:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:37:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:37:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:37:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:37:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:37:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:37:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:37:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:37:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:37:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:37:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:37:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:37:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:37:09.443514) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834229443562, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1332, "num_deletes": 507, "total_data_size": 1627275, "memory_usage": 1655040, "flush_reason": "Manual Compaction"}
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834229455540, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1601072, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13701, "largest_seqno": 15032, "table_properties": {"data_size": 1595193, "index_size": 2699, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 14893, "raw_average_key_size": 18, "raw_value_size": 1581479, "raw_average_value_size": 1916, "num_data_blocks": 124, "num_entries": 825, "num_filter_entries": 825, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769834124, "oldest_key_time": 1769834124, "file_creation_time": 1769834229, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 12087 microseconds, and 5405 cpu microseconds.
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:37:09.455599) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1601072 bytes OK
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:37:09.455630) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:37:09.457541) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:37:09.457563) EVENT_LOG_v1 {"time_micros": 1769834229457556, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:37:09.457588) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1620204, prev total WAL file size 1620204, number of live WAL files 2.
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:37:09.458322) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1563KB)], [32(7670KB)]
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834229458362, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9455646, "oldest_snapshot_seqno": -1}
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3889 keys, 7522205 bytes, temperature: kUnknown
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834229509555, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7522205, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7494314, "index_size": 17093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9733, "raw_key_size": 95171, "raw_average_key_size": 24, "raw_value_size": 7422044, "raw_average_value_size": 1908, "num_data_blocks": 724, "num_entries": 3889, "num_filter_entries": 3889, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769834229, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:37:09.509875) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7522205 bytes
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:37:09.511860) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.3 rd, 146.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.5 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(10.6) write-amplify(4.7) OK, records in: 4916, records dropped: 1027 output_compression: NoCompression
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:37:09.511899) EVENT_LOG_v1 {"time_micros": 1769834229511877, "job": 14, "event": "compaction_finished", "compaction_time_micros": 51294, "compaction_time_cpu_micros": 18755, "output_level": 6, "num_output_files": 1, "total_output_size": 7522205, "num_input_records": 4916, "num_output_records": 3889, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834229512287, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834229513646, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:37:09.458212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:37:09.513757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:37:09.513764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:37:09.513768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:37:09.513771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:37:09 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:37:09.513775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:37:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:37:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:37:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:37:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:37:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 30 23:37:20 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2386350219' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 30 23:37:20 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 30 23:37:20 np0005603435 ceph-mgr[75599]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 30 23:37:20 np0005603435 ceph-mgr[75599]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 30 23:37:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:37:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:28 np0005603435 podman[241111]: 2026-01-31 04:37:28.137314479 +0000 UTC m=+0.105416429 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 30 23:37:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:37:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:32 np0005603435 podman[241137]: 2026-01-31 04:37:32.09955316 +0000 UTC m=+0.066595029 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:37:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:37:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/112206345' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:37:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:37:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/112206345' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:37:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:37:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:37:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:37:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:37:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:37:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:37:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:37:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:37:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:37:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:37:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:37:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:37:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:37:34 np0005603435 podman[241300]: 2026-01-31 04:37:34.739217128 +0000 UTC m=+0.089628709 container create 11af1a4359a7679a38dc585d3dce306da447590fae3c67c8fd7f5dd4a3debfce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:37:34 np0005603435 podman[241300]: 2026-01-31 04:37:34.674871966 +0000 UTC m=+0.025283597 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:37:34 np0005603435 systemd[1]: Started libpod-conmon-11af1a4359a7679a38dc585d3dce306da447590fae3c67c8fd7f5dd4a3debfce.scope.
Jan 30 23:37:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:34 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:37:34 np0005603435 podman[241300]: 2026-01-31 04:37:34.889769003 +0000 UTC m=+0.240180634 container init 11af1a4359a7679a38dc585d3dce306da447590fae3c67c8fd7f5dd4a3debfce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 30 23:37:34 np0005603435 podman[241300]: 2026-01-31 04:37:34.898066258 +0000 UTC m=+0.248477829 container start 11af1a4359a7679a38dc585d3dce306da447590fae3c67c8fd7f5dd4a3debfce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:37:34 np0005603435 stoic_wilson[241316]: 167 167
Jan 30 23:37:34 np0005603435 systemd[1]: libpod-11af1a4359a7679a38dc585d3dce306da447590fae3c67c8fd7f5dd4a3debfce.scope: Deactivated successfully.
Jan 30 23:37:34 np0005603435 podman[241300]: 2026-01-31 04:37:34.941247066 +0000 UTC m=+0.291658697 container attach 11af1a4359a7679a38dc585d3dce306da447590fae3c67c8fd7f5dd4a3debfce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:37:34 np0005603435 podman[241300]: 2026-01-31 04:37:34.941754039 +0000 UTC m=+0.292165610 container died 11af1a4359a7679a38dc585d3dce306da447590fae3c67c8fd7f5dd4a3debfce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wilson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 30 23:37:35 np0005603435 systemd[1]: var-lib-containers-storage-overlay-280f3944c204a0e34648e4f38cda43bb7172ebc67ca86d369ca33c14480d925c-merged.mount: Deactivated successfully.
Jan 30 23:37:35 np0005603435 podman[241300]: 2026-01-31 04:37:35.103355067 +0000 UTC m=+0.453766638 container remove 11af1a4359a7679a38dc585d3dce306da447590fae3c67c8fd7f5dd4a3debfce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wilson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:37:35 np0005603435 systemd[1]: libpod-conmon-11af1a4359a7679a38dc585d3dce306da447590fae3c67c8fd7f5dd4a3debfce.scope: Deactivated successfully.
Jan 30 23:37:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:37:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:37:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:37:35 np0005603435 podman[241342]: 2026-01-31 04:37:35.294954858 +0000 UTC m=+0.059174655 container create 4f38b67ef2ecb025d63e7a99d78dcdf9da25f2c341767e7ad4c3cd443d65f484 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:37:35 np0005603435 systemd[1]: Started libpod-conmon-4f38b67ef2ecb025d63e7a99d78dcdf9da25f2c341767e7ad4c3cd443d65f484.scope.
Jan 30 23:37:35 np0005603435 podman[241342]: 2026-01-31 04:37:35.267108129 +0000 UTC m=+0.031327976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:37:35 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:37:35 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e664de5dc7e7200a4d15a4ea9eb614e894ec3ef7650305e8878ccd5102dfb5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:37:35 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e664de5dc7e7200a4d15a4ea9eb614e894ec3ef7650305e8878ccd5102dfb5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:37:35 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e664de5dc7e7200a4d15a4ea9eb614e894ec3ef7650305e8878ccd5102dfb5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:37:35 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e664de5dc7e7200a4d15a4ea9eb614e894ec3ef7650305e8878ccd5102dfb5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:37:35 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e664de5dc7e7200a4d15a4ea9eb614e894ec3ef7650305e8878ccd5102dfb5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:37:35 np0005603435 podman[241342]: 2026-01-31 04:37:35.436433738 +0000 UTC m=+0.200653595 container init 4f38b67ef2ecb025d63e7a99d78dcdf9da25f2c341767e7ad4c3cd443d65f484 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:37:35 np0005603435 podman[241342]: 2026-01-31 04:37:35.444248212 +0000 UTC m=+0.208468019 container start 4f38b67ef2ecb025d63e7a99d78dcdf9da25f2c341767e7ad4c3cd443d65f484 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:37:35 np0005603435 podman[241342]: 2026-01-31 04:37:35.458487474 +0000 UTC m=+0.222707281 container attach 4f38b67ef2ecb025d63e7a99d78dcdf9da25f2c341767e7ad4c3cd443d65f484 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 30 23:37:35 np0005603435 zen_heisenberg[241359]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:37:35 np0005603435 zen_heisenberg[241359]: --> All data devices are unavailable
Jan 30 23:37:35 np0005603435 systemd[1]: libpod-4f38b67ef2ecb025d63e7a99d78dcdf9da25f2c341767e7ad4c3cd443d65f484.scope: Deactivated successfully.
Jan 30 23:37:35 np0005603435 podman[241342]: 2026-01-31 04:37:35.904044737 +0000 UTC m=+0.668264544 container died 4f38b67ef2ecb025d63e7a99d78dcdf9da25f2c341767e7ad4c3cd443d65f484 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 30 23:37:35 np0005603435 systemd[1]: var-lib-containers-storage-overlay-7e664de5dc7e7200a4d15a4ea9eb614e894ec3ef7650305e8878ccd5102dfb5e-merged.mount: Deactivated successfully.
Jan 30 23:37:36 np0005603435 podman[241342]: 2026-01-31 04:37:36.142976298 +0000 UTC m=+0.907196095 container remove 4f38b67ef2ecb025d63e7a99d78dcdf9da25f2c341767e7ad4c3cd443d65f484 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Jan 30 23:37:36 np0005603435 systemd[1]: libpod-conmon-4f38b67ef2ecb025d63e7a99d78dcdf9da25f2c341767e7ad4c3cd443d65f484.scope: Deactivated successfully.
Jan 30 23:37:36 np0005603435 podman[241456]: 2026-01-31 04:37:36.561441621 +0000 UTC m=+0.025139283 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:37:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:37:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:37:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:37:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:37:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:37:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:37:36 np0005603435 podman[241456]: 2026-01-31 04:37:36.967718613 +0000 UTC m=+0.431416305 container create 085a4cfc5bb091cdafde30f0a53e3d14bdc5281bd1378f4a1ca202e7e7cbd97f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 30 23:37:37 np0005603435 systemd[1]: Started libpod-conmon-085a4cfc5bb091cdafde30f0a53e3d14bdc5281bd1378f4a1ca202e7e7cbd97f.scope.
Jan 30 23:37:37 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:37:37 np0005603435 podman[241456]: 2026-01-31 04:37:37.156211057 +0000 UTC m=+0.619908729 container init 085a4cfc5bb091cdafde30f0a53e3d14bdc5281bd1378f4a1ca202e7e7cbd97f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:37:37 np0005603435 podman[241456]: 2026-01-31 04:37:37.165474876 +0000 UTC m=+0.629172548 container start 085a4cfc5bb091cdafde30f0a53e3d14bdc5281bd1378f4a1ca202e7e7cbd97f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 30 23:37:37 np0005603435 admiring_hoover[241472]: 167 167
Jan 30 23:37:37 np0005603435 systemd[1]: libpod-085a4cfc5bb091cdafde30f0a53e3d14bdc5281bd1378f4a1ca202e7e7cbd97f.scope: Deactivated successfully.
Jan 30 23:37:37 np0005603435 podman[241456]: 2026-01-31 04:37:37.218371264 +0000 UTC m=+0.682068936 container attach 085a4cfc5bb091cdafde30f0a53e3d14bdc5281bd1378f4a1ca202e7e7cbd97f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hoover, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:37:37 np0005603435 podman[241456]: 2026-01-31 04:37:37.218906018 +0000 UTC m=+0.682603700 container died 085a4cfc5bb091cdafde30f0a53e3d14bdc5281bd1378f4a1ca202e7e7cbd97f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hoover, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 30 23:37:37 np0005603435 systemd[1]: var-lib-containers-storage-overlay-2a5344536a357f5ea83eb3fa2ea4e056af4129d2347447767a92f1be7fdd1fde-merged.mount: Deactivated successfully.
Jan 30 23:37:37 np0005603435 podman[241456]: 2026-01-31 04:37:37.527426071 +0000 UTC m=+0.991123743 container remove 085a4cfc5bb091cdafde30f0a53e3d14bdc5281bd1378f4a1ca202e7e7cbd97f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hoover, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:37:37 np0005603435 systemd[1]: libpod-conmon-085a4cfc5bb091cdafde30f0a53e3d14bdc5281bd1378f4a1ca202e7e7cbd97f.scope: Deactivated successfully.
Jan 30 23:37:37 np0005603435 podman[241498]: 2026-01-31 04:37:37.682117598 +0000 UTC m=+0.026340783 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:37:37 np0005603435 podman[241498]: 2026-01-31 04:37:37.788329676 +0000 UTC m=+0.132552861 container create f870721bba07742f687e8cfac08e465802ef8c799fea390f1b7e304346119e20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:37:37 np0005603435 systemd[1]: Started libpod-conmon-f870721bba07742f687e8cfac08e465802ef8c799fea390f1b7e304346119e20.scope.
Jan 30 23:37:37 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:37:37 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf97e7df4be9274b57bb9cd6458c7922bbb599b5ea668d5510628761ce4d2379/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:37:37 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf97e7df4be9274b57bb9cd6458c7922bbb599b5ea668d5510628761ce4d2379/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:37:37 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf97e7df4be9274b57bb9cd6458c7922bbb599b5ea668d5510628761ce4d2379/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:37:37 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf97e7df4be9274b57bb9cd6458c7922bbb599b5ea668d5510628761ce4d2379/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:37:37 np0005603435 podman[241498]: 2026-01-31 04:37:37.949974015 +0000 UTC m=+0.294197230 container init f870721bba07742f687e8cfac08e465802ef8c799fea390f1b7e304346119e20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hypatia, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 30 23:37:37 np0005603435 podman[241498]: 2026-01-31 04:37:37.958959678 +0000 UTC m=+0.303182863 container start f870721bba07742f687e8cfac08e465802ef8c799fea390f1b7e304346119e20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:37:37 np0005603435 podman[241498]: 2026-01-31 04:37:37.979830094 +0000 UTC m=+0.324053269 container attach f870721bba07742f687e8cfac08e465802ef8c799fea390f1b7e304346119e20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]: {
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:    "0": [
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:        {
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "devices": [
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "/dev/loop3"
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            ],
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "lv_name": "ceph_lv0",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "lv_size": "21470642176",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "name": "ceph_lv0",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "tags": {
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.cluster_name": "ceph",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.crush_device_class": "",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.encrypted": "0",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.objectstore": "bluestore",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.osd_id": "0",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.type": "block",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.vdo": "0",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.with_tpm": "0"
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            },
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "type": "block",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "vg_name": "ceph_vg0"
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:        }
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:    ],
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:    "1": [
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:        {
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "devices": [
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "/dev/loop4"
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            ],
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "lv_name": "ceph_lv1",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "lv_size": "21470642176",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "name": "ceph_lv1",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "tags": {
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.cluster_name": "ceph",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.crush_device_class": "",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.encrypted": "0",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.objectstore": "bluestore",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.osd_id": "1",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.type": "block",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.vdo": "0",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.with_tpm": "0"
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            },
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "type": "block",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "vg_name": "ceph_vg1"
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:        }
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:    ],
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:    "2": [
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:        {
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "devices": [
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "/dev/loop5"
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            ],
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "lv_name": "ceph_lv2",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "lv_size": "21470642176",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "name": "ceph_lv2",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "tags": {
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.cluster_name": "ceph",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.crush_device_class": "",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.encrypted": "0",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.objectstore": "bluestore",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.osd_id": "2",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.type": "block",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.vdo": "0",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:                "ceph.with_tpm": "0"
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            },
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "type": "block",
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:            "vg_name": "ceph_vg2"
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:        }
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]:    ]
Jan 30 23:37:38 np0005603435 dazzling_hypatia[241515]: }
Jan 30 23:37:38 np0005603435 systemd[1]: libpod-f870721bba07742f687e8cfac08e465802ef8c799fea390f1b7e304346119e20.scope: Deactivated successfully.
Jan 30 23:37:38 np0005603435 podman[241498]: 2026-01-31 04:37:38.252469449 +0000 UTC m=+0.596692634 container died f870721bba07742f687e8cfac08e465802ef8c799fea390f1b7e304346119e20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 30 23:37:38 np0005603435 systemd[1]: var-lib-containers-storage-overlay-cf97e7df4be9274b57bb9cd6458c7922bbb599b5ea668d5510628761ce4d2379-merged.mount: Deactivated successfully.
Jan 30 23:37:38 np0005603435 podman[241498]: 2026-01-31 04:37:38.492391435 +0000 UTC m=+0.836614620 container remove f870721bba07742f687e8cfac08e465802ef8c799fea390f1b7e304346119e20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 30 23:37:38 np0005603435 systemd[1]: libpod-conmon-f870721bba07742f687e8cfac08e465802ef8c799fea390f1b7e304346119e20.scope: Deactivated successfully.
Jan 30 23:37:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:39 np0005603435 podman[241598]: 2026-01-31 04:37:39.053277442 +0000 UTC m=+0.112066074 container create 6d42ee7c392536d95e948c184bb0f40e449b5a2bc9fc3e798792114140525489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 30 23:37:39 np0005603435 podman[241598]: 2026-01-31 04:37:38.964380652 +0000 UTC m=+0.023169354 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:37:39 np0005603435 systemd[1]: Started libpod-conmon-6d42ee7c392536d95e948c184bb0f40e449b5a2bc9fc3e798792114140525489.scope.
Jan 30 23:37:39 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:37:39 np0005603435 podman[241598]: 2026-01-31 04:37:39.266016255 +0000 UTC m=+0.324804937 container init 6d42ee7c392536d95e948c184bb0f40e449b5a2bc9fc3e798792114140525489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_payne, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:37:39 np0005603435 podman[241598]: 2026-01-31 04:37:39.275339966 +0000 UTC m=+0.334128628 container start 6d42ee7c392536d95e948c184bb0f40e449b5a2bc9fc3e798792114140525489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_payne, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 30 23:37:39 np0005603435 nostalgic_payne[241614]: 167 167
Jan 30 23:37:39 np0005603435 systemd[1]: libpod-6d42ee7c392536d95e948c184bb0f40e449b5a2bc9fc3e798792114140525489.scope: Deactivated successfully.
Jan 30 23:37:39 np0005603435 podman[241598]: 2026-01-31 04:37:39.287587899 +0000 UTC m=+0.346376561 container attach 6d42ee7c392536d95e948c184bb0f40e449b5a2bc9fc3e798792114140525489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_payne, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:37:39 np0005603435 podman[241598]: 2026-01-31 04:37:39.288143053 +0000 UTC m=+0.346931715 container died 6d42ee7c392536d95e948c184bb0f40e449b5a2bc9fc3e798792114140525489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 30 23:37:39 np0005603435 systemd[1]: var-lib-containers-storage-overlay-acffdf96d99d633db1141d520d3374c91544e0060a5f6d03622970a732986bc5-merged.mount: Deactivated successfully.
Jan 30 23:37:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:37:39 np0005603435 podman[241598]: 2026-01-31 04:37:39.502093265 +0000 UTC m=+0.560881927 container remove 6d42ee7c392536d95e948c184bb0f40e449b5a2bc9fc3e798792114140525489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 30 23:37:39 np0005603435 systemd[1]: libpod-conmon-6d42ee7c392536d95e948c184bb0f40e449b5a2bc9fc3e798792114140525489.scope: Deactivated successfully.
Jan 30 23:37:39 np0005603435 podman[241640]: 2026-01-31 04:37:39.735518371 +0000 UTC m=+0.097180246 container create eb72b36e50f1f4442ca24131302c23fff80518b77a3f587a41091a25ba93f40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 30 23:37:39 np0005603435 podman[241640]: 2026-01-31 04:37:39.671487556 +0000 UTC m=+0.033149491 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:37:39 np0005603435 systemd[1]: Started libpod-conmon-eb72b36e50f1f4442ca24131302c23fff80518b77a3f587a41091a25ba93f40d.scope.
Jan 30 23:37:39 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:37:39 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/832a468aca45c7b97c385219c01055ded8563b1854aacb66b657ce7a4b92d565/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:37:39 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/832a468aca45c7b97c385219c01055ded8563b1854aacb66b657ce7a4b92d565/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:37:39 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/832a468aca45c7b97c385219c01055ded8563b1854aacb66b657ce7a4b92d565/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:37:39 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/832a468aca45c7b97c385219c01055ded8563b1854aacb66b657ce7a4b92d565/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:37:39 np0005603435 podman[241640]: 2026-01-31 04:37:39.868694495 +0000 UTC m=+0.230356380 container init eb72b36e50f1f4442ca24131302c23fff80518b77a3f587a41091a25ba93f40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_heyrovsky, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 30 23:37:39 np0005603435 podman[241640]: 2026-01-31 04:37:39.878353915 +0000 UTC m=+0.240015790 container start eb72b36e50f1f4442ca24131302c23fff80518b77a3f587a41091a25ba93f40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_heyrovsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:37:39 np0005603435 podman[241640]: 2026-01-31 04:37:39.905310771 +0000 UTC m=+0.266972716 container attach eb72b36e50f1f4442ca24131302c23fff80518b77a3f587a41091a25ba93f40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 30 23:37:40 np0005603435 lvm[241734]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:37:40 np0005603435 lvm[241734]: VG ceph_vg0 finished
Jan 30 23:37:40 np0005603435 lvm[241735]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:37:40 np0005603435 lvm[241735]: VG ceph_vg1 finished
Jan 30 23:37:40 np0005603435 lvm[241737]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:37:40 np0005603435 lvm[241737]: VG ceph_vg2 finished
Jan 30 23:37:40 np0005603435 dreamy_heyrovsky[241656]: {}
Jan 30 23:37:40 np0005603435 systemd[1]: libpod-eb72b36e50f1f4442ca24131302c23fff80518b77a3f587a41091a25ba93f40d.scope: Deactivated successfully.
Jan 30 23:37:40 np0005603435 systemd[1]: libpod-eb72b36e50f1f4442ca24131302c23fff80518b77a3f587a41091a25ba93f40d.scope: Consumed 1.067s CPU time.
Jan 30 23:37:40 np0005603435 podman[241640]: 2026-01-31 04:37:40.654788624 +0000 UTC m=+1.016450549 container died eb72b36e50f1f4442ca24131302c23fff80518b77a3f587a41091a25ba93f40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:37:40 np0005603435 systemd[1]: var-lib-containers-storage-overlay-832a468aca45c7b97c385219c01055ded8563b1854aacb66b657ce7a4b92d565-merged.mount: Deactivated successfully.
Jan 30 23:37:40 np0005603435 podman[241640]: 2026-01-31 04:37:40.843209426 +0000 UTC m=+1.204871301 container remove eb72b36e50f1f4442ca24131302c23fff80518b77a3f587a41091a25ba93f40d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_heyrovsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:37:40 np0005603435 systemd[1]: libpod-conmon-eb72b36e50f1f4442ca24131302c23fff80518b77a3f587a41091a25ba93f40d.scope: Deactivated successfully.
Jan 30 23:37:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:37:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:37:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:37:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:37:42 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:37:42 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:37:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:37:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:48 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:37:48 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 3404 writes, 15K keys, 3404 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3404 writes, 3404 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1290 writes, 5864 keys, 1290 commit groups, 1.0 writes per commit group, ingest: 8.65 MB, 0.01 MB/s#012Interval WAL: 1290 writes, 1290 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    104.1      0.15              0.05         7    0.022       0      0       0.0       0.0#012  L6      1/0    7.17 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.6    165.4    136.6      0.31              0.13         6    0.052     24K   3201       0.0       0.0#012 Sum      1/0    7.17 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6    110.4    125.8      0.46              0.18        13    0.036     24K   3201       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    120.5    121.7      0.29              0.11         8    0.036     17K   2468       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    165.4    136.6      0.31              0.13         6    0.052     24K   3201       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    125.8      0.13              0.05         6    0.021       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.1      0.03              0.00         1    0.027       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.016, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.5 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5573585118d0#2 capacity: 308.00 MB usage: 1.96 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(105,1.73 MB,0.562494%) FilterBlock(14,77.98 KB,0.0247262%) IndexBlock(14,153.17 KB,0.0485656%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 30 23:37:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:37:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:37:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:37:55.903 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:37:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:37:55.904 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:37:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:37:55.904 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:37:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:37:59 np0005603435 podman[241777]: 2026-01-31 04:37:59.185802903 +0000 UTC m=+0.148270810 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 30 23:37:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:38:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:03 np0005603435 podman[241803]: 2026-01-31 04:38:03.103424658 +0000 UTC m=+0.065158973 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.535 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.536 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.562 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.562 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.562 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.575 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.575 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.577 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.577 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.578 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.578 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.919 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.920 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.920 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.921 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:38:03 np0005603435 nova_compute[239938]: 2026-01-31 04:38:03.921 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:38:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:38:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2221363792' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:38:04 np0005603435 nova_compute[239938]: 2026-01-31 04:38:04.440 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:38:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:38:04 np0005603435 nova_compute[239938]: 2026-01-31 04:38:04.632 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:38:04 np0005603435 nova_compute[239938]: 2026-01-31 04:38:04.634 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5123MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:38:04 np0005603435 nova_compute[239938]: 2026-01-31 04:38:04.635 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:38:04 np0005603435 nova_compute[239938]: 2026-01-31 04:38:04.635 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:38:04 np0005603435 nova_compute[239938]: 2026-01-31 04:38:04.701 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:38:04 np0005603435 nova_compute[239938]: 2026-01-31 04:38:04.701 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:38:04 np0005603435 nova_compute[239938]: 2026-01-31 04:38:04.717 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:38:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:38:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/602938337' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:38:05 np0005603435 nova_compute[239938]: 2026-01-31 04:38:05.255 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:38:05 np0005603435 nova_compute[239938]: 2026-01-31 04:38:05.261 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:38:05 np0005603435 nova_compute[239938]: 2026-01-31 04:38:05.282 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:38:05 np0005603435 nova_compute[239938]: 2026-01-31 04:38:05.284 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:38:05 np0005603435 nova_compute[239938]: 2026-01-31 04:38:05.285 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
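The inventory dict logged by `nova.scheduler.client.report` above is what determines the capacity placement can hand out for this node. A minimal sketch (the helper name is hypothetical; the real computation lives in the placement service) of how schedulable capacity follows from `total`, `reserved`, and `allocation_ratio`:

```python
# Illustration only: derive effective schedulable capacity from an
# inventory record shaped like the one nova logs above.
# capacity = (total - reserved) * allocation_ratio
inventory = {
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
}

def schedulable(inv):
    """Effective capacity per resource class after reservation and overcommit."""
    return {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
            for rc, v in inv.items()}

print(schedulable(inventory))
```

With the 4.0 VCPU allocation ratio logged here, the 8 physical cores are offered as 32 schedulable vCPUs, while memory is not overcommitted and disk is slightly undercommitted (0.9).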
Jan 30 23:38:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:38:06
Jan 30 23:38:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:38:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:38:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'images']
Jan 30 23:38:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:38:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:38:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:38:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:38:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:38:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:38:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:38:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
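The recurring `pgmap` debug lines above follow a fixed shape and can be reduced to structured fields when post-processing a capture like this one. A small parser sketch (the regex is mine, not Ceph's):

```python
import re

# Matches ceph-mgr cluster-log lines of the form:
#   pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
PGMAP_RE = re.compile(
    r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

def parse_pgmap(line):
    """Return the pgmap fields as a dict, or None if the line doesn't match."""
    m = PGMAP_RE.search(line)
    return m.groupdict() if m else None

sample = ("pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, "
          "136 MiB used, 60 GiB / 60 GiB avail")
print(parse_pgmap(sample))
```

Tracking the `version` field across lines is a quick way to confirm the mgr is publishing pgmap updates at its usual two-second cadence, as seen throughout this capture.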
Jan 30 23:38:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:38:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:38:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:38:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:38:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:38:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:38:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:38:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:38:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:38:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:38:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:38:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:38:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:38:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
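The raw (pre-quantization) PG targets the autoscaler logs above are consistent with `capacity_ratio * bias * 300`, where 300 plausibly corresponds to `mon_target_pg_per_osd` (default 100) times this cluster's three OSDs; that multiplier is an inference from the logged numbers, not something the log states. A sketch reproducing the arithmetic:

```python
def pg_target(capacity_ratio, bias, target_pg_total=300):
    """Raw PG target before quantization, as printed by the pg_autoscaler.

    target_pg_total=300 is an assumption: mon_target_pg_per_osd (100)
    times the three OSDs in this cluster. The autoscaler then quantizes
    and only acts when the target differs enough from the current pg_num,
    which is why these tiny targets leave pg_num unchanged above.
    """
    return capacity_ratio * bias * target_pg_total

# Ratios and biases taken from the '.mgr' and 'cephfs.cephfs.meta' lines above:
print(pg_target(7.185749983720779e-06, 1.0))
print(pg_target(1.5627851149001422e-06, 4.0))
```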
Jan 30 23:38:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:38:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:38:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:38:30 np0005603435 podman[241866]: 2026-01-31 04:38:30.142866302 +0000 UTC m=+0.106143197 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller)
Jan 30 23:38:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:38:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3277934174' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:38:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:38:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3277934174' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
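The mon audit lines above embed the caller, the entity, and the command as inline JSON, which makes them easy to mine when auditing who issued what against the cluster. A parser sketch (regex and field names are mine):

```python
import json
import re

# Extracts the pieces of a mon audit line such as:
#   from='client.? 192.168.122.10:0/3277934174' entity='client.openstack'
#   cmd={"prefix":"df", "format":"json"} : dispatch
AUDIT_RE = re.compile(r"from='(?P<addr>[^']*)' entity='(?P<entity>[^']*)' "
                      r"cmd=(?P<cmd>\{.*\}) : (?P<result>\w+)")

def parse_audit(line):
    """Return addr, entity, decoded cmd JSON, and result, or None."""
    m = AUDIT_RE.search(line)
    if not m:
        return None
    d = m.groupdict()
    d['cmd'] = json.loads(d['cmd'])
    return d

sample = ("log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3277934174' "
          "entity='client.openstack' cmd={\"prefix\":\"df\", \"format\":\"json\"} : dispatch")
print(parse_audit(sample))
```

Here `entity='client.openstack'` identifies the cephx key used by the nova/cinder services, which matches the `ceph df --id openstack` subprocess invocations seen earlier in the log.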
Jan 30 23:38:34 np0005603435 podman[241893]: 2026-01-31 04:38:34.124211843 +0000 UTC m=+0.089873884 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 30 23:38:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:38:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:38:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:38:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:38:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:38:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:38:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:38:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:38:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:38:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:38:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:38:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:38:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:38:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:38:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:38:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:38:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:38:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:38:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:38:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:38:41 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:38:41.862 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:38:41 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:38:41.863 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:38:41 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:38:41.865 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:38:42 np0005603435 podman[242055]: 2026-01-31 04:38:42.110325268 +0000 UTC m=+0.058153220 container create 94085336608d4e01c218d9202c935ca568f17dd485291bdfbcd8a42faefc0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 30 23:38:42 np0005603435 systemd[1]: Started libpod-conmon-94085336608d4e01c218d9202c935ca568f17dd485291bdfbcd8a42faefc0ecb.scope.
Jan 30 23:38:42 np0005603435 podman[242055]: 2026-01-31 04:38:42.084642832 +0000 UTC m=+0.032470844 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:38:42 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:38:42 np0005603435 podman[242055]: 2026-01-31 04:38:42.196276964 +0000 UTC m=+0.144104936 container init 94085336608d4e01c218d9202c935ca568f17dd485291bdfbcd8a42faefc0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:38:42 np0005603435 podman[242055]: 2026-01-31 04:38:42.204149309 +0000 UTC m=+0.151977261 container start 94085336608d4e01c218d9202c935ca568f17dd485291bdfbcd8a42faefc0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:38:42 np0005603435 podman[242055]: 2026-01-31 04:38:42.209014839 +0000 UTC m=+0.156843071 container attach 94085336608d4e01c218d9202c935ca568f17dd485291bdfbcd8a42faefc0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:38:42 np0005603435 quizzical_bell[242071]: 167 167
Jan 30 23:38:42 np0005603435 systemd[1]: libpod-94085336608d4e01c218d9202c935ca568f17dd485291bdfbcd8a42faefc0ecb.scope: Deactivated successfully.
Jan 30 23:38:42 np0005603435 conmon[242071]: conmon 94085336608d4e01c218 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94085336608d4e01c218d9202c935ca568f17dd485291bdfbcd8a42faefc0ecb.scope/container/memory.events
Jan 30 23:38:42 np0005603435 podman[242055]: 2026-01-31 04:38:42.21187718 +0000 UTC m=+0.159705142 container died 94085336608d4e01c218d9202c935ca568f17dd485291bdfbcd8a42faefc0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 30 23:38:42 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b11860c299036ae3ae4add2d7470c3d7a2fec719036e2c680c48d4700670bb6e-merged.mount: Deactivated successfully.
Jan 30 23:38:42 np0005603435 podman[242055]: 2026-01-31 04:38:42.261215011 +0000 UTC m=+0.209042973 container remove 94085336608d4e01c218d9202c935ca568f17dd485291bdfbcd8a42faefc0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:38:42 np0005603435 systemd[1]: libpod-conmon-94085336608d4e01c218d9202c935ca568f17dd485291bdfbcd8a42faefc0ecb.scope: Deactivated successfully.
Jan 30 23:38:42 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:38:42 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:38:42 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:38:42 np0005603435 podman[242095]: 2026-01-31 04:38:42.430334325 +0000 UTC m=+0.055436583 container create e3137b78fa6ab932fe3076729e783fa438b6fb3e5cb545def664d9aaf98e3d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_nash, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 30 23:38:42 np0005603435 systemd[1]: Started libpod-conmon-e3137b78fa6ab932fe3076729e783fa438b6fb3e5cb545def664d9aaf98e3d4d.scope.
Jan 30 23:38:42 np0005603435 podman[242095]: 2026-01-31 04:38:42.40790906 +0000 UTC m=+0.033011348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:38:42 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:38:42 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1fb4d2b8580e924aa8954661935568b6569bc4d617bbc834c2df2c2ff35e9d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:38:42 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1fb4d2b8580e924aa8954661935568b6569bc4d617bbc834c2df2c2ff35e9d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:38:42 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1fb4d2b8580e924aa8954661935568b6569bc4d617bbc834c2df2c2ff35e9d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:38:42 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1fb4d2b8580e924aa8954661935568b6569bc4d617bbc834c2df2c2ff35e9d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:38:42 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1fb4d2b8580e924aa8954661935568b6569bc4d617bbc834c2df2c2ff35e9d8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:38:42 np0005603435 podman[242095]: 2026-01-31 04:38:42.530870302 +0000 UTC m=+0.155972560 container init e3137b78fa6ab932fe3076729e783fa438b6fb3e5cb545def664d9aaf98e3d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:38:42 np0005603435 podman[242095]: 2026-01-31 04:38:42.544295385 +0000 UTC m=+0.169397663 container start e3137b78fa6ab932fe3076729e783fa438b6fb3e5cb545def664d9aaf98e3d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 30 23:38:42 np0005603435 podman[242095]: 2026-01-31 04:38:42.548336585 +0000 UTC m=+0.173438833 container attach e3137b78fa6ab932fe3076729e783fa438b6fb3e5cb545def664d9aaf98e3d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 30 23:38:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:42 np0005603435 confident_nash[242112]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:38:42 np0005603435 confident_nash[242112]: --> All data devices are unavailable
Jan 30 23:38:42 np0005603435 systemd[1]: libpod-e3137b78fa6ab932fe3076729e783fa438b6fb3e5cb545def664d9aaf98e3d4d.scope: Deactivated successfully.
Jan 30 23:38:42 np0005603435 podman[242095]: 2026-01-31 04:38:42.999199889 +0000 UTC m=+0.624302147 container died e3137b78fa6ab932fe3076729e783fa438b6fb3e5cb545def664d9aaf98e3d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_nash, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 30 23:38:43 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d1fb4d2b8580e924aa8954661935568b6569bc4d617bbc834c2df2c2ff35e9d8-merged.mount: Deactivated successfully.
Jan 30 23:38:43 np0005603435 podman[242095]: 2026-01-31 04:38:43.055599985 +0000 UTC m=+0.680702213 container remove e3137b78fa6ab932fe3076729e783fa438b6fb3e5cb545def664d9aaf98e3d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:38:43 np0005603435 systemd[1]: libpod-conmon-e3137b78fa6ab932fe3076729e783fa438b6fb3e5cb545def664d9aaf98e3d4d.scope: Deactivated successfully.
Jan 30 23:38:43 np0005603435 podman[242204]: 2026-01-31 04:38:43.538641846 +0000 UTC m=+0.053446843 container create 848afc27f3dc01dc8866d78a16ddaea989ed054b7d657234d5e312eda029fb16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:38:43 np0005603435 systemd[1]: Started libpod-conmon-848afc27f3dc01dc8866d78a16ddaea989ed054b7d657234d5e312eda029fb16.scope.
Jan 30 23:38:43 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:38:43 np0005603435 podman[242204]: 2026-01-31 04:38:43.512821897 +0000 UTC m=+0.027626964 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:38:43 np0005603435 podman[242204]: 2026-01-31 04:38:43.615150629 +0000 UTC m=+0.129955686 container init 848afc27f3dc01dc8866d78a16ddaea989ed054b7d657234d5e312eda029fb16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:38:43 np0005603435 podman[242204]: 2026-01-31 04:38:43.623459635 +0000 UTC m=+0.138264642 container start 848afc27f3dc01dc8866d78a16ddaea989ed054b7d657234d5e312eda029fb16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dewdney, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 30 23:38:43 np0005603435 podman[242204]: 2026-01-31 04:38:43.62731774 +0000 UTC m=+0.142122797 container attach 848afc27f3dc01dc8866d78a16ddaea989ed054b7d657234d5e312eda029fb16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:38:43 np0005603435 jolly_dewdney[242220]: 167 167
Jan 30 23:38:43 np0005603435 systemd[1]: libpod-848afc27f3dc01dc8866d78a16ddaea989ed054b7d657234d5e312eda029fb16.scope: Deactivated successfully.
Jan 30 23:38:43 np0005603435 podman[242204]: 2026-01-31 04:38:43.629611167 +0000 UTC m=+0.144416164 container died 848afc27f3dc01dc8866d78a16ddaea989ed054b7d657234d5e312eda029fb16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dewdney, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 30 23:38:43 np0005603435 systemd[1]: var-lib-containers-storage-overlay-7521fe93bc5af33e1d65df5fd96b5afd8fd5c8cadf214b8dc7665f244aa10104-merged.mount: Deactivated successfully.
Jan 30 23:38:43 np0005603435 podman[242204]: 2026-01-31 04:38:43.676661061 +0000 UTC m=+0.191466068 container remove 848afc27f3dc01dc8866d78a16ddaea989ed054b7d657234d5e312eda029fb16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Jan 30 23:38:43 np0005603435 systemd[1]: libpod-conmon-848afc27f3dc01dc8866d78a16ddaea989ed054b7d657234d5e312eda029fb16.scope: Deactivated successfully.
Jan 30 23:38:43 np0005603435 podman[242245]: 2026-01-31 04:38:43.859195807 +0000 UTC m=+0.047998659 container create fd92de4e798aaf661ef1e6e1730a69da0902b1a6182e881bfa0f41886624fe56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:38:43 np0005603435 systemd[1]: Started libpod-conmon-fd92de4e798aaf661ef1e6e1730a69da0902b1a6182e881bfa0f41886624fe56.scope.
Jan 30 23:38:43 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:38:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e972608eed7c1388227cafe628f413acff3cabf8b326c89eef1c456ede8ef5e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:38:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e972608eed7c1388227cafe628f413acff3cabf8b326c89eef1c456ede8ef5e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:38:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e972608eed7c1388227cafe628f413acff3cabf8b326c89eef1c456ede8ef5e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:38:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e972608eed7c1388227cafe628f413acff3cabf8b326c89eef1c456ede8ef5e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:38:43 np0005603435 podman[242245]: 2026-01-31 04:38:43.841608732 +0000 UTC m=+0.030411624 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:38:43 np0005603435 podman[242245]: 2026-01-31 04:38:43.945530932 +0000 UTC m=+0.134333784 container init fd92de4e798aaf661ef1e6e1730a69da0902b1a6182e881bfa0f41886624fe56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 30 23:38:43 np0005603435 podman[242245]: 2026-01-31 04:38:43.953833807 +0000 UTC m=+0.142636679 container start fd92de4e798aaf661ef1e6e1730a69da0902b1a6182e881bfa0f41886624fe56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 30 23:38:43 np0005603435 podman[242245]: 2026-01-31 04:38:43.957130089 +0000 UTC m=+0.145932931 container attach fd92de4e798aaf661ef1e6e1730a69da0902b1a6182e881bfa0f41886624fe56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brahmagupta, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]: {
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:    "0": [
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:        {
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "devices": [
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "/dev/loop3"
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            ],
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "lv_name": "ceph_lv0",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "lv_size": "21470642176",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "name": "ceph_lv0",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "tags": {
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.cluster_name": "ceph",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.crush_device_class": "",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.encrypted": "0",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.objectstore": "bluestore",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.osd_id": "0",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.type": "block",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.vdo": "0",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.with_tpm": "0"
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            },
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "type": "block",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "vg_name": "ceph_vg0"
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:        }
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:    ],
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:    "1": [
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:        {
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "devices": [
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "/dev/loop4"
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            ],
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "lv_name": "ceph_lv1",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "lv_size": "21470642176",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "name": "ceph_lv1",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "tags": {
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.cluster_name": "ceph",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.crush_device_class": "",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.encrypted": "0",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.objectstore": "bluestore",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.osd_id": "1",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.type": "block",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.vdo": "0",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.with_tpm": "0"
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            },
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "type": "block",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "vg_name": "ceph_vg1"
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:        }
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:    ],
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:    "2": [
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:        {
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "devices": [
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "/dev/loop5"
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            ],
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "lv_name": "ceph_lv2",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "lv_size": "21470642176",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "name": "ceph_lv2",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "tags": {
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.cluster_name": "ceph",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.crush_device_class": "",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.encrypted": "0",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.objectstore": "bluestore",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.osd_id": "2",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.type": "block",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.vdo": "0",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:                "ceph.with_tpm": "0"
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            },
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "type": "block",
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:            "vg_name": "ceph_vg2"
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:        }
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]:    ]
Jan 30 23:38:44 np0005603435 gracious_brahmagupta[242262]: }
Jan 30 23:38:44 np0005603435 systemd[1]: libpod-fd92de4e798aaf661ef1e6e1730a69da0902b1a6182e881bfa0f41886624fe56.scope: Deactivated successfully.
Jan 30 23:38:44 np0005603435 podman[242245]: 2026-01-31 04:38:44.23929294 +0000 UTC m=+0.428095822 container died fd92de4e798aaf661ef1e6e1730a69da0902b1a6182e881bfa0f41886624fe56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brahmagupta, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 30 23:38:44 np0005603435 systemd[1]: var-lib-containers-storage-overlay-e972608eed7c1388227cafe628f413acff3cabf8b326c89eef1c456ede8ef5e7-merged.mount: Deactivated successfully.
Jan 30 23:38:44 np0005603435 podman[242245]: 2026-01-31 04:38:44.321684879 +0000 UTC m=+0.510487761 container remove fd92de4e798aaf661ef1e6e1730a69da0902b1a6182e881bfa0f41886624fe56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:38:44 np0005603435 systemd[1]: libpod-conmon-fd92de4e798aaf661ef1e6e1730a69da0902b1a6182e881bfa0f41886624fe56.scope: Deactivated successfully.
Jan 30 23:38:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:38:44 np0005603435 podman[242345]: 2026-01-31 04:38:44.734448451 +0000 UTC m=+0.033171122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:38:44 np0005603435 podman[242345]: 2026-01-31 04:38:44.888140743 +0000 UTC m=+0.186863324 container create 4beff938d8e9855b1a71e58f0c2a608b47601f753ccac96e60b1e74b7ea15c9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:38:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:45 np0005603435 systemd[1]: Started libpod-conmon-4beff938d8e9855b1a71e58f0c2a608b47601f753ccac96e60b1e74b7ea15c9b.scope.
Jan 30 23:38:45 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:38:45 np0005603435 podman[242345]: 2026-01-31 04:38:45.292686252 +0000 UTC m=+0.591408913 container init 4beff938d8e9855b1a71e58f0c2a608b47601f753ccac96e60b1e74b7ea15c9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mclaren, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:38:45 np0005603435 podman[242345]: 2026-01-31 04:38:45.298418134 +0000 UTC m=+0.597140715 container start 4beff938d8e9855b1a71e58f0c2a608b47601f753ccac96e60b1e74b7ea15c9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mclaren, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:38:45 np0005603435 systemd[1]: libpod-4beff938d8e9855b1a71e58f0c2a608b47601f753ccac96e60b1e74b7ea15c9b.scope: Deactivated successfully.
Jan 30 23:38:45 np0005603435 busy_mclaren[242362]: 167 167
Jan 30 23:38:45 np0005603435 podman[242345]: 2026-01-31 04:38:45.446347214 +0000 UTC m=+0.745069815 container attach 4beff938d8e9855b1a71e58f0c2a608b47601f753ccac96e60b1e74b7ea15c9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:38:45 np0005603435 podman[242345]: 2026-01-31 04:38:45.44737936 +0000 UTC m=+0.746101941 container died 4beff938d8e9855b1a71e58f0c2a608b47601f753ccac96e60b1e74b7ea15c9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mclaren, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:38:46 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5d4ae36e7626ec3d8375703b188ab98df79947a6b46a0c97ad8a92bc78793c04-merged.mount: Deactivated successfully.
Jan 30 23:38:46 np0005603435 podman[242345]: 2026-01-31 04:38:46.919305027 +0000 UTC m=+2.218027608 container remove 4beff938d8e9855b1a71e58f0c2a608b47601f753ccac96e60b1e74b7ea15c9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mclaren, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:38:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:46 np0005603435 systemd[1]: libpod-conmon-4beff938d8e9855b1a71e58f0c2a608b47601f753ccac96e60b1e74b7ea15c9b.scope: Deactivated successfully.
Jan 30 23:38:47 np0005603435 podman[242386]: 2026-01-31 04:38:47.036712552 +0000 UTC m=+0.031794168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:38:47 np0005603435 podman[242386]: 2026-01-31 04:38:47.206179605 +0000 UTC m=+0.201261251 container create 555d2f1c72f30738cab5dad81299ea0ea4baa001e265213b1bf946d139b57632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cerf, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:38:47 np0005603435 systemd[1]: Started libpod-conmon-555d2f1c72f30738cab5dad81299ea0ea4baa001e265213b1bf946d139b57632.scope.
Jan 30 23:38:47 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:38:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97b533492e205a83fe57810153f676b52d7e2899f66237d10c87d2b11a70315f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:38:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97b533492e205a83fe57810153f676b52d7e2899f66237d10c87d2b11a70315f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:38:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97b533492e205a83fe57810153f676b52d7e2899f66237d10c87d2b11a70315f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:38:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97b533492e205a83fe57810153f676b52d7e2899f66237d10c87d2b11a70315f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:38:47 np0005603435 podman[242386]: 2026-01-31 04:38:47.576782423 +0000 UTC m=+0.571864079 container init 555d2f1c72f30738cab5dad81299ea0ea4baa001e265213b1bf946d139b57632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:38:47 np0005603435 podman[242386]: 2026-01-31 04:38:47.582557886 +0000 UTC m=+0.577639552 container start 555d2f1c72f30738cab5dad81299ea0ea4baa001e265213b1bf946d139b57632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cerf, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:38:47 np0005603435 podman[242386]: 2026-01-31 04:38:47.859214171 +0000 UTC m=+0.854295797 container attach 555d2f1c72f30738cab5dad81299ea0ea4baa001e265213b1bf946d139b57632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 30 23:38:48 np0005603435 lvm[242478]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:38:48 np0005603435 lvm[242478]: VG ceph_vg0 finished
Jan 30 23:38:48 np0005603435 lvm[242481]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:38:48 np0005603435 lvm[242481]: VG ceph_vg1 finished
Jan 30 23:38:48 np0005603435 lvm[242483]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:38:48 np0005603435 lvm[242483]: VG ceph_vg2 finished
Jan 30 23:38:48 np0005603435 tender_cerf[242402]: {}
Jan 30 23:38:48 np0005603435 systemd[1]: libpod-555d2f1c72f30738cab5dad81299ea0ea4baa001e265213b1bf946d139b57632.scope: Deactivated successfully.
Jan 30 23:38:48 np0005603435 podman[242386]: 2026-01-31 04:38:48.277346096 +0000 UTC m=+1.272427762 container died 555d2f1c72f30738cab5dad81299ea0ea4baa001e265213b1bf946d139b57632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cerf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:38:48 np0005603435 systemd[1]: libpod-555d2f1c72f30738cab5dad81299ea0ea4baa001e265213b1bf946d139b57632.scope: Consumed 1.036s CPU time.
Jan 30 23:38:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:49 np0005603435 systemd[1]: var-lib-containers-storage-overlay-97b533492e205a83fe57810153f676b52d7e2899f66237d10c87d2b11a70315f-merged.mount: Deactivated successfully.
Jan 30 23:38:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:38:50 np0005603435 podman[242386]: 2026-01-31 04:38:50.469301937 +0000 UTC m=+3.464383563 container remove 555d2f1c72f30738cab5dad81299ea0ea4baa001e265213b1bf946d139b57632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:38:50 np0005603435 systemd[1]: libpod-conmon-555d2f1c72f30738cab5dad81299ea0ea4baa001e265213b1bf946d139b57632.scope: Deactivated successfully.
Jan 30 23:38:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:38:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:38:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:38:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:38:52 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:38:52 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:38:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:38:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5893 writes, 25K keys, 5893 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5893 writes, 1020 syncs, 5.78 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s#012Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56375c2eb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56375c2eb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Jan 30 23:38:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:38:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:38:55.904 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:38:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:38:55.905 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:38:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:38:55.906 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:38:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:38:59 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:38:59 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.7 total, 600.0 interval#012Cumulative writes: 8442 writes, 34K keys, 8442 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 8442 writes, 1711 syncs, 4.93 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.289       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.289       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.29              0.00         1    0.289       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.7 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b8190cda30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.7 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b8190cda30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.7 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Jan 30 23:38:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:39:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:01 np0005603435 podman[242525]: 2026-01-31 04:39:01.167153122 +0000 UTC m=+0.130031958 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 30 23:39:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.280 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.280 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.280 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.280 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.309 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.309 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.309 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.310 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.310 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.310 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.310 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.347 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.347 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.347 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.347 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.347 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:39:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:39:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:39:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3989646248' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:39:04 np0005603435 nova_compute[239938]: 2026-01-31 04:39:04.917 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:39:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:05 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:39:05 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1201.1 total, 600.0 interval#012Cumulative writes: 5812 writes, 24K keys, 5812 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5812 writes, 954 syncs, 6.09 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.11              0.00         1    0.106       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.11              0.00         1    0.106       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.11              0.00         1    0.106       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5611172278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5611172278d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 30 23:39:05 np0005603435 podman[242574]: 2026-01-31 04:39:05.102938547 +0000 UTC m=+0.070535156 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 30 23:39:05 np0005603435 nova_compute[239938]: 2026-01-31 04:39:05.124 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:39:05 np0005603435 nova_compute[239938]: 2026-01-31 04:39:05.127 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5124MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:39:05 np0005603435 nova_compute[239938]: 2026-01-31 04:39:05.127 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:39:05 np0005603435 nova_compute[239938]: 2026-01-31 04:39:05.128 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:39:05 np0005603435 nova_compute[239938]: 2026-01-31 04:39:05.217 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:39:05 np0005603435 nova_compute[239938]: 2026-01-31 04:39:05.218 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:39:05 np0005603435 nova_compute[239938]: 2026-01-31 04:39:05.235 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:39:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:39:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1991446442' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:39:05 np0005603435 nova_compute[239938]: 2026-01-31 04:39:05.755 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:39:05 np0005603435 nova_compute[239938]: 2026-01-31 04:39:05.761 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:39:05 np0005603435 nova_compute[239938]: 2026-01-31 04:39:05.781 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:39:05 np0005603435 nova_compute[239938]: 2026-01-31 04:39:05.784 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:39:05 np0005603435 nova_compute[239938]: 2026-01-31 04:39:05.785 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:39:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:39:06
Jan 30 23:39:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:39:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:39:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'backups']
Jan 30 23:39:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:39:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:39:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:39:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:39:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:39:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:39:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:39:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:39:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:39:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:39:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:39:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:39:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:39:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:39:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:39:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:39:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:39:07 np0005603435 nova_compute[239938]: 2026-01-31 04:39:07.362 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:39:07 np0005603435 nova_compute[239938]: 2026-01-31 04:39:07.363 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:39:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:39:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:39:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:39:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:39:17 np0005603435 ceph-mgr[75599]: [devicehealth INFO root] Check health
Jan 30 23:39:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:39:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:22 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:39:24 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:26 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:28 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:39:30 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:32 np0005603435 podman[242617]: 2026-01-31 04:39:32.171173363 +0000 UTC m=+0.133118534 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 30 23:39:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:39:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3748221701' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:39:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:39:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3748221701' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:39:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:39:33.403656) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834373403740, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1376, "num_deletes": 251, "total_data_size": 2159038, "memory_usage": 2202880, "flush_reason": "Manual Compaction"}
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834373419038, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2127470, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15033, "largest_seqno": 16408, "table_properties": {"data_size": 2121062, "index_size": 3607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13337, "raw_average_key_size": 19, "raw_value_size": 2108189, "raw_average_value_size": 3104, "num_data_blocks": 165, "num_entries": 679, "num_filter_entries": 679, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769834230, "oldest_key_time": 1769834230, "file_creation_time": 1769834373, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 15460 microseconds, and 8742 cpu microseconds.
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:39:33.419116) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2127470 bytes OK
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:39:33.419147) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:39:33.421073) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:39:33.421100) EVENT_LOG_v1 {"time_micros": 1769834373421091, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:39:33.421347) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2152923, prev total WAL file size 2152923, number of live WAL files 2.
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:39:33.422387) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2077KB)], [35(7345KB)]
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834373422473, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9649675, "oldest_snapshot_seqno": -1}
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4054 keys, 7853438 bytes, temperature: kUnknown
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834373478291, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7853438, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7824211, "index_size": 17981, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 99081, "raw_average_key_size": 24, "raw_value_size": 7748820, "raw_average_value_size": 1911, "num_data_blocks": 760, "num_entries": 4054, "num_filter_entries": 4054, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769834373, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:39:33.478617) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7853438 bytes
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:39:33.480433) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.6 rd, 140.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.2 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(8.2) write-amplify(3.7) OK, records in: 4568, records dropped: 514 output_compression: NoCompression
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:39:33.480465) EVENT_LOG_v1 {"time_micros": 1769834373480452, "job": 16, "event": "compaction_finished", "compaction_time_micros": 55913, "compaction_time_cpu_micros": 22569, "output_level": 6, "num_output_files": 1, "total_output_size": 7853438, "num_input_records": 4568, "num_output_records": 4054, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834373480921, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834373482427, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:39:33.422191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:39:33.482584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:39:33.482596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:39:33.482599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:39:33.482604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:39:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:39:33.482608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:39:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:39:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:36 np0005603435 podman[242643]: 2026-01-31 04:39:36.110598198 +0000 UTC m=+0.070242849 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 30 23:39:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:39:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:39:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:39:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:39:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:39:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:39:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:39:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:39:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:39:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:39:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:39:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:39:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:39:52 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:39:52 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:39:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:39:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:39:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:39:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:39:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:39:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:39:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:39:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:39:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:39:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:39:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:39:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:39:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:53 np0005603435 podman[242876]: 2026-01-31 04:39:53.11179829 +0000 UTC m=+0.062232570 container create 509c5eb90e75ce9be83248bd7c052b58e3099745d2070dc22766e9fb01672e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_herschel, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:39:53 np0005603435 systemd[1]: Started libpod-conmon-509c5eb90e75ce9be83248bd7c052b58e3099745d2070dc22766e9fb01672e8e.scope.
Jan 30 23:39:53 np0005603435 podman[242876]: 2026-01-31 04:39:53.08389205 +0000 UTC m=+0.034326370 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:39:53 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:39:53 np0005603435 podman[242876]: 2026-01-31 04:39:53.213577578 +0000 UTC m=+0.164011868 container init 509c5eb90e75ce9be83248bd7c052b58e3099745d2070dc22766e9fb01672e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_herschel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 30 23:39:53 np0005603435 podman[242876]: 2026-01-31 04:39:53.222785426 +0000 UTC m=+0.173219686 container start 509c5eb90e75ce9be83248bd7c052b58e3099745d2070dc22766e9fb01672e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:39:53 np0005603435 podman[242876]: 2026-01-31 04:39:53.226419646 +0000 UTC m=+0.176853906 container attach 509c5eb90e75ce9be83248bd7c052b58e3099745d2070dc22766e9fb01672e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:39:53 np0005603435 condescending_herschel[242892]: 167 167
Jan 30 23:39:53 np0005603435 systemd[1]: libpod-509c5eb90e75ce9be83248bd7c052b58e3099745d2070dc22766e9fb01672e8e.scope: Deactivated successfully.
Jan 30 23:39:53 np0005603435 podman[242876]: 2026-01-31 04:39:53.230572399 +0000 UTC m=+0.181006719 container died 509c5eb90e75ce9be83248bd7c052b58e3099745d2070dc22766e9fb01672e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_herschel, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:39:53 np0005603435 systemd[1]: var-lib-containers-storage-overlay-f1767c3726c565840d85a01876d79b4f1742f9943f3d97c34119d7edab201084-merged.mount: Deactivated successfully.
Jan 30 23:39:53 np0005603435 podman[242876]: 2026-01-31 04:39:53.281495599 +0000 UTC m=+0.231929879 container remove 509c5eb90e75ce9be83248bd7c052b58e3099745d2070dc22766e9fb01672e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:39:53 np0005603435 systemd[1]: libpod-conmon-509c5eb90e75ce9be83248bd7c052b58e3099745d2070dc22766e9fb01672e8e.scope: Deactivated successfully.
Jan 30 23:39:53 np0005603435 podman[242917]: 2026-01-31 04:39:53.475391476 +0000 UTC m=+0.060294883 container create e967bcb74a08df9e7e74ddf424f9f5a110a4765575c484bee6f7eaecd29b3626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pascal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 30 23:39:53 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:39:53 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:39:53 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:39:53 np0005603435 systemd[1]: Started libpod-conmon-e967bcb74a08df9e7e74ddf424f9f5a110a4765575c484bee6f7eaecd29b3626.scope.
Jan 30 23:39:53 np0005603435 podman[242917]: 2026-01-31 04:39:53.447852815 +0000 UTC m=+0.032756282 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:39:53 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:39:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ab8fd4b322ab1be3cfa224a14c733368aabd41d9a791f9b977a61063a2c4f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:39:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ab8fd4b322ab1be3cfa224a14c733368aabd41d9a791f9b977a61063a2c4f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:39:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ab8fd4b322ab1be3cfa224a14c733368aabd41d9a791f9b977a61063a2c4f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:39:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ab8fd4b322ab1be3cfa224a14c733368aabd41d9a791f9b977a61063a2c4f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:39:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ab8fd4b322ab1be3cfa224a14c733368aabd41d9a791f9b977a61063a2c4f5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:39:53 np0005603435 podman[242917]: 2026-01-31 04:39:53.579591244 +0000 UTC m=+0.164494711 container init e967bcb74a08df9e7e74ddf424f9f5a110a4765575c484bee6f7eaecd29b3626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pascal, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:39:53 np0005603435 podman[242917]: 2026-01-31 04:39:53.59318863 +0000 UTC m=+0.178092037 container start e967bcb74a08df9e7e74ddf424f9f5a110a4765575c484bee6f7eaecd29b3626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pascal, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 30 23:39:53 np0005603435 podman[242917]: 2026-01-31 04:39:53.597260501 +0000 UTC m=+0.182163918 container attach e967bcb74a08df9e7e74ddf424f9f5a110a4765575c484bee6f7eaecd29b3626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:39:54 np0005603435 pensive_pascal[242933]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:39:54 np0005603435 pensive_pascal[242933]: --> All data devices are unavailable
Jan 30 23:39:54 np0005603435 systemd[1]: libpod-e967bcb74a08df9e7e74ddf424f9f5a110a4765575c484bee6f7eaecd29b3626.scope: Deactivated successfully.
Jan 30 23:39:54 np0005603435 podman[242917]: 2026-01-31 04:39:54.061206449 +0000 UTC m=+0.646109856 container died e967bcb74a08df9e7e74ddf424f9f5a110a4765575c484bee6f7eaecd29b3626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pascal, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:39:54 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b0ab8fd4b322ab1be3cfa224a14c733368aabd41d9a791f9b977a61063a2c4f5-merged.mount: Deactivated successfully.
Jan 30 23:39:54 np0005603435 podman[242917]: 2026-01-31 04:39:54.119729617 +0000 UTC m=+0.704633024 container remove e967bcb74a08df9e7e74ddf424f9f5a110a4765575c484bee6f7eaecd29b3626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:39:54 np0005603435 systemd[1]: libpod-conmon-e967bcb74a08df9e7e74ddf424f9f5a110a4765575c484bee6f7eaecd29b3626.scope: Deactivated successfully.
Jan 30 23:39:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:39:54 np0005603435 podman[243025]: 2026-01-31 04:39:54.603195529 +0000 UTC m=+0.052716155 container create 016bb3b24874a85aab4f94d3341493e8253dd5a8a7bef9fac3d6b6289fc56ec7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 30 23:39:54 np0005603435 systemd[1]: Started libpod-conmon-016bb3b24874a85aab4f94d3341493e8253dd5a8a7bef9fac3d6b6289fc56ec7.scope.
Jan 30 23:39:54 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:39:54 np0005603435 podman[243025]: 2026-01-31 04:39:54.58221442 +0000 UTC m=+0.031735036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:39:54 np0005603435 podman[243025]: 2026-01-31 04:39:54.690316384 +0000 UTC m=+0.139837010 container init 016bb3b24874a85aab4f94d3341493e8253dd5a8a7bef9fac3d6b6289fc56ec7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_khorana, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 30 23:39:54 np0005603435 podman[243025]: 2026-01-31 04:39:54.696947778 +0000 UTC m=+0.146468404 container start 016bb3b24874a85aab4f94d3341493e8253dd5a8a7bef9fac3d6b6289fc56ec7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:39:54 np0005603435 podman[243025]: 2026-01-31 04:39:54.700960678 +0000 UTC m=+0.150481374 container attach 016bb3b24874a85aab4f94d3341493e8253dd5a8a7bef9fac3d6b6289fc56ec7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_khorana, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:39:54 np0005603435 tender_khorana[243042]: 167 167
Jan 30 23:39:54 np0005603435 systemd[1]: libpod-016bb3b24874a85aab4f94d3341493e8253dd5a8a7bef9fac3d6b6289fc56ec7.scope: Deactivated successfully.
Jan 30 23:39:54 np0005603435 podman[243025]: 2026-01-31 04:39:54.701815909 +0000 UTC m=+0.151336535 container died 016bb3b24874a85aab4f94d3341493e8253dd5a8a7bef9fac3d6b6289fc56ec7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_khorana, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:39:54 np0005603435 systemd[1]: var-lib-containers-storage-overlay-124bf29b6d8d325d7c38b49c523adbb640ff3e9a6f047d22df43280ce24f2359-merged.mount: Deactivated successfully.
Jan 30 23:39:54 np0005603435 podman[243025]: 2026-01-31 04:39:54.749759615 +0000 UTC m=+0.199280241 container remove 016bb3b24874a85aab4f94d3341493e8253dd5a8a7bef9fac3d6b6289fc56ec7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_khorana, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:39:54 np0005603435 systemd[1]: libpod-conmon-016bb3b24874a85aab4f94d3341493e8253dd5a8a7bef9fac3d6b6289fc56ec7.scope: Deactivated successfully.
Jan 30 23:39:54 np0005603435 podman[243066]: 2026-01-31 04:39:54.92537346 +0000 UTC m=+0.053887934 container create d3fa2bfee563ce105572ccd0f26792f9ad249a6646c40c6881d0e44e00a977d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:39:54 np0005603435 systemd[1]: Started libpod-conmon-d3fa2bfee563ce105572ccd0f26792f9ad249a6646c40c6881d0e44e00a977d8.scope.
Jan 30 23:39:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:54 np0005603435 podman[243066]: 2026-01-31 04:39:54.902547685 +0000 UTC m=+0.031062229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:39:55 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:39:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44169db93f4cd9f2d1d26adde03fef609abc4db8ba0a6d84f8cf3d83485f1ca8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:39:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44169db93f4cd9f2d1d26adde03fef609abc4db8ba0a6d84f8cf3d83485f1ca8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:39:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44169db93f4cd9f2d1d26adde03fef609abc4db8ba0a6d84f8cf3d83485f1ca8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:39:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44169db93f4cd9f2d1d26adde03fef609abc4db8ba0a6d84f8cf3d83485f1ca8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:39:55 np0005603435 podman[243066]: 2026-01-31 04:39:55.027674261 +0000 UTC m=+0.156188785 container init d3fa2bfee563ce105572ccd0f26792f9ad249a6646c40c6881d0e44e00a977d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_rubin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 30 23:39:55 np0005603435 podman[243066]: 2026-01-31 04:39:55.040810276 +0000 UTC m=+0.169324780 container start d3fa2bfee563ce105572ccd0f26792f9ad249a6646c40c6881d0e44e00a977d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_rubin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 30 23:39:55 np0005603435 podman[243066]: 2026-01-31 04:39:55.04664741 +0000 UTC m=+0.175161924 container attach d3fa2bfee563ce105572ccd0f26792f9ad249a6646c40c6881d0e44e00a977d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_rubin, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:39:55 np0005603435 brave_rubin[243082]: {
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:    "0": [
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:        {
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "devices": [
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "/dev/loop3"
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            ],
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "lv_name": "ceph_lv0",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "lv_size": "21470642176",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "name": "ceph_lv0",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "tags": {
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.cluster_name": "ceph",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.crush_device_class": "",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.encrypted": "0",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.objectstore": "bluestore",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.osd_id": "0",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.type": "block",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.vdo": "0",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.with_tpm": "0"
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            },
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "type": "block",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "vg_name": "ceph_vg0"
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:        }
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:    ],
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:    "1": [
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:        {
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "devices": [
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "/dev/loop4"
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            ],
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "lv_name": "ceph_lv1",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "lv_size": "21470642176",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "name": "ceph_lv1",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "tags": {
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.cluster_name": "ceph",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.crush_device_class": "",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.encrypted": "0",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.objectstore": "bluestore",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.osd_id": "1",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.type": "block",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.vdo": "0",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.with_tpm": "0"
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            },
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "type": "block",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "vg_name": "ceph_vg1"
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:        }
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:    ],
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:    "2": [
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:        {
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "devices": [
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "/dev/loop5"
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            ],
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "lv_name": "ceph_lv2",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "lv_size": "21470642176",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "name": "ceph_lv2",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "tags": {
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.cluster_name": "ceph",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.crush_device_class": "",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.encrypted": "0",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.objectstore": "bluestore",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.osd_id": "2",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.type": "block",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.vdo": "0",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:                "ceph.with_tpm": "0"
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            },
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "type": "block",
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:            "vg_name": "ceph_vg2"
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:        }
Jan 30 23:39:55 np0005603435 brave_rubin[243082]:    ]
Jan 30 23:39:55 np0005603435 brave_rubin[243082]: }
Jan 30 23:39:55 np0005603435 systemd[1]: libpod-d3fa2bfee563ce105572ccd0f26792f9ad249a6646c40c6881d0e44e00a977d8.scope: Deactivated successfully.
Jan 30 23:39:55 np0005603435 podman[243066]: 2026-01-31 04:39:55.35220958 +0000 UTC m=+0.480724084 container died d3fa2bfee563ce105572ccd0f26792f9ad249a6646c40c6881d0e44e00a977d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 30 23:39:55 np0005603435 systemd[1]: var-lib-containers-storage-overlay-44169db93f4cd9f2d1d26adde03fef609abc4db8ba0a6d84f8cf3d83485f1ca8-merged.mount: Deactivated successfully.
Jan 30 23:39:55 np0005603435 podman[243066]: 2026-01-31 04:39:55.408690518 +0000 UTC m=+0.537205032 container remove d3fa2bfee563ce105572ccd0f26792f9ad249a6646c40c6881d0e44e00a977d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_rubin, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:39:55 np0005603435 systemd[1]: libpod-conmon-d3fa2bfee563ce105572ccd0f26792f9ad249a6646c40c6881d0e44e00a977d8.scope: Deactivated successfully.
Jan 30 23:39:55 np0005603435 podman[243166]: 2026-01-31 04:39:55.882454448 +0000 UTC m=+0.056487158 container create f428ec7f83c921b2d0fe2a36eed6af28eda946f085eb644646553c7687f11ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_lalande, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 30 23:39:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:39:55.905 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:39:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:39:55.907 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:39:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:39:55.907 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:39:55 np0005603435 systemd[1]: Started libpod-conmon-f428ec7f83c921b2d0fe2a36eed6af28eda946f085eb644646553c7687f11ab8.scope.
Jan 30 23:39:55 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:39:55 np0005603435 podman[243166]: 2026-01-31 04:39:55.857589243 +0000 UTC m=+0.031621963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:39:55 np0005603435 podman[243166]: 2026-01-31 04:39:55.961511464 +0000 UTC m=+0.135544214 container init f428ec7f83c921b2d0fe2a36eed6af28eda946f085eb644646553c7687f11ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:39:55 np0005603435 podman[243166]: 2026-01-31 04:39:55.968192849 +0000 UTC m=+0.142225559 container start f428ec7f83c921b2d0fe2a36eed6af28eda946f085eb644646553c7687f11ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_lalande, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 30 23:39:55 np0005603435 podman[243166]: 2026-01-31 04:39:55.971959483 +0000 UTC m=+0.145992183 container attach f428ec7f83c921b2d0fe2a36eed6af28eda946f085eb644646553c7687f11ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:39:55 np0005603435 elated_lalande[243183]: 167 167
Jan 30 23:39:55 np0005603435 systemd[1]: libpod-f428ec7f83c921b2d0fe2a36eed6af28eda946f085eb644646553c7687f11ab8.scope: Deactivated successfully.
Jan 30 23:39:55 np0005603435 podman[243166]: 2026-01-31 04:39:55.973090551 +0000 UTC m=+0.147123251 container died f428ec7f83c921b2d0fe2a36eed6af28eda946f085eb644646553c7687f11ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_lalande, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:39:56 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b5c30154833b7408cfc3ef03cd3f0feab45d19441574f9accd52a40926a4b7c6-merged.mount: Deactivated successfully.
Jan 30 23:39:56 np0005603435 podman[243166]: 2026-01-31 04:39:56.016830023 +0000 UTC m=+0.190862703 container remove f428ec7f83c921b2d0fe2a36eed6af28eda946f085eb644646553c7687f11ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:39:56 np0005603435 systemd[1]: libpod-conmon-f428ec7f83c921b2d0fe2a36eed6af28eda946f085eb644646553c7687f11ab8.scope: Deactivated successfully.
Jan 30 23:39:56 np0005603435 podman[243209]: 2026-01-31 04:39:56.156306104 +0000 UTC m=+0.048603734 container create 71178d4230f0e0f60e4e523f42e3c887332f9852d86c934b68d3d69d0129a74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_darwin, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:39:56 np0005603435 systemd[1]: Started libpod-conmon-71178d4230f0e0f60e4e523f42e3c887332f9852d86c934b68d3d69d0129a74f.scope.
Jan 30 23:39:56 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:39:56 np0005603435 podman[243209]: 2026-01-31 04:39:56.137102239 +0000 UTC m=+0.029399859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:39:56 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd94af83f5cfffa71c9d6ed3bb377825b7a2dafef62bba72a4821c4ad12adecb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:39:56 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd94af83f5cfffa71c9d6ed3bb377825b7a2dafef62bba72a4821c4ad12adecb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:39:56 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd94af83f5cfffa71c9d6ed3bb377825b7a2dafef62bba72a4821c4ad12adecb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:39:56 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd94af83f5cfffa71c9d6ed3bb377825b7a2dafef62bba72a4821c4ad12adecb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:39:56 np0005603435 podman[243209]: 2026-01-31 04:39:56.264453479 +0000 UTC m=+0.156751179 container init 71178d4230f0e0f60e4e523f42e3c887332f9852d86c934b68d3d69d0129a74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 30 23:39:56 np0005603435 podman[243209]: 2026-01-31 04:39:56.314503558 +0000 UTC m=+0.206801198 container start 71178d4230f0e0f60e4e523f42e3c887332f9852d86c934b68d3d69d0129a74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_darwin, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:39:56 np0005603435 podman[243209]: 2026-01-31 04:39:56.319153003 +0000 UTC m=+0.211450653 container attach 71178d4230f0e0f60e4e523f42e3c887332f9852d86c934b68d3d69d0129a74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_darwin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 30 23:39:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:57 np0005603435 lvm[243306]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:39:57 np0005603435 lvm[243305]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:39:57 np0005603435 lvm[243306]: VG ceph_vg1 finished
Jan 30 23:39:57 np0005603435 lvm[243305]: VG ceph_vg0 finished
Jan 30 23:39:57 np0005603435 lvm[243308]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:39:57 np0005603435 lvm[243308]: VG ceph_vg2 finished
Jan 30 23:39:57 np0005603435 objective_darwin[243227]: {}
Jan 30 23:39:57 np0005603435 systemd[1]: libpod-71178d4230f0e0f60e4e523f42e3c887332f9852d86c934b68d3d69d0129a74f.scope: Deactivated successfully.
Jan 30 23:39:57 np0005603435 systemd[1]: libpod-71178d4230f0e0f60e4e523f42e3c887332f9852d86c934b68d3d69d0129a74f.scope: Consumed 1.210s CPU time.
Jan 30 23:39:57 np0005603435 podman[243209]: 2026-01-31 04:39:57.216299509 +0000 UTC m=+1.108597139 container died 71178d4230f0e0f60e4e523f42e3c887332f9852d86c934b68d3d69d0129a74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_darwin, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:39:57 np0005603435 systemd[1]: var-lib-containers-storage-overlay-fd94af83f5cfffa71c9d6ed3bb377825b7a2dafef62bba72a4821c4ad12adecb-merged.mount: Deactivated successfully.
Jan 30 23:39:57 np0005603435 podman[243209]: 2026-01-31 04:39:57.267872865 +0000 UTC m=+1.160170505 container remove 71178d4230f0e0f60e4e523f42e3c887332f9852d86c934b68d3d69d0129a74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_darwin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 30 23:39:57 np0005603435 systemd[1]: libpod-conmon-71178d4230f0e0f60e4e523f42e3c887332f9852d86c934b68d3d69d0129a74f.scope: Deactivated successfully.
Jan 30 23:39:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:39:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:39:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:39:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:39:58 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:39:58 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:39:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:39:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:40:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:03 np0005603435 podman[243349]: 2026-01-31 04:40:03.17243182 +0000 UTC m=+0.131841233 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 30 23:40:03 np0005603435 nova_compute[239938]: 2026-01-31 04:40:03.883 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:40:03 np0005603435 nova_compute[239938]: 2026-01-31 04:40:03.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:40:03 np0005603435 nova_compute[239938]: 2026-01-31 04:40:03.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:40:03 np0005603435 nova_compute[239938]: 2026-01-31 04:40:03.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:40:03 np0005603435 nova_compute[239938]: 2026-01-31 04:40:03.912 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 30 23:40:03 np0005603435 nova_compute[239938]: 2026-01-31 04:40:03.912 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:40:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:40:04 np0005603435 nova_compute[239938]: 2026-01-31 04:40:04.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:40:04 np0005603435 nova_compute[239938]: 2026-01-31 04:40:04.915 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:40:04 np0005603435 nova_compute[239938]: 2026-01-31 04:40:04.915 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:40:04 np0005603435 nova_compute[239938]: 2026-01-31 04:40:04.949 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:40:04 np0005603435 nova_compute[239938]: 2026-01-31 04:40:04.950 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:40:04 np0005603435 nova_compute[239938]: 2026-01-31 04:40:04.950 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:40:04 np0005603435 nova_compute[239938]: 2026-01-31 04:40:04.951 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:40:04 np0005603435 nova_compute[239938]: 2026-01-31 04:40:04.951 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:40:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:40:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532608833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:40:05 np0005603435 nova_compute[239938]: 2026-01-31 04:40:05.559 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:40:05 np0005603435 nova_compute[239938]: 2026-01-31 04:40:05.772 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:40:05 np0005603435 nova_compute[239938]: 2026-01-31 04:40:05.775 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5078MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:40:05 np0005603435 nova_compute[239938]: 2026-01-31 04:40:05.776 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:40:05 np0005603435 nova_compute[239938]: 2026-01-31 04:40:05.776 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:40:05 np0005603435 nova_compute[239938]: 2026-01-31 04:40:05.851 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:40:05 np0005603435 nova_compute[239938]: 2026-01-31 04:40:05.852 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:40:05 np0005603435 nova_compute[239938]: 2026-01-31 04:40:05.868 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:40:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:40:06
Jan 30 23:40:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:40:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:40:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'vms', 'default.rgw.control', 'default.rgw.log', '.rgw.root']
Jan 30 23:40:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:40:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:40:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1095576488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:40:06 np0005603435 nova_compute[239938]: 2026-01-31 04:40:06.402 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:40:06 np0005603435 nova_compute[239938]: 2026-01-31 04:40:06.409 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 30 23:40:06 np0005603435 nova_compute[239938]: 2026-01-31 04:40:06.431 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 30 23:40:06 np0005603435 nova_compute[239938]: 2026-01-31 04:40:06.435 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 30 23:40:06 np0005603435 nova_compute[239938]: 2026-01-31 04:40:06.435 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:40:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:40:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:40:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:40:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:40:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:40:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:40:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:07 np0005603435 podman[243420]: 2026-01-31 04:40:07.113960197 +0000 UTC m=+0.074778711 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:40:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:40:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:40:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:40:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:40:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:40:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:40:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:40:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:40:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:40:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:40:07 np0005603435 nova_compute[239938]: 2026-01-31 04:40:07.407 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:40:07 np0005603435 nova_compute[239938]: 2026-01-31 04:40:07.408 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:40:07 np0005603435 nova_compute[239938]: 2026-01-31 04:40:07.408 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:40:07 np0005603435 nova_compute[239938]: 2026-01-31 04:40:07.409 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 30 23:40:07 np0005603435 nova_compute[239938]: 2026-01-31 04:40:07.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:40:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Jan 30 23:40:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:40:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Jan 30 23:40:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 30 23:40:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:40:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:40:16 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:40:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 30 23:40:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 30 23:40:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:40:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Jan 30 23:40:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Jan 30 23:40:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:40:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:40:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:40:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1583230163' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:40:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:40:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1583230163' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:40:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:34 np0005603435 podman[243439]: 2026-01-31 04:40:34.134149023 +0000 UTC m=+0.095348470 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 30 23:40:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:40:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:40:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:40:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:40:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:40:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:40:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:40:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:38 np0005603435 podman[243465]: 2026-01-31 04:40:38.150413389 +0000 UTC m=+0.066080246 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 30 23:40:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:40:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:40:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:40:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:40:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:40:55.906 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:40:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:40:55.906 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:40:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:40:55.906 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:40:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:40:58 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:40:58 np0005603435 podman[243629]: 2026-01-31 04:40:58.666117752 +0000 UTC m=+0.059285046 container create 3408d32e2d12df72fcbbb0c734d7706059507278fbdfeb24c64f54c8ebecb388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:40:58 np0005603435 systemd[1]: Started libpod-conmon-3408d32e2d12df72fcbbb0c734d7706059507278fbdfeb24c64f54c8ebecb388.scope.
Jan 30 23:40:58 np0005603435 podman[243629]: 2026-01-31 04:40:58.639005962 +0000 UTC m=+0.032173306 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:40:58 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:40:58 np0005603435 podman[243629]: 2026-01-31 04:40:58.754849364 +0000 UTC m=+0.148016658 container init 3408d32e2d12df72fcbbb0c734d7706059507278fbdfeb24c64f54c8ebecb388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:40:58 np0005603435 podman[243629]: 2026-01-31 04:40:58.762839001 +0000 UTC m=+0.156006285 container start 3408d32e2d12df72fcbbb0c734d7706059507278fbdfeb24c64f54c8ebecb388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:40:58 np0005603435 practical_yonath[243646]: 167 167
Jan 30 23:40:58 np0005603435 systemd[1]: libpod-3408d32e2d12df72fcbbb0c734d7706059507278fbdfeb24c64f54c8ebecb388.scope: Deactivated successfully.
Jan 30 23:40:58 np0005603435 podman[243629]: 2026-01-31 04:40:58.768211904 +0000 UTC m=+0.161379258 container attach 3408d32e2d12df72fcbbb0c734d7706059507278fbdfeb24c64f54c8ebecb388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 30 23:40:58 np0005603435 podman[243629]: 2026-01-31 04:40:58.769497385 +0000 UTC m=+0.162664679 container died 3408d32e2d12df72fcbbb0c734d7706059507278fbdfeb24c64f54c8ebecb388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:40:58 np0005603435 systemd[1]: var-lib-containers-storage-overlay-308ef3e063cdc0df6b1621a06a9679f1c72557a2e416a1694f5925f8350ef579-merged.mount: Deactivated successfully.
Jan 30 23:40:58 np0005603435 podman[243629]: 2026-01-31 04:40:58.816495466 +0000 UTC m=+0.209662750 container remove 3408d32e2d12df72fcbbb0c734d7706059507278fbdfeb24c64f54c8ebecb388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 30 23:40:58 np0005603435 systemd[1]: libpod-conmon-3408d32e2d12df72fcbbb0c734d7706059507278fbdfeb24c64f54c8ebecb388.scope: Deactivated successfully.
Jan 30 23:40:59 np0005603435 podman[243669]: 2026-01-31 04:40:59.004511031 +0000 UTC m=+0.058476285 container create d24fbae3935fd3b177d5b611cdc4b49cce405c83a669a7fd23d7abe0bfa4e51e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:40:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:40:59 np0005603435 systemd[1]: Started libpod-conmon-d24fbae3935fd3b177d5b611cdc4b49cce405c83a669a7fd23d7abe0bfa4e51e.scope.
Jan 30 23:40:59 np0005603435 podman[243669]: 2026-01-31 04:40:58.981317808 +0000 UTC m=+0.035283132 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:40:59 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:40:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b1548b98e95b1e461249507a6baa08c0843c1828bc59114243a7b59832b34c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:40:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b1548b98e95b1e461249507a6baa08c0843c1828bc59114243a7b59832b34c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:40:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b1548b98e95b1e461249507a6baa08c0843c1828bc59114243a7b59832b34c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:40:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b1548b98e95b1e461249507a6baa08c0843c1828bc59114243a7b59832b34c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:40:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b1548b98e95b1e461249507a6baa08c0843c1828bc59114243a7b59832b34c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:40:59 np0005603435 podman[243669]: 2026-01-31 04:40:59.105336002 +0000 UTC m=+0.159301296 container init d24fbae3935fd3b177d5b611cdc4b49cce405c83a669a7fd23d7abe0bfa4e51e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_solomon, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 30 23:40:59 np0005603435 podman[243669]: 2026-01-31 04:40:59.116709633 +0000 UTC m=+0.170674887 container start d24fbae3935fd3b177d5b611cdc4b49cce405c83a669a7fd23d7abe0bfa4e51e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_solomon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:40:59 np0005603435 podman[243669]: 2026-01-31 04:40:59.122964848 +0000 UTC m=+0.176930102 container attach d24fbae3935fd3b177d5b611cdc4b49cce405c83a669a7fd23d7abe0bfa4e51e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_solomon, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:40:59 np0005603435 dreamy_solomon[243686]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:40:59 np0005603435 dreamy_solomon[243686]: --> All data devices are unavailable
Jan 30 23:40:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:40:59 np0005603435 systemd[1]: libpod-d24fbae3935fd3b177d5b611cdc4b49cce405c83a669a7fd23d7abe0bfa4e51e.scope: Deactivated successfully.
Jan 30 23:40:59 np0005603435 podman[243669]: 2026-01-31 04:40:59.618003307 +0000 UTC m=+0.671968571 container died d24fbae3935fd3b177d5b611cdc4b49cce405c83a669a7fd23d7abe0bfa4e51e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 30 23:40:59 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0b1548b98e95b1e461249507a6baa08c0843c1828bc59114243a7b59832b34c1-merged.mount: Deactivated successfully.
Jan 30 23:40:59 np0005603435 podman[243669]: 2026-01-31 04:40:59.666003683 +0000 UTC m=+0.719968917 container remove d24fbae3935fd3b177d5b611cdc4b49cce405c83a669a7fd23d7abe0bfa4e51e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:40:59 np0005603435 systemd[1]: libpod-conmon-d24fbae3935fd3b177d5b611cdc4b49cce405c83a669a7fd23d7abe0bfa4e51e.scope: Deactivated successfully.
Jan 30 23:41:00 np0005603435 podman[243782]: 2026-01-31 04:41:00.154346606 +0000 UTC m=+0.061166792 container create c4564fd14622a3b224ddb46854d2b0f03b9321269e33313abd788b984464d6fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:41:00 np0005603435 systemd[1]: Started libpod-conmon-c4564fd14622a3b224ddb46854d2b0f03b9321269e33313abd788b984464d6fd.scope.
Jan 30 23:41:00 np0005603435 podman[243782]: 2026-01-31 04:41:00.132578319 +0000 UTC m=+0.039398515 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:41:00 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:41:00 np0005603435 podman[243782]: 2026-01-31 04:41:00.245921199 +0000 UTC m=+0.152741415 container init c4564fd14622a3b224ddb46854d2b0f03b9321269e33313abd788b984464d6fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_rosalind, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:41:00 np0005603435 podman[243782]: 2026-01-31 04:41:00.252598054 +0000 UTC m=+0.159418240 container start c4564fd14622a3b224ddb46854d2b0f03b9321269e33313abd788b984464d6fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_rosalind, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:41:00 np0005603435 stoic_rosalind[243799]: 167 167
Jan 30 23:41:00 np0005603435 podman[243782]: 2026-01-31 04:41:00.257390052 +0000 UTC m=+0.164210248 container attach c4564fd14622a3b224ddb46854d2b0f03b9321269e33313abd788b984464d6fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:41:00 np0005603435 systemd[1]: libpod-c4564fd14622a3b224ddb46854d2b0f03b9321269e33313abd788b984464d6fd.scope: Deactivated successfully.
Jan 30 23:41:00 np0005603435 podman[243782]: 2026-01-31 04:41:00.258210872 +0000 UTC m=+0.165031068 container died c4564fd14622a3b224ddb46854d2b0f03b9321269e33313abd788b984464d6fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_rosalind, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:41:00 np0005603435 systemd[1]: var-lib-containers-storage-overlay-e0343912a6218cdfe69f34ac5280cc6310f778a60d996c3876093ec5968e5932-merged.mount: Deactivated successfully.
Jan 30 23:41:00 np0005603435 podman[243782]: 2026-01-31 04:41:00.302052875 +0000 UTC m=+0.208873081 container remove c4564fd14622a3b224ddb46854d2b0f03b9321269e33313abd788b984464d6fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 30 23:41:00 np0005603435 systemd[1]: libpod-conmon-c4564fd14622a3b224ddb46854d2b0f03b9321269e33313abd788b984464d6fd.scope: Deactivated successfully.
Jan 30 23:41:00 np0005603435 podman[243822]: 2026-01-31 04:41:00.464669353 +0000 UTC m=+0.048379306 container create 097245efc99a57cb7aa0590bbe6ee49312b9d6e7ef0a6a59ea1871fbb5d63e54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:41:00 np0005603435 systemd[1]: Started libpod-conmon-097245efc99a57cb7aa0590bbe6ee49312b9d6e7ef0a6a59ea1871fbb5d63e54.scope.
Jan 30 23:41:00 np0005603435 podman[243822]: 2026-01-31 04:41:00.436799054 +0000 UTC m=+0.020509057 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:41:00 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:41:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6077f8c1a06bca54ee2cf855647da824ad11e837e9856779c0431f6eb83a120/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:41:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6077f8c1a06bca54ee2cf855647da824ad11e837e9856779c0431f6eb83a120/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:41:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6077f8c1a06bca54ee2cf855647da824ad11e837e9856779c0431f6eb83a120/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:41:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6077f8c1a06bca54ee2cf855647da824ad11e837e9856779c0431f6eb83a120/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:41:00 np0005603435 podman[243822]: 2026-01-31 04:41:00.576540966 +0000 UTC m=+0.160250959 container init 097245efc99a57cb7aa0590bbe6ee49312b9d6e7ef0a6a59ea1871fbb5d63e54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:41:00 np0005603435 podman[243822]: 2026-01-31 04:41:00.58884653 +0000 UTC m=+0.172556483 container start 097245efc99a57cb7aa0590bbe6ee49312b9d6e7ef0a6a59ea1871fbb5d63e54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wright, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 30 23:41:00 np0005603435 podman[243822]: 2026-01-31 04:41:00.593466445 +0000 UTC m=+0.177176398 container attach 097245efc99a57cb7aa0590bbe6ee49312b9d6e7ef0a6a59ea1871fbb5d63e54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wright, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 30 23:41:00 np0005603435 stoic_wright[243838]: {
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:    "0": [
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:        {
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "devices": [
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "/dev/loop3"
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            ],
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "lv_name": "ceph_lv0",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "lv_size": "21470642176",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "name": "ceph_lv0",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "tags": {
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.cluster_name": "ceph",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.crush_device_class": "",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.encrypted": "0",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.objectstore": "bluestore",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.osd_id": "0",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.type": "block",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.vdo": "0",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.with_tpm": "0"
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            },
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "type": "block",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "vg_name": "ceph_vg0"
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:        }
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:    ],
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:    "1": [
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:        {
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "devices": [
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "/dev/loop4"
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            ],
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "lv_name": "ceph_lv1",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "lv_size": "21470642176",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "name": "ceph_lv1",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "tags": {
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.cluster_name": "ceph",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.crush_device_class": "",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.encrypted": "0",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.objectstore": "bluestore",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.osd_id": "1",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.type": "block",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.vdo": "0",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.with_tpm": "0"
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            },
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "type": "block",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "vg_name": "ceph_vg1"
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:        }
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:    ],
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:    "2": [
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:        {
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "devices": [
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "/dev/loop5"
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            ],
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "lv_name": "ceph_lv2",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "lv_size": "21470642176",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "name": "ceph_lv2",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "tags": {
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.cluster_name": "ceph",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.crush_device_class": "",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.encrypted": "0",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.objectstore": "bluestore",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.osd_id": "2",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.type": "block",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.vdo": "0",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:                "ceph.with_tpm": "0"
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            },
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "type": "block",
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:            "vg_name": "ceph_vg2"
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:        }
Jan 30 23:41:00 np0005603435 stoic_wright[243838]:    ]
Jan 30 23:41:00 np0005603435 stoic_wright[243838]: }
Jan 30 23:41:00 np0005603435 systemd[1]: libpod-097245efc99a57cb7aa0590bbe6ee49312b9d6e7ef0a6a59ea1871fbb5d63e54.scope: Deactivated successfully.
Jan 30 23:41:00 np0005603435 podman[243822]: 2026-01-31 04:41:00.876107397 +0000 UTC m=+0.459817360 container died 097245efc99a57cb7aa0590bbe6ee49312b9d6e7ef0a6a59ea1871fbb5d63e54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 30 23:41:00 np0005603435 systemd[1]: var-lib-containers-storage-overlay-c6077f8c1a06bca54ee2cf855647da824ad11e837e9856779c0431f6eb83a120-merged.mount: Deactivated successfully.
Jan 30 23:41:00 np0005603435 podman[243822]: 2026-01-31 04:41:00.930512221 +0000 UTC m=+0.514222174 container remove 097245efc99a57cb7aa0590bbe6ee49312b9d6e7ef0a6a59ea1871fbb5d63e54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wright, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:41:00 np0005603435 systemd[1]: libpod-conmon-097245efc99a57cb7aa0590bbe6ee49312b9d6e7ef0a6a59ea1871fbb5d63e54.scope: Deactivated successfully.
Jan 30 23:41:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:01 np0005603435 podman[243922]: 2026-01-31 04:41:01.426817592 +0000 UTC m=+0.057732857 container create 1055d57d64c1cc5937d0ab672034f5cf4671be7e1990e6f33cd71247cf0ff307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:41:01 np0005603435 systemd[1]: Started libpod-conmon-1055d57d64c1cc5937d0ab672034f5cf4671be7e1990e6f33cd71247cf0ff307.scope.
Jan 30 23:41:01 np0005603435 podman[243922]: 2026-01-31 04:41:01.40445358 +0000 UTC m=+0.035368885 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:41:01 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:41:01 np0005603435 podman[243922]: 2026-01-31 04:41:01.516153289 +0000 UTC m=+0.147068584 container init 1055d57d64c1cc5937d0ab672034f5cf4671be7e1990e6f33cd71247cf0ff307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_almeida, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 30 23:41:01 np0005603435 podman[243922]: 2026-01-31 04:41:01.522860955 +0000 UTC m=+0.153776210 container start 1055d57d64c1cc5937d0ab672034f5cf4671be7e1990e6f33cd71247cf0ff307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:41:01 np0005603435 podman[243922]: 2026-01-31 04:41:01.526945566 +0000 UTC m=+0.157860821 container attach 1055d57d64c1cc5937d0ab672034f5cf4671be7e1990e6f33cd71247cf0ff307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_almeida, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 30 23:41:01 np0005603435 inspiring_almeida[243938]: 167 167
Jan 30 23:41:01 np0005603435 systemd[1]: libpod-1055d57d64c1cc5937d0ab672034f5cf4671be7e1990e6f33cd71247cf0ff307.scope: Deactivated successfully.
Jan 30 23:41:01 np0005603435 podman[243922]: 2026-01-31 04:41:01.528524995 +0000 UTC m=+0.159440250 container died 1055d57d64c1cc5937d0ab672034f5cf4671be7e1990e6f33cd71247cf0ff307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_almeida, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 30 23:41:01 np0005603435 systemd[1]: var-lib-containers-storage-overlay-e33abe855ba58b3b9b11202e54f93dd0610c2489530e0af875640ad110b5d051-merged.mount: Deactivated successfully.
Jan 30 23:41:01 np0005603435 podman[243922]: 2026-01-31 04:41:01.574204963 +0000 UTC m=+0.205120228 container remove 1055d57d64c1cc5937d0ab672034f5cf4671be7e1990e6f33cd71247cf0ff307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:41:01 np0005603435 systemd[1]: libpod-conmon-1055d57d64c1cc5937d0ab672034f5cf4671be7e1990e6f33cd71247cf0ff307.scope: Deactivated successfully.
Jan 30 23:41:01 np0005603435 podman[243962]: 2026-01-31 04:41:01.75054465 +0000 UTC m=+0.054271912 container create ac17dc8f6a151e44b68d2f7f147bda60eddcee450db773cf35f98eb161cfb6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:41:01 np0005603435 systemd[1]: Started libpod-conmon-ac17dc8f6a151e44b68d2f7f147bda60eddcee450db773cf35f98eb161cfb6c7.scope.
Jan 30 23:41:01 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:41:01 np0005603435 podman[243962]: 2026-01-31 04:41:01.725184483 +0000 UTC m=+0.028911805 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:41:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ac8fcf39d9464a599f35e3506479b2ed1297e61397c7e69c4c6a90c1f7f37d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:41:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ac8fcf39d9464a599f35e3506479b2ed1297e61397c7e69c4c6a90c1f7f37d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:41:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ac8fcf39d9464a599f35e3506479b2ed1297e61397c7e69c4c6a90c1f7f37d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:41:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ac8fcf39d9464a599f35e3506479b2ed1297e61397c7e69c4c6a90c1f7f37d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:41:01 np0005603435 podman[243962]: 2026-01-31 04:41:01.858426995 +0000 UTC m=+0.162154247 container init ac17dc8f6a151e44b68d2f7f147bda60eddcee450db773cf35f98eb161cfb6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:41:01 np0005603435 podman[243962]: 2026-01-31 04:41:01.866138255 +0000 UTC m=+0.169865507 container start ac17dc8f6a151e44b68d2f7f147bda60eddcee450db773cf35f98eb161cfb6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:41:01 np0005603435 podman[243962]: 2026-01-31 04:41:01.870507393 +0000 UTC m=+0.174234655 container attach ac17dc8f6a151e44b68d2f7f147bda60eddcee450db773cf35f98eb161cfb6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:41:01 np0005603435 nova_compute[239938]: 2026-01-31 04:41:01.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:41:01 np0005603435 nova_compute[239938]: 2026-01-31 04:41:01.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 30 23:41:02 np0005603435 nova_compute[239938]: 2026-01-31 04:41:02.022 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 30 23:41:02 np0005603435 nova_compute[239938]: 2026-01-31 04:41:02.024 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:41:02 np0005603435 nova_compute[239938]: 2026-01-31 04:41:02.025 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 30 23:41:02 np0005603435 nova_compute[239938]: 2026-01-31 04:41:02.059 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:41:02 np0005603435 lvm[244058]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:41:02 np0005603435 lvm[244058]: VG ceph_vg1 finished
Jan 30 23:41:02 np0005603435 lvm[244057]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:41:02 np0005603435 lvm[244057]: VG ceph_vg0 finished
Jan 30 23:41:02 np0005603435 lvm[244060]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:41:02 np0005603435 lvm[244060]: VG ceph_vg2 finished
Jan 30 23:41:02 np0005603435 flamboyant_merkle[243979]: {}
Jan 30 23:41:02 np0005603435 systemd[1]: libpod-ac17dc8f6a151e44b68d2f7f147bda60eddcee450db773cf35f98eb161cfb6c7.scope: Deactivated successfully.
Jan 30 23:41:02 np0005603435 systemd[1]: libpod-ac17dc8f6a151e44b68d2f7f147bda60eddcee450db773cf35f98eb161cfb6c7.scope: Consumed 1.152s CPU time.
Jan 30 23:41:02 np0005603435 podman[243962]: 2026-01-31 04:41:02.675173512 +0000 UTC m=+0.978900764 container died ac17dc8f6a151e44b68d2f7f147bda60eddcee450db773cf35f98eb161cfb6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 30 23:41:02 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b7ac8fcf39d9464a599f35e3506479b2ed1297e61397c7e69c4c6a90c1f7f37d-merged.mount: Deactivated successfully.
Jan 30 23:41:02 np0005603435 podman[243962]: 2026-01-31 04:41:02.725759942 +0000 UTC m=+1.029487174 container remove ac17dc8f6a151e44b68d2f7f147bda60eddcee450db773cf35f98eb161cfb6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:41:02 np0005603435 systemd[1]: libpod-conmon-ac17dc8f6a151e44b68d2f7f147bda60eddcee450db773cf35f98eb161cfb6c7.scope: Deactivated successfully.
Jan 30 23:41:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:41:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:41:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:41:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:41:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:03 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:41:03 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:41:04 np0005603435 nova_compute[239938]: 2026-01-31 04:41:04.070 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:41:04 np0005603435 nova_compute[239938]: 2026-01-31 04:41:04.071 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:41:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:41:04 np0005603435 nova_compute[239938]: 2026-01-31 04:41:04.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:41:04 np0005603435 nova_compute[239938]: 2026-01-31 04:41:04.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:41:04 np0005603435 nova_compute[239938]: 2026-01-31 04:41:04.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:41:04 np0005603435 nova_compute[239938]: 2026-01-31 04:41:04.901 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 30 23:41:04 np0005603435 nova_compute[239938]: 2026-01-31 04:41:04.902 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:41:04 np0005603435 nova_compute[239938]: 2026-01-31 04:41:04.922 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:41:04 np0005603435 nova_compute[239938]: 2026-01-31 04:41:04.923 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:41:04 np0005603435 nova_compute[239938]: 2026-01-31 04:41:04.923 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:41:04 np0005603435 nova_compute[239938]: 2026-01-31 04:41:04.924 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:41:04 np0005603435 nova_compute[239938]: 2026-01-31 04:41:04.924 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:41:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:05 np0005603435 podman[244100]: 2026-01-31 04:41:05.191472214 +0000 UTC m=+0.156213390 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 30 23:41:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:41:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2525241867' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:41:05 np0005603435 nova_compute[239938]: 2026-01-31 04:41:05.484 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:41:05 np0005603435 nova_compute[239938]: 2026-01-31 04:41:05.689 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:41:05 np0005603435 nova_compute[239938]: 2026-01-31 04:41:05.691 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5093MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:41:05 np0005603435 nova_compute[239938]: 2026-01-31 04:41:05.691 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:41:05 np0005603435 nova_compute[239938]: 2026-01-31 04:41:05.692 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:41:05 np0005603435 nova_compute[239938]: 2026-01-31 04:41:05.770 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:41:05 np0005603435 nova_compute[239938]: 2026-01-31 04:41:05.771 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:41:05 np0005603435 nova_compute[239938]: 2026-01-31 04:41:05.796 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:41:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:41:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/950267990' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:41:06 np0005603435 nova_compute[239938]: 2026-01-31 04:41:06.345 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:41:06 np0005603435 nova_compute[239938]: 2026-01-31 04:41:06.351 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:41:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:41:06
Jan 30 23:41:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:41:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:41:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['vms', 'backups', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'default.rgw.meta', 'images']
Jan 30 23:41:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:41:06 np0005603435 nova_compute[239938]: 2026-01-31 04:41:06.369 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:41:06 np0005603435 nova_compute[239938]: 2026-01-31 04:41:06.372 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:41:06 np0005603435 nova_compute[239938]: 2026-01-31 04:41:06.372 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:41:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:41:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:41:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:41:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:41:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:41:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:41:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:41:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:41:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:41:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:41:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:41:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:41:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:41:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:41:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:41:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:41:08 np0005603435 nova_compute[239938]: 2026-01-31 04:41:08.358 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:41:08 np0005603435 nova_compute[239938]: 2026-01-31 04:41:08.358 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:41:08 np0005603435 nova_compute[239938]: 2026-01-31 04:41:08.359 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:41:08 np0005603435 nova_compute[239938]: 2026-01-31 04:41:08.359 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:41:08 np0005603435 nova_compute[239938]: 2026-01-31 04:41:08.359 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:41:08 np0005603435 nova_compute[239938]: 2026-01-31 04:41:08.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:41:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:09 np0005603435 podman[244168]: 2026-01-31 04:41:09.094964787 +0000 UTC m=+0.062523596 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 30 23:41:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:41:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:41:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5627851149001422e-06 of space, bias 4.0, pg target 0.0018753421378801707 quantized to 16 (current 16)
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:41:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:41:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:41:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:41:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:41:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2954423298' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:41:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:41:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2954423298' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:41:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:41:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:36 np0005603435 podman[244188]: 2026-01-31 04:41:36.180040001 +0000 UTC m=+0.135487398 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 30 23:41:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:41:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:41:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:41:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:41:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:41:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:41:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:41:40 np0005603435 podman[244214]: 2026-01-31 04:41:40.139685182 +0000 UTC m=+0.104938873 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 30 23:41:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:41:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:41:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:41:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:41:55.906 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:41:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:41:55.907 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:41:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:41:55.908 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:41:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:41:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:42:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:42:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:42:03 np0005603435 podman[244328]: 2026-01-31 04:42:03.510275522 +0000 UTC m=+0.146520611 container exec 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:42:03 np0005603435 podman[244328]: 2026-01-31 04:42:03.59886236 +0000 UTC m=+0.235107369 container exec_died 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 30 23:42:03 np0005603435 nova_compute[239938]: 2026-01-31 04:42:03.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:42:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 30 23:42:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 30 23:42:04 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 30 23:42:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:42:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:42:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:42:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:42:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:42:04 np0005603435 nova_compute[239938]: 2026-01-31 04:42:04.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:42:04 np0005603435 nova_compute[239938]: 2026-01-31 04:42:04.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:42:04 np0005603435 nova_compute[239938]: 2026-01-31 04:42:04.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:42:04 np0005603435 nova_compute[239938]: 2026-01-31 04:42:04.940 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 30 23:42:04 np0005603435 nova_compute[239938]: 2026-01-31 04:42:04.942 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:42:04 np0005603435 nova_compute[239938]: 2026-01-31 04:42:04.976 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:42:04 np0005603435 nova_compute[239938]: 2026-01-31 04:42:04.977 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:42:04 np0005603435 nova_compute[239938]: 2026-01-31 04:42:04.977 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:42:04 np0005603435 nova_compute[239938]: 2026-01-31 04:42:04.978 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:42:04 np0005603435 nova_compute[239938]: 2026-01-31 04:42:04.979 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:42:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3687919970' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:42:05 np0005603435 nova_compute[239938]: 2026-01-31 04:42:05.529 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 30 23:42:05 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 30 23:42:05 np0005603435 nova_compute[239938]: 2026-01-31 04:42:05.728 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:42:05 np0005603435 nova_compute[239938]: 2026-01-31 04:42:05.730 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5118MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:42:05 np0005603435 nova_compute[239938]: 2026-01-31 04:42:05.730 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:42:05 np0005603435 nova_compute[239938]: 2026-01-31 04:42:05.731 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:42:05 np0005603435 podman[244679]: 2026-01-31 04:42:05.737882553 +0000 UTC m=+0.029003688 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:42:05 np0005603435 podman[244679]: 2026-01-31 04:42:05.841337589 +0000 UTC m=+0.132458654 container create 5b24a10887b49792762e59706272e845b8332828addfef6c140ee45d7b755de1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_brahmagupta, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 30 23:42:05 np0005603435 systemd[1]: Started libpod-conmon-5b24a10887b49792762e59706272e845b8332828addfef6c140ee45d7b755de1.scope.
Jan 30 23:42:06 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:42:06 np0005603435 nova_compute[239938]: 2026-01-31 04:42:06.013 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:42:06 np0005603435 nova_compute[239938]: 2026-01-31 04:42:06.014 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:42:06 np0005603435 nova_compute[239938]: 2026-01-31 04:42:06.078 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Refreshing inventories for resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 30 23:42:06 np0005603435 podman[244679]: 2026-01-31 04:42:06.141155655 +0000 UTC m=+0.432276760 container init 5b24a10887b49792762e59706272e845b8332828addfef6c140ee45d7b755de1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 30 23:42:06 np0005603435 podman[244679]: 2026-01-31 04:42:06.149895621 +0000 UTC m=+0.441016656 container start 5b24a10887b49792762e59706272e845b8332828addfef6c140ee45d7b755de1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_brahmagupta, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 30 23:42:06 np0005603435 relaxed_brahmagupta[244696]: 167 167
Jan 30 23:42:06 np0005603435 systemd[1]: libpod-5b24a10887b49792762e59706272e845b8332828addfef6c140ee45d7b755de1.scope: Deactivated successfully.
Jan 30 23:42:06 np0005603435 conmon[244696]: conmon 5b24a10887b49792762e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b24a10887b49792762e59706272e845b8332828addfef6c140ee45d7b755de1.scope/container/memory.events
Jan 30 23:42:06 np0005603435 nova_compute[239938]: 2026-01-31 04:42:06.162 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Updating ProviderTree inventory for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 30 23:42:06 np0005603435 nova_compute[239938]: 2026-01-31 04:42:06.163 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Updating inventory in ProviderTree for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 30 23:42:06 np0005603435 nova_compute[239938]: 2026-01-31 04:42:06.183 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Refreshing aggregate associations for resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 30 23:42:06 np0005603435 nova_compute[239938]: 2026-01-31 04:42:06.224 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Refreshing trait associations for resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc, traits: COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SVM,COMPUTE_NODE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 30 23:42:06 np0005603435 podman[244679]: 2026-01-31 04:42:06.238714636 +0000 UTC m=+0.529835681 container attach 5b24a10887b49792762e59706272e845b8332828addfef6c140ee45d7b755de1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:42:06 np0005603435 podman[244679]: 2026-01-31 04:42:06.239439114 +0000 UTC m=+0.530560169 container died 5b24a10887b49792762e59706272e845b8332828addfef6c140ee45d7b755de1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 30 23:42:06 np0005603435 nova_compute[239938]: 2026-01-31 04:42:06.242 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:42:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:42:06
Jan 30 23:42:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:42:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:42:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'vms', '.mgr', 'backups']
Jan 30 23:42:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:42:06 np0005603435 systemd[1]: var-lib-containers-storage-overlay-fbbfea72faf568a372d7e9cdfa3a13c46eb7e80e8e9d31cf16631d127969c0b9-merged.mount: Deactivated successfully.
Jan 30 23:42:06 np0005603435 podman[244679]: 2026-01-31 04:42:06.692842834 +0000 UTC m=+0.983963889 container remove 5b24a10887b49792762e59706272e845b8332828addfef6c140ee45d7b755de1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_brahmagupta, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 30 23:42:06 np0005603435 systemd[1]: libpod-conmon-5b24a10887b49792762e59706272e845b8332828addfef6c140ee45d7b755de1.scope: Deactivated successfully.
Jan 30 23:42:06 np0005603435 podman[244732]: 2026-01-31 04:42:06.833282474 +0000 UTC m=+0.432772922 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 30 23:42:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:42:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/156911614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:42:06 np0005603435 nova_compute[239938]: 2026-01-31 04:42:06.886 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:42:06 np0005603435 nova_compute[239938]: 2026-01-31 04:42:06.892 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:42:06 np0005603435 nova_compute[239938]: 2026-01-31 04:42:06.907 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:42:06 np0005603435 nova_compute[239938]: 2026-01-31 04:42:06.909 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:42:06 np0005603435 nova_compute[239938]: 2026-01-31 04:42:06.909 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:42:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:42:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:42:06 np0005603435 podman[244762]: 2026-01-31 04:42:06.939047517 +0000 UTC m=+0.100905724 container create 6f5484d8b1693d040a22324683e43947713947e7334460c2492ec779dd238cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jackson, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 30 23:42:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:42:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:42:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:42:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:42:06 np0005603435 podman[244762]: 2026-01-31 04:42:06.874778979 +0000 UTC m=+0.036637206 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:42:07 np0005603435 systemd[1]: Started libpod-conmon-6f5484d8b1693d040a22324683e43947713947e7334460c2492ec779dd238cc8.scope.
Jan 30 23:42:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:42:07 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:42:07 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29cda8470028103f003ecdf0d2d933758357a570a58146ffa45ab0365847dc4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:42:07 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29cda8470028103f003ecdf0d2d933758357a570a58146ffa45ab0365847dc4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:42:07 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29cda8470028103f003ecdf0d2d933758357a570a58146ffa45ab0365847dc4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:42:07 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29cda8470028103f003ecdf0d2d933758357a570a58146ffa45ab0365847dc4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:42:07 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29cda8470028103f003ecdf0d2d933758357a570a58146ffa45ab0365847dc4a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:42:07 np0005603435 podman[244762]: 2026-01-31 04:42:07.108279138 +0000 UTC m=+0.270137405 container init 6f5484d8b1693d040a22324683e43947713947e7334460c2492ec779dd238cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jackson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 30 23:42:07 np0005603435 podman[244762]: 2026-01-31 04:42:07.116422269 +0000 UTC m=+0.278280496 container start 6f5484d8b1693d040a22324683e43947713947e7334460c2492ec779dd238cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:42:07 np0005603435 podman[244762]: 2026-01-31 04:42:07.120260594 +0000 UTC m=+0.282118901 container attach 6f5484d8b1693d040a22324683e43947713947e7334460c2492ec779dd238cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:42:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:42:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:42:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:42:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:42:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:42:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:42:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:42:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:42:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:42:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:42:07 np0005603435 romantic_jackson[244782]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:42:07 np0005603435 romantic_jackson[244782]: --> All data devices are unavailable
Jan 30 23:42:07 np0005603435 systemd[1]: libpod-6f5484d8b1693d040a22324683e43947713947e7334460c2492ec779dd238cc8.scope: Deactivated successfully.
Jan 30 23:42:07 np0005603435 podman[244762]: 2026-01-31 04:42:07.627324361 +0000 UTC m=+0.789182628 container died 6f5484d8b1693d040a22324683e43947713947e7334460c2492ec779dd238cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jackson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:42:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 30 23:42:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 30 23:42:07 np0005603435 systemd[1]: var-lib-containers-storage-overlay-29cda8470028103f003ecdf0d2d933758357a570a58146ffa45ab0365847dc4a-merged.mount: Deactivated successfully.
Jan 30 23:42:07 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 30 23:42:07 np0005603435 podman[244762]: 2026-01-31 04:42:07.688327368 +0000 UTC m=+0.850185595 container remove 6f5484d8b1693d040a22324683e43947713947e7334460c2492ec779dd238cc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:42:07 np0005603435 systemd[1]: libpod-conmon-6f5484d8b1693d040a22324683e43947713947e7334460c2492ec779dd238cc8.scope: Deactivated successfully.
Jan 30 23:42:07 np0005603435 nova_compute[239938]: 2026-01-31 04:42:07.904 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:42:07 np0005603435 nova_compute[239938]: 2026-01-31 04:42:07.905 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:42:07 np0005603435 nova_compute[239938]: 2026-01-31 04:42:07.945 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:42:08 np0005603435 podman[244876]: 2026-01-31 04:42:08.194949252 +0000 UTC m=+0.105334983 container create e9dbaa36b52c4005568c28e89e6f74bb3ecd07f81cbf49fe158355afc361558c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poitras, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:42:08 np0005603435 podman[244876]: 2026-01-31 04:42:08.128509851 +0000 UTC m=+0.038895642 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:42:08 np0005603435 systemd[1]: Started libpod-conmon-e9dbaa36b52c4005568c28e89e6f74bb3ecd07f81cbf49fe158355afc361558c.scope.
Jan 30 23:42:08 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:42:08 np0005603435 podman[244876]: 2026-01-31 04:42:08.32233841 +0000 UTC m=+0.232724191 container init e9dbaa36b52c4005568c28e89e6f74bb3ecd07f81cbf49fe158355afc361558c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poitras, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:42:08 np0005603435 podman[244876]: 2026-01-31 04:42:08.331353402 +0000 UTC m=+0.241739113 container start e9dbaa36b52c4005568c28e89e6f74bb3ecd07f81cbf49fe158355afc361558c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 30 23:42:08 np0005603435 quirky_poitras[244892]: 167 167
Jan 30 23:42:08 np0005603435 podman[244876]: 2026-01-31 04:42:08.335956446 +0000 UTC m=+0.246342177 container attach e9dbaa36b52c4005568c28e89e6f74bb3ecd07f81cbf49fe158355afc361558c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poitras, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 30 23:42:08 np0005603435 systemd[1]: libpod-e9dbaa36b52c4005568c28e89e6f74bb3ecd07f81cbf49fe158355afc361558c.scope: Deactivated successfully.
Jan 30 23:42:08 np0005603435 podman[244876]: 2026-01-31 04:42:08.33735088 +0000 UTC m=+0.247736611 container died e9dbaa36b52c4005568c28e89e6f74bb3ecd07f81cbf49fe158355afc361558c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 30 23:42:08 np0005603435 systemd[1]: var-lib-containers-storage-overlay-76b663db96dd08ac357161926e4bba1844fb17e2e7f16cd86655e604fc350742-merged.mount: Deactivated successfully.
Jan 30 23:42:08 np0005603435 podman[244876]: 2026-01-31 04:42:08.370558971 +0000 UTC m=+0.280944662 container remove e9dbaa36b52c4005568c28e89e6f74bb3ecd07f81cbf49fe158355afc361558c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poitras, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 30 23:42:08 np0005603435 systemd[1]: libpod-conmon-e9dbaa36b52c4005568c28e89e6f74bb3ecd07f81cbf49fe158355afc361558c.scope: Deactivated successfully.
Jan 30 23:42:08 np0005603435 podman[244917]: 2026-01-31 04:42:08.548016735 +0000 UTC m=+0.061668815 container create af96414c2c8c0c4d805336d95708ce133f7fe6379748df0ef10bb8ad4b2e5a79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_solomon, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:42:08 np0005603435 systemd[1]: Started libpod-conmon-af96414c2c8c0c4d805336d95708ce133f7fe6379748df0ef10bb8ad4b2e5a79.scope.
Jan 30 23:42:08 np0005603435 podman[244917]: 2026-01-31 04:42:08.521953581 +0000 UTC m=+0.035605661 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:42:08 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:42:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9e490f8148569385ce0b5b4177ad6f472c8bd0c9f37c751662a956e904073ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:42:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9e490f8148569385ce0b5b4177ad6f472c8bd0c9f37c751662a956e904073ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:42:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9e490f8148569385ce0b5b4177ad6f472c8bd0c9f37c751662a956e904073ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:42:08 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9e490f8148569385ce0b5b4177ad6f472c8bd0c9f37c751662a956e904073ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:42:08 np0005603435 podman[244917]: 2026-01-31 04:42:08.658305989 +0000 UTC m=+0.171958099 container init af96414c2c8c0c4d805336d95708ce133f7fe6379748df0ef10bb8ad4b2e5a79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 30 23:42:08 np0005603435 podman[244917]: 2026-01-31 04:42:08.672058969 +0000 UTC m=+0.185711049 container start af96414c2c8c0c4d805336d95708ce133f7fe6379748df0ef10bb8ad4b2e5a79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 30 23:42:08 np0005603435 podman[244917]: 2026-01-31 04:42:08.676278043 +0000 UTC m=+0.189930183 container attach af96414c2c8c0c4d805336d95708ce133f7fe6379748df0ef10bb8ad4b2e5a79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_solomon, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:42:08 np0005603435 nova_compute[239938]: 2026-01-31 04:42:08.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:42:08 np0005603435 nova_compute[239938]: 2026-01-31 04:42:08.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:42:08 np0005603435 nova_compute[239938]: 2026-01-31 04:42:08.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:42:08 np0005603435 objective_solomon[244934]: {
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:    "0": [
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:        {
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "devices": [
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "/dev/loop3"
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            ],
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "lv_name": "ceph_lv0",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "lv_size": "21470642176",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "name": "ceph_lv0",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "tags": {
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.cluster_name": "ceph",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.crush_device_class": "",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.encrypted": "0",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.objectstore": "bluestore",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.osd_id": "0",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.type": "block",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.vdo": "0",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.with_tpm": "0"
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            },
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "type": "block",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "vg_name": "ceph_vg0"
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:        }
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:    ],
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:    "1": [
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:        {
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "devices": [
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "/dev/loop4"
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            ],
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "lv_name": "ceph_lv1",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "lv_size": "21470642176",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "name": "ceph_lv1",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "tags": {
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.cluster_name": "ceph",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.crush_device_class": "",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.encrypted": "0",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.objectstore": "bluestore",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.osd_id": "1",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.type": "block",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.vdo": "0",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.with_tpm": "0"
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            },
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "type": "block",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "vg_name": "ceph_vg1"
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:        }
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:    ],
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:    "2": [
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:        {
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "devices": [
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "/dev/loop5"
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            ],
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "lv_name": "ceph_lv2",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "lv_size": "21470642176",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "name": "ceph_lv2",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "tags": {
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.cluster_name": "ceph",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.crush_device_class": "",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.encrypted": "0",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.objectstore": "bluestore",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.osd_id": "2",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.type": "block",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.vdo": "0",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:                "ceph.with_tpm": "0"
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            },
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "type": "block",
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:            "vg_name": "ceph_vg2"
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:        }
Jan 30 23:42:08 np0005603435 objective_solomon[244934]:    ]
Jan 30 23:42:08 np0005603435 objective_solomon[244934]: }
Jan 30 23:42:08 np0005603435 systemd[1]: libpod-af96414c2c8c0c4d805336d95708ce133f7fe6379748df0ef10bb8ad4b2e5a79.scope: Deactivated successfully.
Jan 30 23:42:08 np0005603435 podman[244917]: 2026-01-31 04:42:08.991766947 +0000 UTC m=+0.505419027 container died af96414c2c8c0c4d805336d95708ce133f7fe6379748df0ef10bb8ad4b2e5a79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_solomon, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 30 23:42:09 np0005603435 systemd[1]: var-lib-containers-storage-overlay-a9e490f8148569385ce0b5b4177ad6f472c8bd0c9f37c751662a956e904073ae-merged.mount: Deactivated successfully.
Jan 30 23:42:09 np0005603435 podman[244917]: 2026-01-31 04:42:09.038154073 +0000 UTC m=+0.551806123 container remove af96414c2c8c0c4d805336d95708ce133f7fe6379748df0ef10bb8ad4b2e5a79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 30 23:42:09 np0005603435 systemd[1]: libpod-conmon-af96414c2c8c0c4d805336d95708ce133f7fe6379748df0ef10bb8ad4b2e5a79.scope: Deactivated successfully.
Jan 30 23:42:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Jan 30 23:42:09 np0005603435 podman[245019]: 2026-01-31 04:42:09.526549689 +0000 UTC m=+0.050696734 container create 5e583ba4b29e089f4061d2d5f0ee742bf3b9338c2cc0c9e9e0f952d464644093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_northcutt, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 30 23:42:09 np0005603435 systemd[1]: Started libpod-conmon-5e583ba4b29e089f4061d2d5f0ee742bf3b9338c2cc0c9e9e0f952d464644093.scope.
Jan 30 23:42:09 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:42:09 np0005603435 podman[245019]: 2026-01-31 04:42:09.49987918 +0000 UTC m=+0.024026315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:42:09 np0005603435 podman[245019]: 2026-01-31 04:42:09.60672285 +0000 UTC m=+0.130869995 container init 5e583ba4b29e089f4061d2d5f0ee742bf3b9338c2cc0c9e9e0f952d464644093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_northcutt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 30 23:42:09 np0005603435 podman[245019]: 2026-01-31 04:42:09.613461516 +0000 UTC m=+0.137608611 container start 5e583ba4b29e089f4061d2d5f0ee742bf3b9338c2cc0c9e9e0f952d464644093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:42:09 np0005603435 podman[245019]: 2026-01-31 04:42:09.616770778 +0000 UTC m=+0.140917923 container attach 5e583ba4b29e089f4061d2d5f0ee742bf3b9338c2cc0c9e9e0f952d464644093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 30 23:42:09 np0005603435 gallant_northcutt[245035]: 167 167
Jan 30 23:42:09 np0005603435 systemd[1]: libpod-5e583ba4b29e089f4061d2d5f0ee742bf3b9338c2cc0c9e9e0f952d464644093.scope: Deactivated successfully.
Jan 30 23:42:09 np0005603435 podman[245019]: 2026-01-31 04:42:09.618508101 +0000 UTC m=+0.142655156 container died 5e583ba4b29e089f4061d2d5f0ee742bf3b9338c2cc0c9e9e0f952d464644093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_northcutt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:42:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 30 23:42:09 np0005603435 systemd[1]: var-lib-containers-storage-overlay-99d886f467290858fb5cb789f9fdeec7ad46ec5cb236751b313a5b6a55197dc2-merged.mount: Deactivated successfully.
Jan 30 23:42:09 np0005603435 podman[245019]: 2026-01-31 04:42:09.658734124 +0000 UTC m=+0.182881179 container remove 5e583ba4b29e089f4061d2d5f0ee742bf3b9338c2cc0c9e9e0f952d464644093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 30 23:42:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 30 23:42:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 30 23:42:09 np0005603435 systemd[1]: libpod-conmon-5e583ba4b29e089f4061d2d5f0ee742bf3b9338c2cc0c9e9e0f952d464644093.scope: Deactivated successfully.
Jan 30 23:42:09 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 30 23:42:09 np0005603435 podman[245058]: 2026-01-31 04:42:09.828798756 +0000 UTC m=+0.058418044 container create 52d0f76b0b94b2f4a4ac5bd58ae5b3a9ea845d2654194a8a78b245c90fb92dbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 30 23:42:09 np0005603435 systemd[1]: Started libpod-conmon-52d0f76b0b94b2f4a4ac5bd58ae5b3a9ea845d2654194a8a78b245c90fb92dbf.scope.
Jan 30 23:42:09 np0005603435 nova_compute[239938]: 2026-01-31 04:42:09.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:42:09 np0005603435 nova_compute[239938]: 2026-01-31 04:42:09.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:42:09 np0005603435 podman[245058]: 2026-01-31 04:42:09.8034667 +0000 UTC m=+0.033086038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:42:09 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:42:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/771af3bf9dd6611b493ad0d9654a1799156a2353218f42eb6d3ee622e4b810bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:42:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/771af3bf9dd6611b493ad0d9654a1799156a2353218f42eb6d3ee622e4b810bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:42:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/771af3bf9dd6611b493ad0d9654a1799156a2353218f42eb6d3ee622e4b810bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:42:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/771af3bf9dd6611b493ad0d9654a1799156a2353218f42eb6d3ee622e4b810bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:42:09 np0005603435 podman[245058]: 2026-01-31 04:42:09.940022273 +0000 UTC m=+0.169641611 container init 52d0f76b0b94b2f4a4ac5bd58ae5b3a9ea845d2654194a8a78b245c90fb92dbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:42:09 np0005603435 podman[245058]: 2026-01-31 04:42:09.948956074 +0000 UTC m=+0.178575362 container start 52d0f76b0b94b2f4a4ac5bd58ae5b3a9ea845d2654194a8a78b245c90fb92dbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 30 23:42:09 np0005603435 podman[245058]: 2026-01-31 04:42:09.953348083 +0000 UTC m=+0.182967421 container attach 52d0f76b0b94b2f4a4ac5bd58ae5b3a9ea845d2654194a8a78b245c90fb92dbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chandrasekhar, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 30 23:42:10 np0005603435 lvm[245160]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:42:10 np0005603435 lvm[245159]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:42:10 np0005603435 lvm[245160]: VG ceph_vg1 finished
Jan 30 23:42:10 np0005603435 lvm[245159]: VG ceph_vg0 finished
Jan 30 23:42:10 np0005603435 lvm[245163]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:42:10 np0005603435 lvm[245163]: VG ceph_vg2 finished
Jan 30 23:42:10 np0005603435 podman[245150]: 2026-01-31 04:42:10.69198242 +0000 UTC m=+0.078857839 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:42:10 np0005603435 loving_chandrasekhar[245075]: {}
Jan 30 23:42:10 np0005603435 systemd[1]: libpod-52d0f76b0b94b2f4a4ac5bd58ae5b3a9ea845d2654194a8a78b245c90fb92dbf.scope: Deactivated successfully.
Jan 30 23:42:10 np0005603435 systemd[1]: libpod-52d0f76b0b94b2f4a4ac5bd58ae5b3a9ea845d2654194a8a78b245c90fb92dbf.scope: Consumed 1.153s CPU time.
Jan 30 23:42:10 np0005603435 podman[245058]: 2026-01-31 04:42:10.772748596 +0000 UTC m=+1.002367894 container died 52d0f76b0b94b2f4a4ac5bd58ae5b3a9ea845d2654194a8a78b245c90fb92dbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chandrasekhar, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:42:10 np0005603435 systemd[1]: var-lib-containers-storage-overlay-771af3bf9dd6611b493ad0d9654a1799156a2353218f42eb6d3ee622e4b810bc-merged.mount: Deactivated successfully.
Jan 30 23:42:10 np0005603435 podman[245058]: 2026-01-31 04:42:10.824069303 +0000 UTC m=+1.053688601 container remove 52d0f76b0b94b2f4a4ac5bd58ae5b3a9ea845d2654194a8a78b245c90fb92dbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chandrasekhar, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:42:10 np0005603435 systemd[1]: libpod-conmon-52d0f76b0b94b2f4a4ac5bd58ae5b3a9ea845d2654194a8a78b245c90fb92dbf.scope: Deactivated successfully.
Jan 30 23:42:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:42:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:42:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:42:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:42:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 13 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 25 op/s
Jan 30 23:42:11 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:42:11 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:42:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.5 MiB/s wr, 51 op/s
Jan 30 23:42:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:42:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 30 23:42:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 30 23:42:14 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 30 23:42:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.4 MiB/s wr, 39 op/s
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659223890990774 of space, bias 1.0, pg target 0.19977671672972322 quantized to 32 (current 32)
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4820702491875015e-06 of space, bias 4.0, pg target 0.0017784842990250017 quantized to 16 (current 16)
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:42:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 4.1 MiB/s wr, 36 op/s
Jan 30 23:42:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.5 MiB/s wr, 30 op/s
Jan 30 23:42:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:42:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.8 MiB/s wr, 22 op/s
Jan 30 23:42:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:42:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:42:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:42:25 np0005603435 ceph-osd[85822]: bluestore.MempoolThread fragmentation_score=0.000116 took=0.000018s
Jan 30 23:42:25 np0005603435 ceph-osd[86873]: bluestore.MempoolThread fragmentation_score=0.000147 took=0.000043s
Jan 30 23:42:25 np0005603435 ceph-osd[87920]: bluestore.MempoolThread fragmentation_score=0.000142 took=0.000036s
Jan 30 23:42:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:42:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:42:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:42:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:42:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 30 23:42:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 30 23:42:31 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 30 23:42:31 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:42:31.205 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:42:31 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:42:31.207 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:42:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 30 23:42:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 30 23:42:32 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 30 23:42:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:42:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1789676389' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:42:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:42:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1789676389' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:42:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 767 B/s wr, 1 op/s
Jan 30 23:42:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:42:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 767 B/s wr, 1 op/s
Jan 30 23:42:35 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:42:35.209 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:42:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:42:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:42:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:42:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:42:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:42:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:42:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1023 B/s wr, 13 op/s
Jan 30 23:42:37 np0005603435 podman[245216]: 2026-01-31 04:42:37.141529534 +0000 UTC m=+0.097767396 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 30 23:42:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.4 KiB/s wr, 15 op/s
Jan 30 23:42:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 30 23:42:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 30 23:42:39 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 30 23:42:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:42:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 614 B/s wr, 14 op/s
Jan 30 23:42:41 np0005603435 podman[245243]: 2026-01-31 04:42:41.115149151 +0000 UTC m=+0.086192540 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 30 23:42:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:42:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/745837211' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:42:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:42:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/745837211' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:42:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:42:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3470664690' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:42:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:42:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3470664690' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:42:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.1 KiB/s wr, 22 op/s
Jan 30 23:42:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 30 23:42:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 30 23:42:44 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 30 23:42:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.1 KiB/s wr, 16 op/s
Jan 30 23:42:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:42:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 30 23:42:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 30 23:42:45 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 30 23:42:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.8 KiB/s wr, 43 op/s
Jan 30 23:42:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 3.1 KiB/s wr, 43 op/s
Jan 30 23:42:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:42:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3827289548' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:42:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:42:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3827289548' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:42:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:42:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3132954032' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:42:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:42:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3132954032' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:42:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:42:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.7 KiB/s wr, 35 op/s
Jan 30 23:42:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 30 23:42:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 30 23:42:51 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 30 23:42:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:42:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2617432999' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:42:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:42:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2617432999' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:42:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 6.7 KiB/s wr, 135 op/s
Jan 30 23:42:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:42:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4055031593' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:42:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:42:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4055031593' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:42:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 5.5 KiB/s wr, 112 op/s
Jan 30 23:42:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:42:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 30 23:42:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 30 23:42:55 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 30 23:42:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:42:55.907 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:42:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:42:55.908 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:42:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:42:55.908 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:42:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 5.0 KiB/s wr, 141 op/s
Jan 30 23:42:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 4.7 KiB/s wr, 138 op/s
Jan 30 23:43:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:43:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 30 23:43:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 30 23:43:00 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 30 23:43:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1023 B/s wr, 38 op/s
Jan 30 23:43:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 30 23:43:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 30 23:43:01 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 30 23:43:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 30 23:43:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 30 23:43:02 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 30 23:43:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:43:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3092159503' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:43:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:43:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3092159503' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:43:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 5.2 KiB/s wr, 66 op/s
Jan 30 23:43:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:43:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2570995809' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:43:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:43:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2570995809' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:43:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 30 23:43:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 30 23:43:03 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 30 23:43:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 30 23:43:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 30 23:43:04 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 30 23:43:04 np0005603435 nova_compute[239938]: 2026-01-31 04:43:04.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:43:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 7.0 KiB/s wr, 93 op/s
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:05.625008) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834585625039, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2149, "num_deletes": 253, "total_data_size": 3567443, "memory_usage": 3620560, "flush_reason": "Manual Compaction"}
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834585664036, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3487647, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16409, "largest_seqno": 18557, "table_properties": {"data_size": 3477675, "index_size": 6402, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19829, "raw_average_key_size": 20, "raw_value_size": 3457840, "raw_average_value_size": 3528, "num_data_blocks": 288, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769834374, "oldest_key_time": 1769834374, "file_creation_time": 1769834585, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 39109 microseconds, and 8928 cpu microseconds.
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:05.664107) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3487647 bytes OK
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:05.664139) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:05.667168) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:05.667190) EVENT_LOG_v1 {"time_micros": 1769834585667183, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:05.667242) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3558387, prev total WAL file size 3558387, number of live WAL files 2.
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:05.668774) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3405KB)], [38(7669KB)]
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834585668841, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11341085, "oldest_snapshot_seqno": -1}
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4514 keys, 9538405 bytes, temperature: kUnknown
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834585771931, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9538405, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9504257, "index_size": 21750, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 109281, "raw_average_key_size": 24, "raw_value_size": 9418814, "raw_average_value_size": 2086, "num_data_blocks": 920, "num_entries": 4514, "num_filter_entries": 4514, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769834585, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:05.772622) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9538405 bytes
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:05.775492) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.5 rd, 92.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.5 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5034, records dropped: 520 output_compression: NoCompression
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:05.775510) EVENT_LOG_v1 {"time_micros": 1769834585775500, "job": 18, "event": "compaction_finished", "compaction_time_micros": 103569, "compaction_time_cpu_micros": 29052, "output_level": 6, "num_output_files": 1, "total_output_size": 9538405, "num_input_records": 5034, "num_output_records": 4514, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834585776298, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834585777441, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:05.668604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:05.777485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:05.777493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:05.777496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:05.777499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:43:05 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:05.777502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:43:05 np0005603435 nova_compute[239938]: 2026-01-31 04:43:05.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:43:05 np0005603435 nova_compute[239938]: 2026-01-31 04:43:05.917 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:43:05 np0005603435 nova_compute[239938]: 2026-01-31 04:43:05.918 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:43:05 np0005603435 nova_compute[239938]: 2026-01-31 04:43:05.918 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:43:05 np0005603435 nova_compute[239938]: 2026-01-31 04:43:05.919 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:43:05 np0005603435 nova_compute[239938]: 2026-01-31 04:43:05.919 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:43:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:43:06
Jan 30 23:43:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:43:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:43:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'backups', 'default.rgw.control', '.mgr', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta']
Jan 30 23:43:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:43:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:43:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/557639202' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:43:06 np0005603435 nova_compute[239938]: 2026-01-31 04:43:06.430 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:43:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 30 23:43:06 np0005603435 nova_compute[239938]: 2026-01-31 04:43:06.636 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:43:06 np0005603435 nova_compute[239938]: 2026-01-31 04:43:06.637 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5132MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:43:06 np0005603435 nova_compute[239938]: 2026-01-31 04:43:06.638 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:43:06 np0005603435 nova_compute[239938]: 2026-01-31 04:43:06.638 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:43:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 30 23:43:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 30 23:43:06 np0005603435 nova_compute[239938]: 2026-01-31 04:43:06.706 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 30 23:43:06 np0005603435 nova_compute[239938]: 2026-01-31 04:43:06.706 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 30 23:43:06 np0005603435 nova_compute[239938]: 2026-01-31 04:43:06.735 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:43:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:43:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:43:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:43:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:43:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:43:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:43:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.0 KiB/s wr, 67 op/s
Jan 30 23:43:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:43:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:43:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:43:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:43:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:43:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:43:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:43:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:43:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:43:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:43:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:43:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3207606957' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:43:07 np0005603435 nova_compute[239938]: 2026-01-31 04:43:07.300 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:43:07 np0005603435 nova_compute[239938]: 2026-01-31 04:43:07.307 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 30 23:43:07 np0005603435 nova_compute[239938]: 2026-01-31 04:43:07.324 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 30 23:43:07 np0005603435 nova_compute[239938]: 2026-01-31 04:43:07.327 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 30 23:43:07 np0005603435 nova_compute[239938]: 2026-01-31 04:43:07.327 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:43:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 30 23:43:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 30 23:43:07 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 30 23:43:08 np0005603435 podman[245309]: 2026-01-31 04:43:08.129604053 +0000 UTC m=+0.091430729 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 30 23:43:08 np0005603435 nova_compute[239938]: 2026-01-31 04:43:08.323 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:43:08 np0005603435 nova_compute[239938]: 2026-01-31 04:43:08.323 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:43:08 np0005603435 nova_compute[239938]: 2026-01-31 04:43:08.324 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 30 23:43:08 np0005603435 nova_compute[239938]: 2026-01-31 04:43:08.324 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 30 23:43:08 np0005603435 nova_compute[239938]: 2026-01-31 04:43:08.338 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 30 23:43:08 np0005603435 nova_compute[239938]: 2026-01-31 04:43:08.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:43:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 7.3 KiB/s wr, 174 op/s
Jan 30 23:43:09 np0005603435 nova_compute[239938]: 2026-01-31 04:43:09.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:43:09 np0005603435 nova_compute[239938]: 2026-01-31 04:43:09.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:43:09 np0005603435 nova_compute[239938]: 2026-01-31 04:43:09.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:43:09 np0005603435 nova_compute[239938]: 2026-01-31 04:43:09.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 30 23:43:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 30 23:43:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 30 23:43:10 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 30 23:43:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:43:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3270627127' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:43:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:43:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3270627127' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:43:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:43:10 np0005603435 nova_compute[239938]: 2026-01-31 04:43:10.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:43:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 6.0 KiB/s wr, 165 op/s
Jan 30 23:43:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:43:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:43:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:43:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:43:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:43:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:43:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:43:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:43:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:43:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:43:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:43:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:43:11 np0005603435 podman[245440]: 2026-01-31 04:43:11.761751755 +0000 UTC m=+0.067972589 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 30 23:43:12 np0005603435 podman[245501]: 2026-01-31 04:43:12.062319531 +0000 UTC m=+0.062324284 container create 717b206ba7ad1e9383853ddb10db816adf2c14fe7c1fee2bbf9979badc501a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:43:12 np0005603435 systemd[1]: Started libpod-conmon-717b206ba7ad1e9383853ddb10db816adf2c14fe7c1fee2bbf9979badc501a70.scope.
Jan 30 23:43:12 np0005603435 podman[245501]: 2026-01-31 04:43:12.035987428 +0000 UTC m=+0.035992221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:43:12 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:43:12 np0005603435 podman[245501]: 2026-01-31 04:43:12.159108777 +0000 UTC m=+0.159113540 container init 717b206ba7ad1e9383853ddb10db816adf2c14fe7c1fee2bbf9979badc501a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lumiere, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:43:12 np0005603435 podman[245501]: 2026-01-31 04:43:12.167336417 +0000 UTC m=+0.167341160 container start 717b206ba7ad1e9383853ddb10db816adf2c14fe7c1fee2bbf9979badc501a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:43:12 np0005603435 podman[245501]: 2026-01-31 04:43:12.171466353 +0000 UTC m=+0.171471096 container attach 717b206ba7ad1e9383853ddb10db816adf2c14fe7c1fee2bbf9979badc501a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 30 23:43:12 np0005603435 happy_lumiere[245517]: 167 167
Jan 30 23:43:12 np0005603435 systemd[1]: libpod-717b206ba7ad1e9383853ddb10db816adf2c14fe7c1fee2bbf9979badc501a70.scope: Deactivated successfully.
Jan 30 23:43:12 np0005603435 podman[245501]: 2026-01-31 04:43:12.174408188 +0000 UTC m=+0.174412931 container died 717b206ba7ad1e9383853ddb10db816adf2c14fe7c1fee2bbf9979badc501a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lumiere, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:43:12 np0005603435 systemd[1]: var-lib-containers-storage-overlay-34409bd65e58c76ba7b1993a2b51151be0655205a78058ad6f5cdfc65ebfc02d-merged.mount: Deactivated successfully.
Jan 30 23:43:12 np0005603435 podman[245501]: 2026-01-31 04:43:12.238688132 +0000 UTC m=+0.238692875 container remove 717b206ba7ad1e9383853ddb10db816adf2c14fe7c1fee2bbf9979badc501a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 30 23:43:12 np0005603435 systemd[1]: libpod-conmon-717b206ba7ad1e9383853ddb10db816adf2c14fe7c1fee2bbf9979badc501a70.scope: Deactivated successfully.
Jan 30 23:43:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 30 23:43:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 30 23:43:12 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 30 23:43:12 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:43:12 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:43:12 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:43:12 np0005603435 podman[245544]: 2026-01-31 04:43:12.433716318 +0000 UTC m=+0.059809220 container create 637932b4df52e4547dfc0b025897b6c2a08c36ddb303979fc49adbfc365cfde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_volhard, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 30 23:43:12 np0005603435 systemd[1]: Started libpod-conmon-637932b4df52e4547dfc0b025897b6c2a08c36ddb303979fc49adbfc365cfde7.scope.
Jan 30 23:43:12 np0005603435 podman[245544]: 2026-01-31 04:43:12.405472876 +0000 UTC m=+0.031565838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:43:12 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:43:12 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab49daae52b2fa86f52fd087ab65d6df1ef19f672f88fa276bd82f14ee777965/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:43:12 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab49daae52b2fa86f52fd087ab65d6df1ef19f672f88fa276bd82f14ee777965/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:43:12 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab49daae52b2fa86f52fd087ab65d6df1ef19f672f88fa276bd82f14ee777965/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:43:12 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab49daae52b2fa86f52fd087ab65d6df1ef19f672f88fa276bd82f14ee777965/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:43:12 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab49daae52b2fa86f52fd087ab65d6df1ef19f672f88fa276bd82f14ee777965/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:43:12 np0005603435 podman[245544]: 2026-01-31 04:43:12.534071664 +0000 UTC m=+0.160164576 container init 637932b4df52e4547dfc0b025897b6c2a08c36ddb303979fc49adbfc365cfde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 30 23:43:12 np0005603435 podman[245544]: 2026-01-31 04:43:12.542034668 +0000 UTC m=+0.168127580 container start 637932b4df52e4547dfc0b025897b6c2a08c36ddb303979fc49adbfc365cfde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_volhard, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:43:12 np0005603435 podman[245544]: 2026-01-31 04:43:12.551065979 +0000 UTC m=+0.177158891 container attach 637932b4df52e4547dfc0b025897b6c2a08c36ddb303979fc49adbfc365cfde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_volhard, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Jan 30 23:43:12 np0005603435 great_volhard[245561]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:43:12 np0005603435 great_volhard[245561]: --> All data devices are unavailable
Jan 30 23:43:12 np0005603435 systemd[1]: libpod-637932b4df52e4547dfc0b025897b6c2a08c36ddb303979fc49adbfc365cfde7.scope: Deactivated successfully.
Jan 30 23:43:13 np0005603435 conmon[245561]: conmon 637932b4df52e4547dfc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-637932b4df52e4547dfc0b025897b6c2a08c36ddb303979fc49adbfc365cfde7.scope/container/memory.events
Jan 30 23:43:13 np0005603435 podman[245544]: 2026-01-31 04:43:13.000942224 +0000 UTC m=+0.627035126 container died 637932b4df52e4547dfc0b025897b6c2a08c36ddb303979fc49adbfc365cfde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_volhard, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:43:13 np0005603435 systemd[1]: var-lib-containers-storage-overlay-ab49daae52b2fa86f52fd087ab65d6df1ef19f672f88fa276bd82f14ee777965-merged.mount: Deactivated successfully.
Jan 30 23:43:13 np0005603435 podman[245544]: 2026-01-31 04:43:13.053094417 +0000 UTC m=+0.679187329 container remove 637932b4df52e4547dfc0b025897b6c2a08c36ddb303979fc49adbfc365cfde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_volhard, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 30 23:43:13 np0005603435 systemd[1]: libpod-conmon-637932b4df52e4547dfc0b025897b6c2a08c36ddb303979fc49adbfc365cfde7.scope: Deactivated successfully.
Jan 30 23:43:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 147 KiB/s rd, 7.3 KiB/s wr, 193 op/s
Jan 30 23:43:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 30 23:43:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 30 23:43:13 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 30 23:43:13 np0005603435 podman[245657]: 2026-01-31 04:43:13.565552343 +0000 UTC m=+0.110159199 container create 4405a878694db39390571fb075accffe58e914adabf0ef4cd1b9b34a87ffddd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nash, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:43:13 np0005603435 podman[245657]: 2026-01-31 04:43:13.486184403 +0000 UTC m=+0.030791309 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:43:13 np0005603435 systemd[1]: Started libpod-conmon-4405a878694db39390571fb075accffe58e914adabf0ef4cd1b9b34a87ffddd8.scope.
Jan 30 23:43:13 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:43:13 np0005603435 podman[245657]: 2026-01-31 04:43:13.66319167 +0000 UTC m=+0.207798606 container init 4405a878694db39390571fb075accffe58e914adabf0ef4cd1b9b34a87ffddd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nash, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:43:13 np0005603435 podman[245657]: 2026-01-31 04:43:13.668934457 +0000 UTC m=+0.213541303 container start 4405a878694db39390571fb075accffe58e914adabf0ef4cd1b9b34a87ffddd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nash, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:43:13 np0005603435 elastic_nash[245674]: 167 167
Jan 30 23:43:13 np0005603435 systemd[1]: libpod-4405a878694db39390571fb075accffe58e914adabf0ef4cd1b9b34a87ffddd8.scope: Deactivated successfully.
Jan 30 23:43:13 np0005603435 podman[245657]: 2026-01-31 04:43:13.68901865 +0000 UTC m=+0.233625496 container attach 4405a878694db39390571fb075accffe58e914adabf0ef4cd1b9b34a87ffddd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 30 23:43:13 np0005603435 podman[245657]: 2026-01-31 04:43:13.689488672 +0000 UTC m=+0.234095508 container died 4405a878694db39390571fb075accffe58e914adabf0ef4cd1b9b34a87ffddd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nash, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:43:13 np0005603435 systemd[1]: var-lib-containers-storage-overlay-99e1f1e16e78d218599a3bd775f0896daec04a10e3bb81bdc49f715c04c2e720-merged.mount: Deactivated successfully.
Jan 30 23:43:13 np0005603435 podman[245657]: 2026-01-31 04:43:13.758091967 +0000 UTC m=+0.302698813 container remove 4405a878694db39390571fb075accffe58e914adabf0ef4cd1b9b34a87ffddd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:43:13 np0005603435 systemd[1]: libpod-conmon-4405a878694db39390571fb075accffe58e914adabf0ef4cd1b9b34a87ffddd8.scope: Deactivated successfully.
Jan 30 23:43:13 np0005603435 podman[245698]: 2026-01-31 04:43:13.920956651 +0000 UTC m=+0.049050295 container create 039acc7c94720e75532fff87d9d7a8ca19ce72e2662b68e3a56b7d645c25d9cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_goldstine, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 30 23:43:13 np0005603435 systemd[1]: Started libpod-conmon-039acc7c94720e75532fff87d9d7a8ca19ce72e2662b68e3a56b7d645c25d9cd.scope.
Jan 30 23:43:13 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:43:13 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4157af145258483b43323a380b2f1ae0b0a6243482ce6d1d2d94d311e644e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:43:13 np0005603435 podman[245698]: 2026-01-31 04:43:13.895041429 +0000 UTC m=+0.023135123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:43:13 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4157af145258483b43323a380b2f1ae0b0a6243482ce6d1d2d94d311e644e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:43:13 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4157af145258483b43323a380b2f1ae0b0a6243482ce6d1d2d94d311e644e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:43:13 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4157af145258483b43323a380b2f1ae0b0a6243482ce6d1d2d94d311e644e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:43:14 np0005603435 podman[245698]: 2026-01-31 04:43:14.020284221 +0000 UTC m=+0.148377925 container init 039acc7c94720e75532fff87d9d7a8ca19ce72e2662b68e3a56b7d645c25d9cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:43:14 np0005603435 podman[245698]: 2026-01-31 04:43:14.029131558 +0000 UTC m=+0.157225192 container start 039acc7c94720e75532fff87d9d7a8ca19ce72e2662b68e3a56b7d645c25d9cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 30 23:43:14 np0005603435 podman[245698]: 2026-01-31 04:43:14.032574736 +0000 UTC m=+0.160668380 container attach 039acc7c94720e75532fff87d9d7a8ca19ce72e2662b68e3a56b7d645c25d9cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_goldstine, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]: {
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:    "0": [
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:        {
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "devices": [
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "/dev/loop3"
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            ],
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "lv_name": "ceph_lv0",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "lv_size": "21470642176",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "name": "ceph_lv0",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "tags": {
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.cluster_name": "ceph",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.crush_device_class": "",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.encrypted": "0",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.objectstore": "bluestore",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.osd_id": "0",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.type": "block",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.vdo": "0",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.with_tpm": "0"
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            },
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "type": "block",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "vg_name": "ceph_vg0"
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:        }
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:    ],
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:    "1": [
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:        {
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "devices": [
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "/dev/loop4"
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            ],
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "lv_name": "ceph_lv1",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "lv_size": "21470642176",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "name": "ceph_lv1",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "tags": {
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.cluster_name": "ceph",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.crush_device_class": "",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.encrypted": "0",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.objectstore": "bluestore",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.osd_id": "1",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.type": "block",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.vdo": "0",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.with_tpm": "0"
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            },
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "type": "block",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "vg_name": "ceph_vg1"
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:        }
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:    ],
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:    "2": [
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:        {
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "devices": [
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "/dev/loop5"
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            ],
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "lv_name": "ceph_lv2",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "lv_size": "21470642176",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "name": "ceph_lv2",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "tags": {
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.cluster_name": "ceph",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.crush_device_class": "",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.encrypted": "0",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.objectstore": "bluestore",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.osd_id": "2",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.type": "block",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.vdo": "0",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:                "ceph.with_tpm": "0"
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            },
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "type": "block",
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:            "vg_name": "ceph_vg2"
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:        }
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]:    ]
Jan 30 23:43:14 np0005603435 amazing_goldstine[245715]: }
Jan 30 23:43:14 np0005603435 systemd[1]: libpod-039acc7c94720e75532fff87d9d7a8ca19ce72e2662b68e3a56b7d645c25d9cd.scope: Deactivated successfully.
Jan 30 23:43:14 np0005603435 podman[245698]: 2026-01-31 04:43:14.333503902 +0000 UTC m=+0.461597546 container died 039acc7c94720e75532fff87d9d7a8ca19ce72e2662b68e3a56b7d645c25d9cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:43:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 30 23:43:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 30 23:43:14 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 30 23:43:14 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5d4157af145258483b43323a380b2f1ae0b0a6243482ce6d1d2d94d311e644e4-merged.mount: Deactivated successfully.
Jan 30 23:43:14 np0005603435 podman[245698]: 2026-01-31 04:43:14.397415256 +0000 UTC m=+0.525508900 container remove 039acc7c94720e75532fff87d9d7a8ca19ce72e2662b68e3a56b7d645c25d9cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_goldstine, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:43:14 np0005603435 systemd[1]: libpod-conmon-039acc7c94720e75532fff87d9d7a8ca19ce72e2662b68e3a56b7d645c25d9cd.scope: Deactivated successfully.
Jan 30 23:43:14 np0005603435 podman[245799]: 2026-01-31 04:43:14.848056511 +0000 UTC m=+0.033466267 container create 1b5e2476e85ad9a1fddb5cf144066f0f1e4016617e7f9a5cd97d73ae869ab6c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:43:14 np0005603435 systemd[1]: Started libpod-conmon-1b5e2476e85ad9a1fddb5cf144066f0f1e4016617e7f9a5cd97d73ae869ab6c5.scope.
Jan 30 23:43:14 np0005603435 podman[245799]: 2026-01-31 04:43:14.832206925 +0000 UTC m=+0.017616691 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:43:14 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:43:14 np0005603435 podman[245799]: 2026-01-31 04:43:14.962738493 +0000 UTC m=+0.148148279 container init 1b5e2476e85ad9a1fddb5cf144066f0f1e4016617e7f9a5cd97d73ae869ab6c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 30 23:43:14 np0005603435 podman[245799]: 2026-01-31 04:43:14.971480157 +0000 UTC m=+0.156889903 container start 1b5e2476e85ad9a1fddb5cf144066f0f1e4016617e7f9a5cd97d73ae869ab6c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 30 23:43:14 np0005603435 podman[245799]: 2026-01-31 04:43:14.974827842 +0000 UTC m=+0.160237628 container attach 1b5e2476e85ad9a1fddb5cf144066f0f1e4016617e7f9a5cd97d73ae869ab6c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tesla, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:43:14 np0005603435 great_tesla[245815]: 167 167
Jan 30 23:43:14 np0005603435 systemd[1]: libpod-1b5e2476e85ad9a1fddb5cf144066f0f1e4016617e7f9a5cd97d73ae869ab6c5.scope: Deactivated successfully.
Jan 30 23:43:14 np0005603435 podman[245799]: 2026-01-31 04:43:14.976136246 +0000 UTC m=+0.161545992 container died 1b5e2476e85ad9a1fddb5cf144066f0f1e4016617e7f9a5cd97d73ae869ab6c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tesla, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:43:15 np0005603435 systemd[1]: var-lib-containers-storage-overlay-84c60b627e5dfed0f00f033ea4451b23d198a3bf95a525ef743f625605e167eb-merged.mount: Deactivated successfully.
Jan 30 23:43:15 np0005603435 podman[245799]: 2026-01-31 04:43:15.018405507 +0000 UTC m=+0.203815293 container remove 1b5e2476e85ad9a1fddb5cf144066f0f1e4016617e7f9a5cd97d73ae869ab6c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tesla, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 30 23:43:15 np0005603435 systemd[1]: libpod-conmon-1b5e2476e85ad9a1fddb5cf144066f0f1e4016617e7f9a5cd97d73ae869ab6c5.scope: Deactivated successfully.
Jan 30 23:43:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 3.5 KiB/s wr, 90 op/s
Jan 30 23:43:15 np0005603435 podman[245839]: 2026-01-31 04:43:15.2023009 +0000 UTC m=+0.057728857 container create 13374455835fd203ac6eccab369473260067dde81c3c6f9acbdceea64e3d382b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:43:15 np0005603435 systemd[1]: Started libpod-conmon-13374455835fd203ac6eccab369473260067dde81c3c6f9acbdceea64e3d382b.scope.
Jan 30 23:43:15 np0005603435 podman[245839]: 2026-01-31 04:43:15.179011274 +0000 UTC m=+0.034439271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:43:15 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:43:15 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80fcc3ae40634610b6b7c667babde96733d7cef659897feb63691e1685f2634c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:43:15 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80fcc3ae40634610b6b7c667babde96733d7cef659897feb63691e1685f2634c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:43:15 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80fcc3ae40634610b6b7c667babde96733d7cef659897feb63691e1685f2634c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:43:15 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80fcc3ae40634610b6b7c667babde96733d7cef659897feb63691e1685f2634c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:43:15 np0005603435 podman[245839]: 2026-01-31 04:43:15.31650382 +0000 UTC m=+0.171931787 container init 13374455835fd203ac6eccab369473260067dde81c3c6f9acbdceea64e3d382b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 30 23:43:15 np0005603435 podman[245839]: 2026-01-31 04:43:15.326121136 +0000 UTC m=+0.181549063 container start 13374455835fd203ac6eccab369473260067dde81c3c6f9acbdceea64e3d382b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bassi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 30 23:43:15 np0005603435 podman[245839]: 2026-01-31 04:43:15.329539934 +0000 UTC m=+0.184967911 container attach 13374455835fd203ac6eccab369473260067dde81c3c6f9acbdceea64e3d382b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 30 23:43:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 30 23:43:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 30 23:43:15 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 30 23:43:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:43:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 30 23:43:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 30 23:43:15 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 30 23:43:15 np0005603435 lvm[245934]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:43:15 np0005603435 lvm[245934]: VG ceph_vg0 finished
Jan 30 23:43:15 np0005603435 lvm[245936]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:43:15 np0005603435 lvm[245936]: VG ceph_vg1 finished
Jan 30 23:43:15 np0005603435 lvm[245938]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:43:15 np0005603435 lvm[245938]: VG ceph_vg2 finished
Jan 30 23:43:16 np0005603435 lvm[245939]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:43:16 np0005603435 lvm[245939]: VG ceph_vg0 finished
Jan 30 23:43:16 np0005603435 great_bassi[245856]: {}
Jan 30 23:43:16 np0005603435 systemd[1]: libpod-13374455835fd203ac6eccab369473260067dde81c3c6f9acbdceea64e3d382b.scope: Deactivated successfully.
Jan 30 23:43:16 np0005603435 systemd[1]: libpod-13374455835fd203ac6eccab369473260067dde81c3c6f9acbdceea64e3d382b.scope: Consumed 1.108s CPU time.
Jan 30 23:43:16 np0005603435 podman[245839]: 2026-01-31 04:43:16.140585743 +0000 UTC m=+0.996013690 container died 13374455835fd203ac6eccab369473260067dde81c3c6f9acbdceea64e3d382b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bassi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 30 23:43:16 np0005603435 systemd[1]: var-lib-containers-storage-overlay-80fcc3ae40634610b6b7c667babde96733d7cef659897feb63691e1685f2634c-merged.mount: Deactivated successfully.
Jan 30 23:43:16 np0005603435 podman[245839]: 2026-01-31 04:43:16.334850041 +0000 UTC m=+1.190277948 container remove 13374455835fd203ac6eccab369473260067dde81c3c6f9acbdceea64e3d382b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bassi, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:43:16 np0005603435 systemd[1]: libpod-conmon-13374455835fd203ac6eccab369473260067dde81c3c6f9acbdceea64e3d382b.scope: Deactivated successfully.
Jan 30 23:43:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:43:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:43:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:43:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 30 23:43:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:43:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 30 23:43:16 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.864747826596788e-06 of space, bias 1.0, pg target 0.0005594243479790364 quantized to 32 (current 32)
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660119843077767 of space, bias 1.0, pg target 0.199803595292333 quantized to 32 (current 32)
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.95623069465599e-07 of space, bias 4.0, pg target 0.0011947476833587187 quantized to 16 (current 16)
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:43:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.7 KiB/s wr, 52 op/s
Jan 30 23:43:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:43:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:43:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:43:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3183935431' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:43:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:43:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3183935431' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:43:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 5.9 KiB/s wr, 113 op/s
Jan 30 23:43:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 30 23:43:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 30 23:43:19 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 30 23:43:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:43:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 30 23:43:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 30 23:43:20 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 30 23:43:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:43:20 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974375774' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:43:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:43:20 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974375774' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:43:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.8 KiB/s wr, 106 op/s
Jan 30 23:43:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:43:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3120499821' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:43:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:43:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3120499821' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:43:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:43:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3091023644' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:43:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:43:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3091023644' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:43:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 6.3 KiB/s wr, 139 op/s
Jan 30 23:43:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 4.9 KiB/s wr, 104 op/s
Jan 30 23:43:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:43:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 30 23:43:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 30 23:43:25 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 30 23:43:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.2 KiB/s wr, 70 op/s
Jan 30 23:43:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.1 KiB/s wr, 57 op/s
Jan 30 23:43:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:43:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 KiB/s wr, 48 op/s
Jan 30 23:43:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:43:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1094983534' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:43:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:43:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1094983534' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:43:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:43:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2104050285' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:43:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:43:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2104050285' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:43:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:43:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/101643057' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:43:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:43:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/101643057' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:43:33 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:43:33.045 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:43:33 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:43:33.047 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:43:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 614 B/s wr, 19 op/s
Jan 30 23:43:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 614 B/s wr, 19 op/s
Jan 30 23:43:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:43:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:43:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:43:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:43:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:43:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:43:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:43:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 885 B/s wr, 18 op/s
Jan 30 23:43:38 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:43:38.050 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:43:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 596 B/s wr, 25 op/s
Jan 30 23:43:39 np0005603435 podman[245981]: 2026-01-31 04:43:39.118331974 +0000 UTC m=+0.086522634 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_controller)
Jan 30 23:43:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:43:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 596 B/s wr, 14 op/s
Jan 30 23:43:42 np0005603435 podman[246008]: 2026-01-31 04:43:42.099154122 +0000 UTC m=+0.069622471 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:43:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 596 B/s wr, 14 op/s
Jan 30 23:43:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:43:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/889867312' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:43:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:43:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/889867312' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:43:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 341 B/s wr, 11 op/s
Jan 30 23:43:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:43:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 341 B/s wr, 11 op/s
Jan 30 23:43:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 0 B/s wr, 8 op/s
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:50.702279) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834630702341, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 802, "num_deletes": 255, "total_data_size": 906903, "memory_usage": 922296, "flush_reason": "Manual Compaction"}
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834630846128, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 682954, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18558, "largest_seqno": 19359, "table_properties": {"data_size": 679116, "index_size": 1554, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9716, "raw_average_key_size": 20, "raw_value_size": 671070, "raw_average_value_size": 1427, "num_data_blocks": 68, "num_entries": 470, "num_filter_entries": 470, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769834586, "oldest_key_time": 1769834586, "file_creation_time": 1769834630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 144003 microseconds, and 4505 cpu microseconds.
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:50.846211) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 682954 bytes OK
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:50.846314) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:50.908366) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:50.908424) EVENT_LOG_v1 {"time_micros": 1769834630908411, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:50.908454) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 902762, prev total WAL file size 902762, number of live WAL files 2.
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:50.909357) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353034' seq:72057594037927935, type:22 .. '6D67727374617400373538' seq:0, type:0; will stop at (end)
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(666KB)], [41(9314KB)]
Jan 30 23:43:50 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834630909419, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 10221359, "oldest_snapshot_seqno": -1}
Jan 30 23:43:51 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4476 keys, 7052702 bytes, temperature: kUnknown
Jan 30 23:43:51 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834631058183, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 7052702, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7022604, "index_size": 17828, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 109092, "raw_average_key_size": 24, "raw_value_size": 6941528, "raw_average_value_size": 1550, "num_data_blocks": 748, "num_entries": 4476, "num_filter_entries": 4476, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769834630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:43:51 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:43:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 30 23:43:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:51.058518) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 7052702 bytes
Jan 30 23:43:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:51.188415) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.6 rd, 47.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.1 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(25.3) write-amplify(10.3) OK, records in: 4984, records dropped: 508 output_compression: NoCompression
Jan 30 23:43:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:51.188471) EVENT_LOG_v1 {"time_micros": 1769834631188448, "job": 20, "event": "compaction_finished", "compaction_time_micros": 148894, "compaction_time_cpu_micros": 25398, "output_level": 6, "num_output_files": 1, "total_output_size": 7052702, "num_input_records": 4984, "num_output_records": 4476, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:43:51 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:43:51 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834631188792, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 30 23:43:51 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:43:51 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834631190381, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 30 23:43:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:50.909190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:43:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:51.190522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:43:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:51.190532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:43:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:51.190534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:43:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:51.190536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:43:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:43:51.190537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:43:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 30 23:43:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 30 23:43:52 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 30 23:43:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 204 B/s wr, 0 op/s
Jan 30 23:43:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 204 B/s wr, 0 op/s
Jan 30 23:43:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:43:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:43:55.908 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:43:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:43:55.908 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:43:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:43:55.908 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:43:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 30 23:43:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 30 23:43:56 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 30 23:43:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 1023 B/s wr, 10 op/s
Jan 30 23:43:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1023 B/s wr, 13 op/s
Jan 30 23:44:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:44:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3683666406' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:44:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:44:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3683666406' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:44:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:44:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.5 KiB/s wr, 38 op/s
Jan 30 23:44:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.5 KiB/s wr, 35 op/s
Jan 30 23:44:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.5 KiB/s wr, 35 op/s
Jan 30 23:44:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:44:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 30 23:44:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 30 23:44:05 np0005603435 nova_compute[239938]: 2026-01-31 04:44:05.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:44:05 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 30 23:44:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:44:06
Jan 30 23:44:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:44:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:44:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms']
Jan 30 23:44:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:44:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:44:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:44:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:44:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:44:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:44:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:44:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 921 B/s wr, 27 op/s
Jan 30 23:44:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:44:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:44:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:44:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:44:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:44:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:44:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:44:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:44:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:44:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:44:07 np0005603435 nova_compute[239938]: 2026-01-31 04:44:07.883 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:44:07 np0005603435 nova_compute[239938]: 2026-01-31 04:44:07.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:44:07 np0005603435 nova_compute[239938]: 2026-01-31 04:44:07.918 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:44:07 np0005603435 nova_compute[239938]: 2026-01-31 04:44:07.919 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:44:07 np0005603435 nova_compute[239938]: 2026-01-31 04:44:07.919 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:44:07 np0005603435 nova_compute[239938]: 2026-01-31 04:44:07.920 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:44:07 np0005603435 nova_compute[239938]: 2026-01-31 04:44:07.920 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:44:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:44:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/581654620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:44:08 np0005603435 nova_compute[239938]: 2026-01-31 04:44:08.533 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:44:08 np0005603435 nova_compute[239938]: 2026-01-31 04:44:08.657 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:44:08 np0005603435 nova_compute[239938]: 2026-01-31 04:44:08.657 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5141MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:44:08 np0005603435 nova_compute[239938]: 2026-01-31 04:44:08.658 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:44:08 np0005603435 nova_compute[239938]: 2026-01-31 04:44:08.658 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:44:08 np0005603435 nova_compute[239938]: 2026-01-31 04:44:08.715 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:44:08 np0005603435 nova_compute[239938]: 2026-01-31 04:44:08.716 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:44:08 np0005603435 nova_compute[239938]: 2026-01-31 04:44:08.729 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:44:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 921 B/s wr, 25 op/s
Jan 30 23:44:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:44:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1960151778' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:44:09 np0005603435 nova_compute[239938]: 2026-01-31 04:44:09.260 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:44:09 np0005603435 nova_compute[239938]: 2026-01-31 04:44:09.266 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:44:09 np0005603435 nova_compute[239938]: 2026-01-31 04:44:09.288 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:44:09 np0005603435 nova_compute[239938]: 2026-01-31 04:44:09.289 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:44:09 np0005603435 nova_compute[239938]: 2026-01-31 04:44:09.290 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:44:10 np0005603435 podman[246071]: 2026-01-31 04:44:10.142274454 +0000 UTC m=+0.104260067 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:44:10 np0005603435 nova_compute[239938]: 2026-01-31 04:44:10.290 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:44:10 np0005603435 nova_compute[239938]: 2026-01-31 04:44:10.291 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:44:10 np0005603435 nova_compute[239938]: 2026-01-31 04:44:10.291 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:44:10 np0005603435 nova_compute[239938]: 2026-01-31 04:44:10.314 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 30 23:44:10 np0005603435 nova_compute[239938]: 2026-01-31 04:44:10.314 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:44:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:44:10 np0005603435 nova_compute[239938]: 2026-01-31 04:44:10.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:44:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:44:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2442291043' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:44:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:44:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2442291043' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:44:10 np0005603435 nova_compute[239938]: 2026-01-31 04:44:10.920 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:44:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 818 B/s wr, 5 op/s
Jan 30 23:44:11 np0005603435 nova_compute[239938]: 2026-01-31 04:44:11.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:44:11 np0005603435 nova_compute[239938]: 2026-01-31 04:44:11.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:44:11 np0005603435 nova_compute[239938]: 2026-01-31 04:44:11.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:44:12 np0005603435 nova_compute[239938]: 2026-01-31 04:44:12.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:44:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:44:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2158210805' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:44:13 np0005603435 podman[246100]: 2026-01-31 04:44:13.092333055 +0000 UTC m=+0.064456070 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 30 23:44:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 2.4 KiB/s wr, 22 op/s
Jan 30 23:44:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Jan 30 23:44:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Jan 30 23:44:14 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Jan 30 23:44:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 KiB/s wr, 24 op/s
Jan 30 23:44:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Jan 30 23:44:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Jan 30 23:44:15 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Jan 30 23:44:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:44:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:44:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2792702250' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:44:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:44:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2792702250' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:44:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Jan 30 23:44:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Jan 30 23:44:16 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Jan 30 23:44:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:44:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1024343383' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:44:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:44:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1024343383' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:44:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:44:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2058212121' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:44:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:44:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2058212121' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.5557999717402896e-06 of space, bias 1.0, pg target 0.0007667399915220869 quantized to 32 (current 32)
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665976152434327 of space, bias 1.0, pg target 0.1997928457302981 quantized to 32 (current 32)
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.480261800003027e-07 of space, bias 4.0, pg target 0.0010176314160003632 quantized to 16 (current 16)
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:44:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 6.0 KiB/s wr, 75 op/s
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:44:17 np0005603435 podman[246262]: 2026-01-31 04:44:17.786212758 +0000 UTC m=+0.102113792 container create e4aa2895942618c1145b0c3bf73482c85fcc50f9efc1490522b04067d2117f3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mirzakhani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 30 23:44:17 np0005603435 podman[246262]: 2026-01-31 04:44:17.706860619 +0000 UTC m=+0.022761713 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:44:17 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Jan 30 23:44:17 np0005603435 systemd[1]: Started libpod-conmon-e4aa2895942618c1145b0c3bf73482c85fcc50f9efc1490522b04067d2117f3e.scope.
Jan 30 23:44:17 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:44:18 np0005603435 podman[246262]: 2026-01-31 04:44:18.02716903 +0000 UTC m=+0.343070124 container init e4aa2895942618c1145b0c3bf73482c85fcc50f9efc1490522b04067d2117f3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mirzakhani, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 30 23:44:18 np0005603435 podman[246262]: 2026-01-31 04:44:18.035527054 +0000 UTC m=+0.351428098 container start e4aa2895942618c1145b0c3bf73482c85fcc50f9efc1490522b04067d2117f3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:44:18 np0005603435 objective_mirzakhani[246279]: 167 167
Jan 30 23:44:18 np0005603435 systemd[1]: libpod-e4aa2895942618c1145b0c3bf73482c85fcc50f9efc1490522b04067d2117f3e.scope: Deactivated successfully.
Jan 30 23:44:18 np0005603435 podman[246262]: 2026-01-31 04:44:18.146535763 +0000 UTC m=+0.462436777 container attach e4aa2895942618c1145b0c3bf73482c85fcc50f9efc1490522b04067d2117f3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 30 23:44:18 np0005603435 podman[246262]: 2026-01-31 04:44:18.147016805 +0000 UTC m=+0.462917819 container died e4aa2895942618c1145b0c3bf73482c85fcc50f9efc1490522b04067d2117f3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 30 23:44:18 np0005603435 systemd[1]: var-lib-containers-storage-overlay-2d181992269597249719f997ffda9320374a3058cc38aa4245ca8c4c8f9a3037-merged.mount: Deactivated successfully.
Jan 30 23:44:18 np0005603435 podman[246262]: 2026-01-31 04:44:18.439628628 +0000 UTC m=+0.755529622 container remove e4aa2895942618c1145b0c3bf73482c85fcc50f9efc1490522b04067d2117f3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 30 23:44:18 np0005603435 systemd[1]: libpod-conmon-e4aa2895942618c1145b0c3bf73482c85fcc50f9efc1490522b04067d2117f3e.scope: Deactivated successfully.
Jan 30 23:44:18 np0005603435 podman[246303]: 2026-01-31 04:44:18.659491661 +0000 UTC m=+0.104281168 container create b4f6e4dc535a19dbdab52d14baadab633b6f4f217b45594745b25fa49b9a2408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:44:18 np0005603435 podman[246303]: 2026-01-31 04:44:18.590281281 +0000 UTC m=+0.035070828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:44:18 np0005603435 systemd[1]: Started libpod-conmon-b4f6e4dc535a19dbdab52d14baadab633b6f4f217b45594745b25fa49b9a2408.scope.
Jan 30 23:44:18 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:44:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8ff2113a532fbe251ab2a1c05fa8e05af1887517e5d74f0da68422f241c4ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:44:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8ff2113a532fbe251ab2a1c05fa8e05af1887517e5d74f0da68422f241c4ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:44:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8ff2113a532fbe251ab2a1c05fa8e05af1887517e5d74f0da68422f241c4ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:44:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8ff2113a532fbe251ab2a1c05fa8e05af1887517e5d74f0da68422f241c4ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:44:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8ff2113a532fbe251ab2a1c05fa8e05af1887517e5d74f0da68422f241c4ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:44:18 np0005603435 podman[246303]: 2026-01-31 04:44:18.818866146 +0000 UTC m=+0.263655663 container init b4f6e4dc535a19dbdab52d14baadab633b6f4f217b45594745b25fa49b9a2408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goodall, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 30 23:44:18 np0005603435 podman[246303]: 2026-01-31 04:44:18.828278437 +0000 UTC m=+0.273067984 container start b4f6e4dc535a19dbdab52d14baadab633b6f4f217b45594745b25fa49b9a2408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goodall, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 30 23:44:18 np0005603435 podman[246303]: 2026-01-31 04:44:18.844547653 +0000 UTC m=+0.289337220 container attach b4f6e4dc535a19dbdab52d14baadab633b6f4f217b45594745b25fa49b9a2408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goodall, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 30 23:44:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:44:19 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3801036077' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:44:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:44:19 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3801036077' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:44:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 4.5 KiB/s wr, 119 op/s
Jan 30 23:44:19 np0005603435 happy_goodall[246319]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:44:19 np0005603435 happy_goodall[246319]: --> All data devices are unavailable
Jan 30 23:44:19 np0005603435 systemd[1]: libpod-b4f6e4dc535a19dbdab52d14baadab633b6f4f217b45594745b25fa49b9a2408.scope: Deactivated successfully.
Jan 30 23:44:19 np0005603435 podman[246303]: 2026-01-31 04:44:19.342910758 +0000 UTC m=+0.787700305 container died b4f6e4dc535a19dbdab52d14baadab633b6f4f217b45594745b25fa49b9a2408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goodall, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 30 23:44:19 np0005603435 systemd[1]: var-lib-containers-storage-overlay-ac8ff2113a532fbe251ab2a1c05fa8e05af1887517e5d74f0da68422f241c4ed-merged.mount: Deactivated successfully.
Jan 30 23:44:19 np0005603435 podman[246303]: 2026-01-31 04:44:19.507865976 +0000 UTC m=+0.952655523 container remove b4f6e4dc535a19dbdab52d14baadab633b6f4f217b45594745b25fa49b9a2408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goodall, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 30 23:44:19 np0005603435 systemd[1]: libpod-conmon-b4f6e4dc535a19dbdab52d14baadab633b6f4f217b45594745b25fa49b9a2408.scope: Deactivated successfully.
Jan 30 23:44:19 np0005603435 podman[246414]: 2026-01-31 04:44:19.974473058 +0000 UTC m=+0.087680063 container create f24abfb843004043049f28c2a424b8b109b0c2b8f241d81af63768c5ef455f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bassi, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:44:20 np0005603435 podman[246414]: 2026-01-31 04:44:19.903096223 +0000 UTC m=+0.016303208 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:44:20 np0005603435 systemd[1]: Started libpod-conmon-f24abfb843004043049f28c2a424b8b109b0c2b8f241d81af63768c5ef455f55.scope.
Jan 30 23:44:20 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:44:20 np0005603435 podman[246414]: 2026-01-31 04:44:20.369941551 +0000 UTC m=+0.483148586 container init f24abfb843004043049f28c2a424b8b109b0c2b8f241d81af63768c5ef455f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 30 23:44:20 np0005603435 podman[246414]: 2026-01-31 04:44:20.380015378 +0000 UTC m=+0.493222383 container start f24abfb843004043049f28c2a424b8b109b0c2b8f241d81af63768c5ef455f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:44:20 np0005603435 pedantic_bassi[246430]: 167 167
Jan 30 23:44:20 np0005603435 systemd[1]: libpod-f24abfb843004043049f28c2a424b8b109b0c2b8f241d81af63768c5ef455f55.scope: Deactivated successfully.
Jan 30 23:44:20 np0005603435 podman[246414]: 2026-01-31 04:44:20.63035669 +0000 UTC m=+0.743563685 container attach f24abfb843004043049f28c2a424b8b109b0c2b8f241d81af63768c5ef455f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 30 23:44:20 np0005603435 podman[246414]: 2026-01-31 04:44:20.631213032 +0000 UTC m=+0.744420037 container died f24abfb843004043049f28c2a424b8b109b0c2b8f241d81af63768c5ef455f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 30 23:44:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:44:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 4.5 KiB/s wr, 137 op/s
Jan 30 23:44:21 np0005603435 systemd[1]: var-lib-containers-storage-overlay-ce02e435cad62de9d39c62189f3615a2678fa95e02c15514b2524d1232717951-merged.mount: Deactivated successfully.
Jan 30 23:44:21 np0005603435 podman[246414]: 2026-01-31 04:44:21.690823129 +0000 UTC m=+1.804030124 container remove f24abfb843004043049f28c2a424b8b109b0c2b8f241d81af63768c5ef455f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:44:21 np0005603435 systemd[1]: libpod-conmon-f24abfb843004043049f28c2a424b8b109b0c2b8f241d81af63768c5ef455f55.scope: Deactivated successfully.
Jan 30 23:44:21 np0005603435 podman[246454]: 2026-01-31 04:44:21.869814766 +0000 UTC m=+0.032739128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:44:21 np0005603435 podman[246454]: 2026-01-31 04:44:21.994753251 +0000 UTC m=+0.157677573 container create 9e4bd74d49a351b0eb12bc4839c3af0770b493d76a73a30f24c9b01cc7a974c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_payne, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 30 23:44:22 np0005603435 systemd[1]: Started libpod-conmon-9e4bd74d49a351b0eb12bc4839c3af0770b493d76a73a30f24c9b01cc7a974c0.scope.
Jan 30 23:44:22 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:44:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ff89e3adf029f365079745bfe5dadbe1527fad5e55847dd58259ad6b14cfea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:44:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ff89e3adf029f365079745bfe5dadbe1527fad5e55847dd58259ad6b14cfea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:44:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ff89e3adf029f365079745bfe5dadbe1527fad5e55847dd58259ad6b14cfea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:44:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ff89e3adf029f365079745bfe5dadbe1527fad5e55847dd58259ad6b14cfea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:44:22 np0005603435 podman[246454]: 2026-01-31 04:44:22.271030727 +0000 UTC m=+0.433955059 container init 9e4bd74d49a351b0eb12bc4839c3af0770b493d76a73a30f24c9b01cc7a974c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 30 23:44:22 np0005603435 podman[246454]: 2026-01-31 04:44:22.277454821 +0000 UTC m=+0.440379133 container start 9e4bd74d49a351b0eb12bc4839c3af0770b493d76a73a30f24c9b01cc7a974c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_payne, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:44:22 np0005603435 podman[246454]: 2026-01-31 04:44:22.355409315 +0000 UTC m=+0.518333637 container attach 9e4bd74d49a351b0eb12bc4839c3af0770b493d76a73a30f24c9b01cc7a974c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_payne, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:44:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:44:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/193175082' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:44:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:44:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/193175082' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:44:22 np0005603435 gracious_payne[246471]: {
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:    "0": [
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:        {
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "devices": [
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "/dev/loop3"
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            ],
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "lv_name": "ceph_lv0",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "lv_size": "21470642176",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "name": "ceph_lv0",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "tags": {
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.cluster_name": "ceph",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.crush_device_class": "",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.encrypted": "0",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.objectstore": "bluestore",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.osd_id": "0",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.type": "block",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.vdo": "0",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.with_tpm": "0"
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            },
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "type": "block",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "vg_name": "ceph_vg0"
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:        }
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:    ],
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:    "1": [
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:        {
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "devices": [
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "/dev/loop4"
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            ],
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "lv_name": "ceph_lv1",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "lv_size": "21470642176",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "name": "ceph_lv1",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "tags": {
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.cluster_name": "ceph",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.crush_device_class": "",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.encrypted": "0",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.objectstore": "bluestore",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.osd_id": "1",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.type": "block",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.vdo": "0",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.with_tpm": "0"
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            },
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "type": "block",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "vg_name": "ceph_vg1"
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:        }
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:    ],
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:    "2": [
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:        {
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "devices": [
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "/dev/loop5"
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            ],
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "lv_name": "ceph_lv2",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "lv_size": "21470642176",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "name": "ceph_lv2",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "tags": {
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.cluster_name": "ceph",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.crush_device_class": "",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.encrypted": "0",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.objectstore": "bluestore",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.osd_id": "2",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.type": "block",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.vdo": "0",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:                "ceph.with_tpm": "0"
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            },
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "type": "block",
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:            "vg_name": "ceph_vg2"
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:        }
Jan 30 23:44:22 np0005603435 gracious_payne[246471]:    ]
Jan 30 23:44:22 np0005603435 gracious_payne[246471]: }
Jan 30 23:44:22 np0005603435 systemd[1]: libpod-9e4bd74d49a351b0eb12bc4839c3af0770b493d76a73a30f24c9b01cc7a974c0.scope: Deactivated successfully.
Jan 30 23:44:22 np0005603435 podman[246454]: 2026-01-31 04:44:22.561605728 +0000 UTC m=+0.724530010 container died 9e4bd74d49a351b0eb12bc4839c3af0770b493d76a73a30f24c9b01cc7a974c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 30 23:44:22 np0005603435 systemd[1]: var-lib-containers-storage-overlay-49ff89e3adf029f365079745bfe5dadbe1527fad5e55847dd58259ad6b14cfea-merged.mount: Deactivated successfully.
Jan 30 23:44:23 np0005603435 podman[246454]: 2026-01-31 04:44:23.047854583 +0000 UTC m=+1.210778905 container remove 9e4bd74d49a351b0eb12bc4839c3af0770b493d76a73a30f24c9b01cc7a974c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_payne, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:44:23 np0005603435 systemd[1]: libpod-conmon-9e4bd74d49a351b0eb12bc4839c3af0770b493d76a73a30f24c9b01cc7a974c0.scope: Deactivated successfully.
Jan 30 23:44:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 4.4 KiB/s wr, 116 op/s
Jan 30 23:44:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:44:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1804138778' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:44:23 np0005603435 podman[246556]: 2026-01-31 04:44:23.595413305 +0000 UTC m=+0.099609128 container create d07268592c9a67262fb183466791001b50107fcb9feca8fe10a73175b520ea2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_buck, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:44:23 np0005603435 podman[246556]: 2026-01-31 04:44:23.523097976 +0000 UTC m=+0.027293819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:44:23 np0005603435 systemd[1]: Started libpod-conmon-d07268592c9a67262fb183466791001b50107fcb9feca8fe10a73175b520ea2a.scope.
Jan 30 23:44:23 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:44:23 np0005603435 podman[246556]: 2026-01-31 04:44:23.798509389 +0000 UTC m=+0.302705232 container init d07268592c9a67262fb183466791001b50107fcb9feca8fe10a73175b520ea2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 30 23:44:23 np0005603435 podman[246556]: 2026-01-31 04:44:23.806408061 +0000 UTC m=+0.310603884 container start d07268592c9a67262fb183466791001b50107fcb9feca8fe10a73175b520ea2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 30 23:44:23 np0005603435 fervent_buck[246572]: 167 167
Jan 30 23:44:23 np0005603435 systemd[1]: libpod-d07268592c9a67262fb183466791001b50107fcb9feca8fe10a73175b520ea2a.scope: Deactivated successfully.
Jan 30 23:44:24 np0005603435 podman[246556]: 2026-01-31 04:44:24.085762114 +0000 UTC m=+0.589958007 container attach d07268592c9a67262fb183466791001b50107fcb9feca8fe10a73175b520ea2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_buck, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:44:24 np0005603435 podman[246556]: 2026-01-31 04:44:24.086587885 +0000 UTC m=+0.590783758 container died d07268592c9a67262fb183466791001b50107fcb9feca8fe10a73175b520ea2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 30 23:44:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Jan 30 23:44:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Jan 30 23:44:24 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5b4772a5193ec9725737727ea6db314810ad885d4a3580253892068e7efa748f-merged.mount: Deactivated successfully.
Jan 30 23:44:24 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Jan 30 23:44:24 np0005603435 podman[246556]: 2026-01-31 04:44:24.940351138 +0000 UTC m=+1.444546961 container remove d07268592c9a67262fb183466791001b50107fcb9feca8fe10a73175b520ea2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 30 23:44:24 np0005603435 systemd[1]: libpod-conmon-d07268592c9a67262fb183466791001b50107fcb9feca8fe10a73175b520ea2a.scope: Deactivated successfully.
Jan 30 23:44:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 2.4 KiB/s wr, 85 op/s
Jan 30 23:44:25 np0005603435 podman[246597]: 2026-01-31 04:44:25.094348856 +0000 UTC m=+0.032657016 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:44:25 np0005603435 podman[246597]: 2026-01-31 04:44:25.194443415 +0000 UTC m=+0.132751535 container create 24a0e822c833f3787d55792ac8d92146c317d5a3ab00c8e907385199fcd51afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hodgkin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 30 23:44:25 np0005603435 systemd[1]: Started libpod-conmon-24a0e822c833f3787d55792ac8d92146c317d5a3ab00c8e907385199fcd51afb.scope.
Jan 30 23:44:25 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:44:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ddfbf685bd9164059b1f6be332634352b6d668dfd8370c2d47340b9c28739c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:44:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ddfbf685bd9164059b1f6be332634352b6d668dfd8370c2d47340b9c28739c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:44:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ddfbf685bd9164059b1f6be332634352b6d668dfd8370c2d47340b9c28739c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:44:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ddfbf685bd9164059b1f6be332634352b6d668dfd8370c2d47340b9c28739c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:44:25 np0005603435 podman[246597]: 2026-01-31 04:44:25.653827064 +0000 UTC m=+0.592135234 container init 24a0e822c833f3787d55792ac8d92146c317d5a3ab00c8e907385199fcd51afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:44:25 np0005603435 podman[246597]: 2026-01-31 04:44:25.662444024 +0000 UTC m=+0.600752144 container start 24a0e822c833f3787d55792ac8d92146c317d5a3ab00c8e907385199fcd51afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:44:25 np0005603435 podman[246597]: 2026-01-31 04:44:25.724524632 +0000 UTC m=+0.662832752 container attach 24a0e822c833f3787d55792ac8d92146c317d5a3ab00c8e907385199fcd51afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:44:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:44:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Jan 30 23:44:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Jan 30 23:44:25 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Jan 30 23:44:26 np0005603435 lvm[246694]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:44:26 np0005603435 lvm[246694]: VG ceph_vg2 finished
Jan 30 23:44:26 np0005603435 lvm[246692]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:44:26 np0005603435 lvm[246692]: VG ceph_vg1 finished
Jan 30 23:44:26 np0005603435 lvm[246691]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:44:26 np0005603435 lvm[246691]: VG ceph_vg0 finished
Jan 30 23:44:26 np0005603435 thirsty_hodgkin[246613]: {}
Jan 30 23:44:26 np0005603435 systemd[1]: libpod-24a0e822c833f3787d55792ac8d92146c317d5a3ab00c8e907385199fcd51afb.scope: Deactivated successfully.
Jan 30 23:44:26 np0005603435 systemd[1]: libpod-24a0e822c833f3787d55792ac8d92146c317d5a3ab00c8e907385199fcd51afb.scope: Consumed 1.136s CPU time.
Jan 30 23:44:26 np0005603435 podman[246597]: 2026-01-31 04:44:26.423870476 +0000 UTC m=+1.362178566 container died 24a0e822c833f3787d55792ac8d92146c317d5a3ab00c8e907385199fcd51afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 30 23:44:26 np0005603435 systemd[1]: var-lib-containers-storage-overlay-69ddfbf685bd9164059b1f6be332634352b6d668dfd8370c2d47340b9c28739c-merged.mount: Deactivated successfully.
Jan 30 23:44:26 np0005603435 podman[246597]: 2026-01-31 04:44:26.957138474 +0000 UTC m=+1.895446594 container remove 24a0e822c833f3787d55792ac8d92146c317d5a3ab00c8e907385199fcd51afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:44:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:44:27 np0005603435 systemd[1]: libpod-conmon-24a0e822c833f3787d55792ac8d92146c317d5a3ab00c8e907385199fcd51afb.scope: Deactivated successfully.
Jan 30 23:44:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:44:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:44:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 2.6 KiB/s wr, 73 op/s
Jan 30 23:44:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:44:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Jan 30 23:44:28 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:44:28 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:44:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Jan 30 23:44:28 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Jan 30 23:44:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.3 KiB/s wr, 64 op/s
Jan 30 23:44:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Jan 30 23:44:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Jan 30 23:44:29 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Jan 30 23:44:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:44:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3668483179' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:44:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:44:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3668483179' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:44:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:44:30 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 30 23:44:30 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:44:30.957499) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:44:30 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 30 23:44:30 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834670957556, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 772, "num_deletes": 265, "total_data_size": 827220, "memory_usage": 843160, "flush_reason": "Manual Compaction"}
Jan 30 23:44:30 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834671039632, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 817805, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19360, "largest_seqno": 20131, "table_properties": {"data_size": 813825, "index_size": 1696, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9043, "raw_average_key_size": 18, "raw_value_size": 805510, "raw_average_value_size": 1681, "num_data_blocks": 75, "num_entries": 479, "num_filter_entries": 479, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769834631, "oldest_key_time": 1769834631, "file_creation_time": 1769834670, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 82178 microseconds, and 2119 cpu microseconds.
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:44:31.039683) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 817805 bytes OK
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:44:31.039703) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:44:31.053596) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:44:31.053636) EVENT_LOG_v1 {"time_micros": 1769834671053614, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:44:31.053659) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 823171, prev total WAL file size 823171, number of live WAL files 2.
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:44:31.054274) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353131' seq:0, type:0; will stop at (end)
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(798KB)], [44(6887KB)]
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834671054320, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7870507, "oldest_snapshot_seqno": -1}
Jan 30 23:44:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 1.3 KiB/s wr, 66 op/s
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4410 keys, 7742186 bytes, temperature: kUnknown
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834671174767, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7742186, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7711053, "index_size": 18991, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 109059, "raw_average_key_size": 24, "raw_value_size": 7629679, "raw_average_value_size": 1730, "num_data_blocks": 793, "num_entries": 4410, "num_filter_entries": 4410, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769834671, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2962168865' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2962168865' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:44:31.175087) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7742186 bytes
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:44:31.184891) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.3 rd, 64.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.7 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(19.1) write-amplify(9.5) OK, records in: 4955, records dropped: 545 output_compression: NoCompression
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:44:31.184921) EVENT_LOG_v1 {"time_micros": 1769834671184907, "job": 22, "event": "compaction_finished", "compaction_time_micros": 120555, "compaction_time_cpu_micros": 15830, "output_level": 6, "num_output_files": 1, "total_output_size": 7742186, "num_input_records": 4955, "num_output_records": 4410, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834671185217, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834671186633, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:44:31.054180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:44:31.186852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:44:31.186859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:44:31.186862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:44:31.186865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:44:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:44:31.186869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:44:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:44:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/63213893' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:44:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:44:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/63213893' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:44:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Jan 30 23:44:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Jan 30 23:44:33 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Jan 30 23:44:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 3.7 KiB/s wr, 125 op/s
Jan 30 23:44:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:44:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/751503353' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:44:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:44:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/751503353' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:44:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 2.9 KiB/s wr, 87 op/s
Jan 30 23:44:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:44:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Jan 30 23:44:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Jan 30 23:44:35 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Jan 30 23:44:35 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:44:35.914 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:44:35 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:44:35.916 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:44:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:44:36.919 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:44:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:44:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:44:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:44:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:44:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:44:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:44:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:44:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/323055180' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:44:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 4.1 KiB/s wr, 102 op/s
Jan 30 23:44:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Jan 30 23:44:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Jan 30 23:44:37 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Jan 30 23:44:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Jan 30 23:44:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Jan 30 23:44:38 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Jan 30 23:44:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 KiB/s wr, 42 op/s
Jan 30 23:44:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Jan 30 23:44:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Jan 30 23:44:40 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Jan 30 23:44:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:44:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Jan 30 23:44:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Jan 30 23:44:40 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Jan 30 23:44:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 5.5 KiB/s wr, 79 op/s
Jan 30 23:44:41 np0005603435 podman[246734]: 2026-01-31 04:44:41.138988847 +0000 UTC m=+0.103438296 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:44:41 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 30 23:44:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:44:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2247200891' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:44:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:44:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2247200891' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:44:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:44:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2168629660' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:44:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:44:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2168629660' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:44:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Jan 30 23:44:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Jan 30 23:44:42 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Jan 30 23:44:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:44:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3010835707' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:44:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:44:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3010835707' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:44:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 6.4 KiB/s wr, 106 op/s
Jan 30 23:44:44 np0005603435 podman[246760]: 2026-01-31 04:44:44.108213527 +0000 UTC m=+0.064902141 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 30 23:44:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.8 KiB/s wr, 79 op/s
Jan 30 23:44:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:44:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 4.2 KiB/s wr, 134 op/s
Jan 30 23:44:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 3.3 KiB/s wr, 106 op/s
Jan 30 23:44:50 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 30 23:44:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:44:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Jan 30 23:44:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Jan 30 23:44:50 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Jan 30 23:44:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 1.9 KiB/s wr, 77 op/s
Jan 30 23:44:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:44:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1719102074' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:44:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:44:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1719102074' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:44:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:44:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1777546199' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:44:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:44:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1777546199' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:44:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 41 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 2.2 KiB/s wr, 89 op/s
Jan 30 23:44:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 41 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 524 KiB/s rd, 2.4 KiB/s wr, 92 op/s
Jan 30 23:44:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:44:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:44:55.909 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:44:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:44:55.910 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:44:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:44:55.910 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:44:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 41 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.5 KiB/s wr, 40 op/s
Jan 30 23:44:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 41 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 KiB/s wr, 43 op/s
Jan 30 23:45:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Jan 30 23:45:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Jan 30 23:45:00 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Jan 30 23:45:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:45:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2050459620' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:45:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:45:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2050459620' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:45:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:45:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 47 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 657 KiB/s wr, 48 op/s
Jan 30 23:45:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Jan 30 23:45:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Jan 30 23:45:01 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Jan 30 23:45:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 47 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 115 op/s
Jan 30 23:45:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Jan 30 23:45:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Jan 30 23:45:03 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Jan 30 23:45:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Jan 30 23:45:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Jan 30 23:45:04 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Jan 30 23:45:04 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 30 23:45:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 3.0 MiB/s wr, 160 op/s
Jan 30 23:45:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:45:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3705791041' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:45:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:45:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3705791041' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:45:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:45:05 np0005603435 nova_compute[239938]: 2026-01-31 04:45:05.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:45:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:45:06
Jan 30 23:45:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:45:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:45:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.mgr', 'images', 'default.rgw.control', 'volumes', 'default.rgw.log', '.rgw.root', 'backups']
Jan 30 23:45:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:45:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:45:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:45:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:45:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:45:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:45:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:45:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 2.5 MiB/s wr, 185 op/s
Jan 30 23:45:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:45:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:45:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:45:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:45:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:45:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:45:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:45:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:45:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:45:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:45:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Jan 30 23:45:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Jan 30 23:45:08 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Jan 30 23:45:08 np0005603435 nova_compute[239938]: 2026-01-31 04:45:08.883 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:45:08 np0005603435 nova_compute[239938]: 2026-01-31 04:45:08.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:45:08 np0005603435 nova_compute[239938]: 2026-01-31 04:45:08.886 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:45:08 np0005603435 nova_compute[239938]: 2026-01-31 04:45:08.886 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:45:08 np0005603435 nova_compute[239938]: 2026-01-31 04:45:08.906 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 30 23:45:08 np0005603435 nova_compute[239938]: 2026-01-31 04:45:08.907 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:45:08 np0005603435 nova_compute[239938]: 2026-01-31 04:45:08.907 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:45:08 np0005603435 nova_compute[239938]: 2026-01-31 04:45:08.944 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:08 np0005603435 nova_compute[239938]: 2026-01-31 04:45:08.944 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:08 np0005603435 nova_compute[239938]: 2026-01-31 04:45:08.945 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:08 np0005603435 nova_compute[239938]: 2026-01-31 04:45:08.945 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:45:08 np0005603435 nova_compute[239938]: 2026-01-31 04:45:08.945 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.2 KiB/s wr, 71 op/s
Jan 30 23:45:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:45:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/788719888' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:45:09 np0005603435 nova_compute[239938]: 2026-01-31 04:45:09.549 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:09 np0005603435 nova_compute[239938]: 2026-01-31 04:45:09.711 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:45:09 np0005603435 nova_compute[239938]: 2026-01-31 04:45:09.712 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5100MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:45:09 np0005603435 nova_compute[239938]: 2026-01-31 04:45:09.712 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:09 np0005603435 nova_compute[239938]: 2026-01-31 04:45:09.713 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:09 np0005603435 nova_compute[239938]: 2026-01-31 04:45:09.784 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:45:09 np0005603435 nova_compute[239938]: 2026-01-31 04:45:09.785 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:45:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:45:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4100174914' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:45:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:45:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4100174914' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:45:09 np0005603435 nova_compute[239938]: 2026-01-31 04:45:09.805 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:45:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2868293054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:45:10 np0005603435 nova_compute[239938]: 2026-01-31 04:45:10.356 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:10 np0005603435 nova_compute[239938]: 2026-01-31 04:45:10.362 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:45:10 np0005603435 nova_compute[239938]: 2026-01-31 04:45:10.380 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:45:10 np0005603435 nova_compute[239938]: 2026-01-31 04:45:10.383 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:45:10 np0005603435 nova_compute[239938]: 2026-01-31 04:45:10.384 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:45:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Jan 30 23:45:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Jan 30 23:45:10 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Jan 30 23:45:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.6 KiB/s wr, 71 op/s
Jan 30 23:45:12 np0005603435 podman[246825]: 2026-01-31 04:45:12.117272242 +0000 UTC m=+0.084689937 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 30 23:45:12 np0005603435 nova_compute[239938]: 2026-01-31 04:45:12.364 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:45:12 np0005603435 nova_compute[239938]: 2026-01-31 04:45:12.365 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:45:12 np0005603435 nova_compute[239938]: 2026-01-31 04:45:12.365 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:45:12 np0005603435 nova_compute[239938]: 2026-01-31 04:45:12.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:45:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Jan 30 23:45:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Jan 30 23:45:13 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Jan 30 23:45:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 5.7 KiB/s wr, 120 op/s
Jan 30 23:45:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:45:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2212744237' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:45:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:45:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2212744237' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:45:13 np0005603435 nova_compute[239938]: 2026-01-31 04:45:13.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:45:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:45:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/264429726' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:45:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:45:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/264429726' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:45:15 np0005603435 podman[246851]: 2026-01-31 04:45:15.102949143 +0000 UTC m=+0.061488334 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 30 23:45:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 5.1 KiB/s wr, 132 op/s
Jan 30 23:45:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:45:16 np0005603435 nova_compute[239938]: 2026-01-31 04:45:16.893 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Acquiring lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:16 np0005603435 nova_compute[239938]: 2026-01-31 04:45:16.894 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:16 np0005603435 nova_compute[239938]: 2026-01-31 04:45:16.915 239942 DEBUG nova.compute.manager [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:45:17 np0005603435 nova_compute[239938]: 2026-01-31 04:45:17.064 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:17 np0005603435 nova_compute[239938]: 2026-01-31 04:45:17.065 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:17 np0005603435 nova_compute[239938]: 2026-01-31 04:45:17.072 239942 DEBUG nova.virt.hardware [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:45:17 np0005603435 nova_compute[239938]: 2026-01-31 04:45:17.073 239942 INFO nova.compute.claims [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.2419772432862202e-06 of space, bias 1.0, pg target 0.0009725931729858661 quantized to 32 (current 32)
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.9730818010039447e-07 of space, bias 1.0, pg target 5.9192454030118344e-05 quantized to 32 (current 32)
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00066597595060835 of space, bias 1.0, pg target 0.199792785182505 quantized to 32 (current 32)
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.19723967999121e-07 of space, bias 4.0, pg target 0.0009836687615989452 quantized to 16 (current 16)
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:45:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 5.2 KiB/s wr, 119 op/s
Jan 30 23:45:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:45:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1612455900' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:45:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:45:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1612455900' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:45:17 np0005603435 nova_compute[239938]: 2026-01-31 04:45:17.185 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:45:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3114388099' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:45:17 np0005603435 nova_compute[239938]: 2026-01-31 04:45:17.791 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:17 np0005603435 nova_compute[239938]: 2026-01-31 04:45:17.797 239942 DEBUG nova.compute.provider_tree [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:45:17 np0005603435 nova_compute[239938]: 2026-01-31 04:45:17.818 239942 DEBUG nova.scheduler.client.report [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:45:17 np0005603435 nova_compute[239938]: 2026-01-31 04:45:17.845 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:17 np0005603435 nova_compute[239938]: 2026-01-31 04:45:17.846 239942 DEBUG nova.compute.manager [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:45:17 np0005603435 nova_compute[239938]: 2026-01-31 04:45:17.960 239942 DEBUG nova.compute.manager [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:45:17 np0005603435 nova_compute[239938]: 2026-01-31 04:45:17.961 239942 DEBUG nova.network.neutron [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:45:17 np0005603435 nova_compute[239938]: 2026-01-31 04:45:17.989 239942 INFO nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:45:18 np0005603435 nova_compute[239938]: 2026-01-31 04:45:18.008 239942 DEBUG nova.compute.manager [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:45:18 np0005603435 nova_compute[239938]: 2026-01-31 04:45:18.090 239942 DEBUG nova.compute.manager [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:45:18 np0005603435 nova_compute[239938]: 2026-01-31 04:45:18.092 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:45:18 np0005603435 nova_compute[239938]: 2026-01-31 04:45:18.092 239942 INFO nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Creating image(s)#033[00m
Jan 30 23:45:18 np0005603435 nova_compute[239938]: 2026-01-31 04:45:18.122 239942 DEBUG nova.storage.rbd_utils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] rbd image a4cae87c-b7f1-42ce-836c-8effc2fd4de5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:45:18 np0005603435 nova_compute[239938]: 2026-01-31 04:45:18.152 239942 DEBUG nova.storage.rbd_utils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] rbd image a4cae87c-b7f1-42ce-836c-8effc2fd4de5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:45:18 np0005603435 nova_compute[239938]: 2026-01-31 04:45:18.180 239942 DEBUG nova.storage.rbd_utils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] rbd image a4cae87c-b7f1-42ce-836c-8effc2fd4de5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:45:18 np0005603435 nova_compute[239938]: 2026-01-31 04:45:18.184 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Acquiring lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:18 np0005603435 nova_compute[239938]: 2026-01-31 04:45:18.186 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:18 np0005603435 nova_compute[239938]: 2026-01-31 04:45:18.648 239942 DEBUG nova.virt.libvirt.imagebackend [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Image locations are: [{'url': 'rbd://95d2f419-0dd0-56f2-a094-353f8c7597ed/images/bf004ad8-fb70-4caa-9170-9f02e22d687d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://95d2f419-0dd0-56f2-a094-353f8c7597ed/images/bf004ad8-fb70-4caa-9170-9f02e22d687d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 30 23:45:18 np0005603435 nova_compute[239938]: 2026-01-31 04:45:18.677 239942 WARNING oslo_policy.policy [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Jan 30 23:45:18 np0005603435 nova_compute[239938]: 2026-01-31 04:45:18.678 239942 WARNING oslo_policy.policy [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Jan 30 23:45:18 np0005603435 nova_compute[239938]: 2026-01-31 04:45:18.682 239942 DEBUG nova.policy [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b64529ecf0d54f718c07683e4fe74bc1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '293dba0b4ad14f1cb4a3b761ad5fd07a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:45:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 4.6 KiB/s wr, 131 op/s
Jan 30 23:45:19 np0005603435 nova_compute[239938]: 2026-01-31 04:45:19.902 239942 DEBUG nova.network.neutron [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Successfully created port: e2a2845b-61f2-4c1a-ab7e-89ce08066e21 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:45:20 np0005603435 nova_compute[239938]: 2026-01-31 04:45:20.365 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:20 np0005603435 nova_compute[239938]: 2026-01-31 04:45:20.415 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4.part --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:20 np0005603435 nova_compute[239938]: 2026-01-31 04:45:20.416 239942 DEBUG nova.virt.images [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] bf004ad8-fb70-4caa-9170-9f02e22d687d was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Jan 30 23:45:20 np0005603435 nova_compute[239938]: 2026-01-31 04:45:20.433 239942 DEBUG nova.privsep.utils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 30 23:45:20 np0005603435 nova_compute[239938]: 2026-01-31 04:45:20.434 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4.part /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:45:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Jan 30 23:45:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Jan 30 23:45:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 543 KiB/s rd, 3.8 KiB/s wr, 117 op/s
Jan 30 23:45:21 np0005603435 nova_compute[239938]: 2026-01-31 04:45:21.168 239942 DEBUG nova.network.neutron [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Successfully updated port: e2a2845b-61f2-4c1a-ab7e-89ce08066e21 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:45:21 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Jan 30 23:45:21 np0005603435 nova_compute[239938]: 2026-01-31 04:45:21.188 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Acquiring lock "refresh_cache-a4cae87c-b7f1-42ce-836c-8effc2fd4de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:45:21 np0005603435 nova_compute[239938]: 2026-01-31 04:45:21.189 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Acquired lock "refresh_cache-a4cae87c-b7f1-42ce-836c-8effc2fd4de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:45:21 np0005603435 nova_compute[239938]: 2026-01-31 04:45:21.190 239942 DEBUG nova.network.neutron [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:45:21 np0005603435 nova_compute[239938]: 2026-01-31 04:45:21.461 239942 DEBUG nova.network.neutron [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:45:21 np0005603435 nova_compute[239938]: 2026-01-31 04:45:21.764 239942 DEBUG nova.compute.manager [req-cd4b75d1-d7dc-4754-b62c-7de9a9e55ea3 req-539aad73-622c-4882-9b04-40284dac65dc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Received event network-changed-e2a2845b-61f2-4c1a-ab7e-89ce08066e21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:45:21 np0005603435 nova_compute[239938]: 2026-01-31 04:45:21.765 239942 DEBUG nova.compute.manager [req-cd4b75d1-d7dc-4754-b62c-7de9a9e55ea3 req-539aad73-622c-4882-9b04-40284dac65dc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Refreshing instance network info cache due to event network-changed-e2a2845b-61f2-4c1a-ab7e-89ce08066e21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:45:21 np0005603435 nova_compute[239938]: 2026-01-31 04:45:21.765 239942 DEBUG oslo_concurrency.lockutils [req-cd4b75d1-d7dc-4754-b62c-7de9a9e55ea3 req-539aad73-622c-4882-9b04-40284dac65dc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-a4cae87c-b7f1-42ce-836c-8effc2fd4de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:45:22 np0005603435 nova_compute[239938]: 2026-01-31 04:45:22.251 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4.part /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4.converted" returned: 0 in 1.817s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:22 np0005603435 nova_compute[239938]: 2026-01-31 04:45:22.255 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:22 np0005603435 nova_compute[239938]: 2026-01-31 04:45:22.331 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4.converted --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:22 np0005603435 nova_compute[239938]: 2026-01-31 04:45:22.333 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:22 np0005603435 nova_compute[239938]: 2026-01-31 04:45:22.359 239942 DEBUG nova.storage.rbd_utils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] rbd image a4cae87c-b7f1-42ce-836c-8effc2fd4de5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:45:22 np0005603435 nova_compute[239938]: 2026-01-31 04:45:22.364 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 a4cae87c-b7f1-42ce-836c-8effc2fd4de5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 KiB/s wr, 61 op/s
Jan 30 23:45:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Jan 30 23:45:23 np0005603435 nova_compute[239938]: 2026-01-31 04:45:23.371 239942 DEBUG nova.network.neutron [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Updating instance_info_cache with network_info: [{"id": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "address": "fa:16:3e:b0:a8:25", "network": {"id": "10e8924d-47c6-46fb-ba57-a83ece22f2a9", "bridge": "br-int", "label": "tempest-VolumesActionsTest-393692378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "293dba0b4ad14f1cb4a3b761ad5fd07a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2a2845b-61", "ovs_interfaceid": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:45:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Jan 30 23:45:23 np0005603435 nova_compute[239938]: 2026-01-31 04:45:23.393 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Releasing lock "refresh_cache-a4cae87c-b7f1-42ce-836c-8effc2fd4de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:45:23 np0005603435 nova_compute[239938]: 2026-01-31 04:45:23.394 239942 DEBUG nova.compute.manager [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Instance network_info: |[{"id": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "address": "fa:16:3e:b0:a8:25", "network": {"id": "10e8924d-47c6-46fb-ba57-a83ece22f2a9", "bridge": "br-int", "label": "tempest-VolumesActionsTest-393692378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "293dba0b4ad14f1cb4a3b761ad5fd07a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2a2845b-61", "ovs_interfaceid": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:45:23 np0005603435 nova_compute[239938]: 2026-01-31 04:45:23.394 239942 DEBUG oslo_concurrency.lockutils [req-cd4b75d1-d7dc-4754-b62c-7de9a9e55ea3 req-539aad73-622c-4882-9b04-40284dac65dc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-a4cae87c-b7f1-42ce-836c-8effc2fd4de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:45:23 np0005603435 nova_compute[239938]: 2026-01-31 04:45:23.395 239942 DEBUG nova.network.neutron [req-cd4b75d1-d7dc-4754-b62c-7de9a9e55ea3 req-539aad73-622c-4882-9b04-40284dac65dc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Refreshing network info cache for port e2a2845b-61f2-4c1a-ab7e-89ce08066e21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:45:23 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Jan 30 23:45:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Jan 30 23:45:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Jan 30 23:45:24 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Jan 30 23:45:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 50 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 565 KiB/s wr, 37 op/s
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.262 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 a4cae87c-b7f1-42ce-836c-8effc2fd4de5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.898s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.337 239942 DEBUG nova.storage.rbd_utils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] resizing rbd image a4cae87c-b7f1-42ce-836c-8effc2fd4de5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.398 239942 DEBUG nova.network.neutron [req-cd4b75d1-d7dc-4754-b62c-7de9a9e55ea3 req-539aad73-622c-4882-9b04-40284dac65dc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Updated VIF entry in instance network info cache for port e2a2845b-61f2-4c1a-ab7e-89ce08066e21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.399 239942 DEBUG nova.network.neutron [req-cd4b75d1-d7dc-4754-b62c-7de9a9e55ea3 req-539aad73-622c-4882-9b04-40284dac65dc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Updating instance_info_cache with network_info: [{"id": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "address": "fa:16:3e:b0:a8:25", "network": {"id": "10e8924d-47c6-46fb-ba57-a83ece22f2a9", "bridge": "br-int", "label": "tempest-VolumesActionsTest-393692378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "293dba0b4ad14f1cb4a3b761ad5fd07a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2a2845b-61", "ovs_interfaceid": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.436 239942 DEBUG oslo_concurrency.lockutils [req-cd4b75d1-d7dc-4754-b62c-7de9a9e55ea3 req-539aad73-622c-4882-9b04-40284dac65dc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-a4cae87c-b7f1-42ce-836c-8effc2fd4de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.695 239942 DEBUG nova.objects.instance [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lazy-loading 'migration_context' on Instance uuid a4cae87c-b7f1-42ce-836c-8effc2fd4de5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.714 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.714 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Ensure instance console log exists: /var/lib/nova/instances/a4cae87c-b7f1-42ce-836c-8effc2fd4de5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.715 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.715 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.716 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.720 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Start _get_guest_xml network_info=[{"id": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "address": "fa:16:3e:b0:a8:25", "network": {"id": "10e8924d-47c6-46fb-ba57-a83ece22f2a9", "bridge": "br-int", "label": "tempest-VolumesActionsTest-393692378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "293dba0b4ad14f1cb4a3b761ad5fd07a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2a2845b-61", "ovs_interfaceid": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'guest_format': None, 'image_id': 'bf004ad8-fb70-4caa-9170-9f02e22d687d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.726 239942 WARNING nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.730 239942 DEBUG nova.virt.libvirt.host [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.731 239942 DEBUG nova.virt.libvirt.host [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.735 239942 DEBUG nova.virt.libvirt.host [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.736 239942 DEBUG nova.virt.libvirt.host [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.737 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.737 239942 DEBUG nova.virt.hardware [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.738 239942 DEBUG nova.virt.hardware [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.738 239942 DEBUG nova.virt.hardware [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.739 239942 DEBUG nova.virt.hardware [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.740 239942 DEBUG nova.virt.hardware [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.741 239942 DEBUG nova.virt.hardware [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.741 239942 DEBUG nova.virt.hardware [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.742 239942 DEBUG nova.virt.hardware [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.742 239942 DEBUG nova.virt.hardware [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.743 239942 DEBUG nova.virt.hardware [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.743 239942 DEBUG nova.virt.hardware [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.753 239942 DEBUG nova.privsep.utils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 30 23:45:25 np0005603435 nova_compute[239938]: 2026-01-31 04:45:25.754 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:45:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:45:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3597103012' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:45:26 np0005603435 nova_compute[239938]: 2026-01-31 04:45:26.315 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:26 np0005603435 nova_compute[239938]: 2026-01-31 04:45:26.339 239942 DEBUG nova.storage.rbd_utils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] rbd image a4cae87c-b7f1-42ce-836c-8effc2fd4de5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:45:26 np0005603435 nova_compute[239938]: 2026-01-31 04:45:26.344 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:45:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3806786799' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:45:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:45:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3806786799' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:45:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:45:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2218825686' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:45:26 np0005603435 nova_compute[239938]: 2026-01-31 04:45:26.904 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:26 np0005603435 nova_compute[239938]: 2026-01-31 04:45:26.906 239942 DEBUG nova.virt.libvirt.vif [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:45:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-255133828',display_name='tempest-VolumesActionsTest-instance-255133828',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-255133828',id=1,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='293dba0b4ad14f1cb4a3b761ad5fd07a',ramdisk_id='',reservation_id='r-hsyfrpbb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-740629801',owner_user_name='tempest-VolumesActionsTest-740629801-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:45:18Z,user_data=None,user_id='b64529ecf0d54f718c07683e4fe74bc1',uuid=a4cae87c-b7f1-42ce-836c-8effc2fd4de5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "address": "fa:16:3e:b0:a8:25", "network": {"id": "10e8924d-47c6-46fb-ba57-a83ece22f2a9", "bridge": "br-int", "label": "tempest-VolumesActionsTest-393692378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "293dba0b4ad14f1cb4a3b761ad5fd07a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2a2845b-61", "ovs_interfaceid": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:45:26 np0005603435 nova_compute[239938]: 2026-01-31 04:45:26.906 239942 DEBUG nova.network.os_vif_util [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Converting VIF {"id": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "address": "fa:16:3e:b0:a8:25", "network": {"id": "10e8924d-47c6-46fb-ba57-a83ece22f2a9", "bridge": "br-int", "label": "tempest-VolumesActionsTest-393692378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "293dba0b4ad14f1cb4a3b761ad5fd07a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2a2845b-61", "ovs_interfaceid": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:45:26 np0005603435 nova_compute[239938]: 2026-01-31 04:45:26.907 239942 DEBUG nova.network.os_vif_util [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:a8:25,bridge_name='br-int',has_traffic_filtering=True,id=e2a2845b-61f2-4c1a-ab7e-89ce08066e21,network=Network(10e8924d-47c6-46fb-ba57-a83ece22f2a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2a2845b-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:45:26 np0005603435 nova_compute[239938]: 2026-01-31 04:45:26.909 239942 DEBUG nova.objects.instance [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lazy-loading 'pci_devices' on Instance uuid a4cae87c-b7f1-42ce-836c-8effc2fd4de5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:45:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 66 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.5 MiB/s wr, 45 op/s
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.468 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  <uuid>a4cae87c-b7f1-42ce-836c-8effc2fd4de5</uuid>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  <name>instance-00000001</name>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <nova:name>tempest-VolumesActionsTest-instance-255133828</nova:name>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:45:25</nova:creationTime>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:        <nova:user uuid="b64529ecf0d54f718c07683e4fe74bc1">tempest-VolumesActionsTest-740629801-project-member</nova:user>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:        <nova:project uuid="293dba0b4ad14f1cb4a3b761ad5fd07a">tempest-VolumesActionsTest-740629801</nova:project>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <nova:root type="image" uuid="bf004ad8-fb70-4caa-9170-9f02e22d687d"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:        <nova:port uuid="e2a2845b-61f2-4c1a-ab7e-89ce08066e21">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <entry name="serial">a4cae87c-b7f1-42ce-836c-8effc2fd4de5</entry>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <entry name="uuid">a4cae87c-b7f1-42ce-836c-8effc2fd4de5</entry>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/a4cae87c-b7f1-42ce-836c-8effc2fd4de5_disk">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/a4cae87c-b7f1-42ce-836c-8effc2fd4de5_disk.config">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:b0:a8:25"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <target dev="tape2a2845b-61"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/a4cae87c-b7f1-42ce-836c-8effc2fd4de5/console.log" append="off"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:45:27 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:45:27 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:45:27 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:45:27 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.469 239942 DEBUG nova.compute.manager [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Preparing to wait for external event network-vif-plugged-e2a2845b-61f2-4c1a-ab7e-89ce08066e21 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.469 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Acquiring lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.469 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.470 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.471 239942 DEBUG nova.virt.libvirt.vif [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:45:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-255133828',display_name='tempest-VolumesActionsTest-instance-255133828',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-255133828',id=1,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='293dba0b4ad14f1cb4a3b761ad5fd07a',ramdisk_id='',reservation_id='r-hsyfrpbb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-740629801',owner_user_name='tempest-VolumesActionsTest-740629801-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:45:18Z,user_data=None,user_id='b64529ecf0d54f718c07683e4fe74bc1',uuid=a4cae87c-b7f1-42ce-836c-8effc2fd4de5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "address": "fa:16:3e:b0:a8:25", "network": {"id": "10e8924d-47c6-46fb-ba57-a83ece22f2a9", "bridge": "br-int", "label": "tempest-VolumesActionsTest-393692378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "293dba0b4ad14f1cb4a3b761ad5fd07a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2a2845b-61", "ovs_interfaceid": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.472 239942 DEBUG nova.network.os_vif_util [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Converting VIF {"id": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "address": "fa:16:3e:b0:a8:25", "network": {"id": "10e8924d-47c6-46fb-ba57-a83ece22f2a9", "bridge": "br-int", "label": "tempest-VolumesActionsTest-393692378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "293dba0b4ad14f1cb4a3b761ad5fd07a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2a2845b-61", "ovs_interfaceid": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.472 239942 DEBUG nova.network.os_vif_util [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:a8:25,bridge_name='br-int',has_traffic_filtering=True,id=e2a2845b-61f2-4c1a-ab7e-89ce08066e21,network=Network(10e8924d-47c6-46fb-ba57-a83ece22f2a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2a2845b-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.473 239942 DEBUG os_vif [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:a8:25,bridge_name='br-int',has_traffic_filtering=True,id=e2a2845b-61f2-4c1a-ab7e-89ce08066e21,network=Network(10e8924d-47c6-46fb-ba57-a83ece22f2a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2a2845b-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.500 239942 DEBUG ovsdbapp.backend.ovs_idl [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.500 239942 DEBUG ovsdbapp.backend.ovs_idl [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.500 239942 DEBUG ovsdbapp.backend.ovs_idl [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.501 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.501 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.501 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.502 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.513 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.513 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.514 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:45:27 np0005603435 nova_compute[239938]: 2026-01-31 04:45:27.515 239942 INFO oslo.privsep.daemon [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpeu1s80u5/privsep.sock']#033[00m
Jan 30 23:45:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:45:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:45:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:45:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:45:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:45:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:45:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:45:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:45:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:45:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:45:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:45:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.235 239942 INFO oslo.privsep.daemon [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.114 247267 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.119 247267 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.125 247267 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.125 247267 INFO oslo.privsep.daemon [-] privsep daemon running as pid 247267#033[00m
Jan 30 23:45:28 np0005603435 podman[247279]: 2026-01-31 04:45:28.24568948 +0000 UTC m=+0.046222871 container create 722a6eb31b8593b4ee8bb4c31e5c913a187c0773b24fe09682308d28aef63ffc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_varahamihira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:45:28 np0005603435 systemd[1]: Started libpod-conmon-722a6eb31b8593b4ee8bb4c31e5c913a187c0773b24fe09682308d28aef63ffc.scope.
Jan 30 23:45:28 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:45:28 np0005603435 podman[247279]: 2026-01-31 04:45:28.219171127 +0000 UTC m=+0.019704598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:45:28 np0005603435 podman[247279]: 2026-01-31 04:45:28.326864427 +0000 UTC m=+0.127397828 container init 722a6eb31b8593b4ee8bb4c31e5c913a187c0773b24fe09682308d28aef63ffc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:45:28 np0005603435 podman[247279]: 2026-01-31 04:45:28.334491632 +0000 UTC m=+0.135025063 container start 722a6eb31b8593b4ee8bb4c31e5c913a187c0773b24fe09682308d28aef63ffc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_varahamihira, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:45:28 np0005603435 podman[247279]: 2026-01-31 04:45:28.338444388 +0000 UTC m=+0.138977809 container attach 722a6eb31b8593b4ee8bb4c31e5c913a187c0773b24fe09682308d28aef63ffc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_varahamihira, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 30 23:45:28 np0005603435 exciting_varahamihira[247296]: 167 167
Jan 30 23:45:28 np0005603435 systemd[1]: libpod-722a6eb31b8593b4ee8bb4c31e5c913a187c0773b24fe09682308d28aef63ffc.scope: Deactivated successfully.
Jan 30 23:45:28 np0005603435 podman[247279]: 2026-01-31 04:45:28.340691002 +0000 UTC m=+0.141224433 container died 722a6eb31b8593b4ee8bb4c31e5c913a187c0773b24fe09682308d28aef63ffc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 30 23:45:28 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d22bea9363ab9287b2016b0b9257083994ea13a92781c80d2cfb2e5ada5eae00-merged.mount: Deactivated successfully.
Jan 30 23:45:28 np0005603435 podman[247279]: 2026-01-31 04:45:28.400689665 +0000 UTC m=+0.201223056 container remove 722a6eb31b8593b4ee8bb4c31e5c913a187c0773b24fe09682308d28aef63ffc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_varahamihira, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 30 23:45:28 np0005603435 systemd[1]: libpod-conmon-722a6eb31b8593b4ee8bb4c31e5c913a187c0773b24fe09682308d28aef63ffc.scope: Deactivated successfully.
Jan 30 23:45:28 np0005603435 podman[247322]: 2026-01-31 04:45:28.536344493 +0000 UTC m=+0.036905726 container create f09f24216ac68ce4d20712fe06e22522e837c1e992af114f6005e86d445e658a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 30 23:45:28 np0005603435 systemd[1]: Started libpod-conmon-f09f24216ac68ce4d20712fe06e22522e837c1e992af114f6005e86d445e658a.scope.
Jan 30 23:45:28 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:45:28 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402a442b29eed0fc838a7bed2319ea6a11671cafd76b04dc74a753047fdd0961/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:45:28 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402a442b29eed0fc838a7bed2319ea6a11671cafd76b04dc74a753047fdd0961/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:45:28 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402a442b29eed0fc838a7bed2319ea6a11671cafd76b04dc74a753047fdd0961/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:45:28 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402a442b29eed0fc838a7bed2319ea6a11671cafd76b04dc74a753047fdd0961/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:45:28 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402a442b29eed0fc838a7bed2319ea6a11671cafd76b04dc74a753047fdd0961/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:45:28 np0005603435 podman[247322]: 2026-01-31 04:45:28.521429331 +0000 UTC m=+0.021990604 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.620 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.621 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape2a2845b-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.622 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape2a2845b-61, col_values=(('external_ids', {'iface-id': 'e2a2845b-61f2-4c1a-ab7e-89ce08066e21', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:a8:25', 'vm-uuid': 'a4cae87c-b7f1-42ce-836c-8effc2fd4de5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 30 23:45:28 np0005603435 NetworkManager[49097]: <info>  [1769834728.6260] manager: (tape2a2845b-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.628 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.632 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.634 239942 INFO os_vif [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:a8:25,bridge_name='br-int',has_traffic_filtering=True,id=e2a2845b-61f2-4c1a-ab7e-89ce08066e21,network=Network(10e8924d-47c6-46fb-ba57-a83ece22f2a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2a2845b-61')
Jan 30 23:45:28 np0005603435 podman[247322]: 2026-01-31 04:45:28.647482316 +0000 UTC m=+0.148043619 container init f09f24216ac68ce4d20712fe06e22522e837c1e992af114f6005e86d445e658a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_goldwasser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 30 23:45:28 np0005603435 podman[247322]: 2026-01-31 04:45:28.655430819 +0000 UTC m=+0.155992082 container start f09f24216ac68ce4d20712fe06e22522e837c1e992af114f6005e86d445e658a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_goldwasser, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 30 23:45:28 np0005603435 podman[247322]: 2026-01-31 04:45:28.683812536 +0000 UTC m=+0.184373799 container attach f09f24216ac68ce4d20712fe06e22522e837c1e992af114f6005e86d445e658a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_goldwasser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.710 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.711 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.711 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] No VIF found with MAC fa:16:3e:b0:a8:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.712 239942 INFO nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Using config drive
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.794 239942 DEBUG nova.storage.rbd_utils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] rbd image a4cae87c-b7f1-42ce-836c-8effc2fd4de5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 30 23:45:28 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:45:28 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:45:28 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:45:28 np0005603435 nova_compute[239938]: 2026-01-31 04:45:28.985 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:45:29 np0005603435 quizzical_goldwasser[247338]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:45:29 np0005603435 quizzical_goldwasser[247338]: --> All data devices are unavailable
Jan 30 23:45:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 56 op/s
Jan 30 23:45:29 np0005603435 systemd[1]: libpod-f09f24216ac68ce4d20712fe06e22522e837c1e992af114f6005e86d445e658a.scope: Deactivated successfully.
Jan 30 23:45:29 np0005603435 podman[247322]: 2026-01-31 04:45:29.189679406 +0000 UTC m=+0.690240639 container died f09f24216ac68ce4d20712fe06e22522e837c1e992af114f6005e86d445e658a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_goldwasser, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:45:29 np0005603435 systemd[1]: var-lib-containers-storage-overlay-402a442b29eed0fc838a7bed2319ea6a11671cafd76b04dc74a753047fdd0961-merged.mount: Deactivated successfully.
Jan 30 23:45:29 np0005603435 podman[247322]: 2026-01-31 04:45:29.232392511 +0000 UTC m=+0.732953744 container remove f09f24216ac68ce4d20712fe06e22522e837c1e992af114f6005e86d445e658a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 30 23:45:29 np0005603435 systemd[1]: libpod-conmon-f09f24216ac68ce4d20712fe06e22522e837c1e992af114f6005e86d445e658a.scope: Deactivated successfully.
Jan 30 23:45:29 np0005603435 nova_compute[239938]: 2026-01-31 04:45:29.496 239942 INFO nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Creating config drive at /var/lib/nova/instances/a4cae87c-b7f1-42ce-836c-8effc2fd4de5/disk.config
Jan 30 23:45:29 np0005603435 nova_compute[239938]: 2026-01-31 04:45:29.501 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a4cae87c-b7f1-42ce-836c-8effc2fd4de5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprbrrnjs1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:45:29 np0005603435 nova_compute[239938]: 2026-01-31 04:45:29.639 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a4cae87c-b7f1-42ce-836c-8effc2fd4de5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprbrrnjs1" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:45:29 np0005603435 nova_compute[239938]: 2026-01-31 04:45:29.659 239942 DEBUG nova.storage.rbd_utils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] rbd image a4cae87c-b7f1-42ce-836c-8effc2fd4de5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 30 23:45:29 np0005603435 nova_compute[239938]: 2026-01-31 04:45:29.662 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a4cae87c-b7f1-42ce-836c-8effc2fd4de5/disk.config a4cae87c-b7f1-42ce-836c-8effc2fd4de5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:45:29 np0005603435 podman[247463]: 2026-01-31 04:45:29.697466721 +0000 UTC m=+0.042301696 container create 3b6479ee25625f9278803d82d9bcf6243a2d7a56148fd0aa85fbf820984ecae2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 30 23:45:29 np0005603435 systemd[1]: Started libpod-conmon-3b6479ee25625f9278803d82d9bcf6243a2d7a56148fd0aa85fbf820984ecae2.scope.
Jan 30 23:45:29 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:45:29 np0005603435 podman[247463]: 2026-01-31 04:45:29.675820217 +0000 UTC m=+0.020655242 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:45:29 np0005603435 podman[247463]: 2026-01-31 04:45:29.782288587 +0000 UTC m=+0.127123542 container init 3b6479ee25625f9278803d82d9bcf6243a2d7a56148fd0aa85fbf820984ecae2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 30 23:45:29 np0005603435 podman[247463]: 2026-01-31 04:45:29.789132743 +0000 UTC m=+0.133967718 container start 3b6479ee25625f9278803d82d9bcf6243a2d7a56148fd0aa85fbf820984ecae2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lovelace, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 30 23:45:29 np0005603435 vigilant_lovelace[247506]: 167 167
Jan 30 23:45:29 np0005603435 systemd[1]: libpod-3b6479ee25625f9278803d82d9bcf6243a2d7a56148fd0aa85fbf820984ecae2.scope: Deactivated successfully.
Jan 30 23:45:29 np0005603435 podman[247463]: 2026-01-31 04:45:29.794593295 +0000 UTC m=+0.139428230 container attach 3b6479ee25625f9278803d82d9bcf6243a2d7a56148fd0aa85fbf820984ecae2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lovelace, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:45:29 np0005603435 conmon[247506]: conmon 3b6479ee25625f927880 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b6479ee25625f9278803d82d9bcf6243a2d7a56148fd0aa85fbf820984ecae2.scope/container/memory.events
Jan 30 23:45:29 np0005603435 podman[247463]: 2026-01-31 04:45:29.795992719 +0000 UTC m=+0.140827654 container died 3b6479ee25625f9278803d82d9bcf6243a2d7a56148fd0aa85fbf820984ecae2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:45:29 np0005603435 nova_compute[239938]: 2026-01-31 04:45:29.808 239942 DEBUG oslo_concurrency.processutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a4cae87c-b7f1-42ce-836c-8effc2fd4de5/disk.config a4cae87c-b7f1-42ce-836c-8effc2fd4de5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:45:29 np0005603435 nova_compute[239938]: 2026-01-31 04:45:29.813 239942 INFO nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Deleting local config drive /var/lib/nova/instances/a4cae87c-b7f1-42ce-836c-8effc2fd4de5/disk.config because it was imported into RBD.
Jan 30 23:45:29 np0005603435 systemd[1]: Starting libvirt secret daemon...
Jan 30 23:45:29 np0005603435 systemd[1]: var-lib-containers-storage-overlay-f9deca8f7320bf3f466c73c4da5ff237bff86bcbf4d146df39da26439428e2c7-merged.mount: Deactivated successfully.
Jan 30 23:45:29 np0005603435 podman[247463]: 2026-01-31 04:45:29.846406541 +0000 UTC m=+0.191241476 container remove 3b6479ee25625f9278803d82d9bcf6243a2d7a56148fd0aa85fbf820984ecae2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 30 23:45:29 np0005603435 systemd[1]: libpod-conmon-3b6479ee25625f9278803d82d9bcf6243a2d7a56148fd0aa85fbf820984ecae2.scope: Deactivated successfully.
Jan 30 23:45:29 np0005603435 systemd[1]: Started libvirt secret daemon.
Jan 30 23:45:29 np0005603435 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 30 23:45:29 np0005603435 NetworkManager[49097]: <info>  [1769834729.9205] manager: (tape2a2845b-61): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Jan 30 23:45:29 np0005603435 kernel: tape2a2845b-61: entered promiscuous mode
Jan 30 23:45:29 np0005603435 ovn_controller[145670]: 2026-01-31T04:45:29Z|00027|binding|INFO|Claiming lport e2a2845b-61f2-4c1a-ab7e-89ce08066e21 for this chassis.
Jan 30 23:45:29 np0005603435 ovn_controller[145670]: 2026-01-31T04:45:29Z|00028|binding|INFO|e2a2845b-61f2-4c1a-ab7e-89ce08066e21: Claiming fa:16:3e:b0:a8:25 10.100.0.10
Jan 30 23:45:29 np0005603435 nova_compute[239938]: 2026-01-31 04:45:29.924 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:45:29 np0005603435 nova_compute[239938]: 2026-01-31 04:45:29.926 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:45:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:29.938 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:a8:25 10.100.0.10'], port_security=['fa:16:3e:b0:a8:25 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a4cae87c-b7f1-42ce-836c-8effc2fd4de5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10e8924d-47c6-46fb-ba57-a83ece22f2a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '293dba0b4ad14f1cb4a3b761ad5fd07a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5dbc823c-5133-4d56-a924-b1c6ee24fb70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=001d4ddf-9e51-4e8e-8acc-4e9dc46f08f1, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=e2a2845b-61f2-4c1a-ab7e-89ce08066e21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 30 23:45:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:29.939 156017 INFO neutron.agent.ovn.metadata.agent [-] Port e2a2845b-61f2-4c1a-ab7e-89ce08066e21 in datapath 10e8924d-47c6-46fb-ba57-a83ece22f2a9 bound to our chassis
Jan 30 23:45:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:29.942 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 10e8924d-47c6-46fb-ba57-a83ece22f2a9
Jan 30 23:45:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:29.943 156017 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp76mm_auq/privsep.sock']
Jan 30 23:45:29 np0005603435 nova_compute[239938]: 2026-01-31 04:45:29.966 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:45:29 np0005603435 ovn_controller[145670]: 2026-01-31T04:45:29Z|00029|binding|INFO|Setting lport e2a2845b-61f2-4c1a-ab7e-89ce08066e21 ovn-installed in OVS
Jan 30 23:45:29 np0005603435 ovn_controller[145670]: 2026-01-31T04:45:29Z|00030|binding|INFO|Setting lport e2a2845b-61f2-4c1a-ab7e-89ce08066e21 up in Southbound
Jan 30 23:45:29 np0005603435 nova_compute[239938]: 2026-01-31 04:45:29.969 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:29 np0005603435 systemd-udevd[247581]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:45:29 np0005603435 systemd-machined[208030]: New machine qemu-1-instance-00000001.
Jan 30 23:45:29 np0005603435 NetworkManager[49097]: <info>  [1769834729.9854] device (tape2a2845b-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:45:29 np0005603435 NetworkManager[49097]: <info>  [1769834729.9862] device (tape2a2845b-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:45:29 np0005603435 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 30 23:45:30 np0005603435 podman[247562]: 2026-01-31 04:45:29.999307416 +0000 UTC m=+0.055372813 container create 54a34f1989abc38cc87f8aaa06ce337f7d224e60ae3e044c1571db55838050b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ritchie, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:45:30 np0005603435 systemd[1]: Started libpod-conmon-54a34f1989abc38cc87f8aaa06ce337f7d224e60ae3e044c1571db55838050b6.scope.
Jan 30 23:45:30 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:45:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e810a55f204a51ec264663289d3db19f2a745fbe5d120b2b9f4371ae2302fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:45:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e810a55f204a51ec264663289d3db19f2a745fbe5d120b2b9f4371ae2302fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:45:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e810a55f204a51ec264663289d3db19f2a745fbe5d120b2b9f4371ae2302fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:45:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e810a55f204a51ec264663289d3db19f2a745fbe5d120b2b9f4371ae2302fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:45:30 np0005603435 podman[247562]: 2026-01-31 04:45:29.978468951 +0000 UTC m=+0.034534378 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:45:30 np0005603435 podman[247562]: 2026-01-31 04:45:30.097498716 +0000 UTC m=+0.153564113 container init 54a34f1989abc38cc87f8aaa06ce337f7d224e60ae3e044c1571db55838050b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:45:30 np0005603435 podman[247562]: 2026-01-31 04:45:30.108997114 +0000 UTC m=+0.165062511 container start 54a34f1989abc38cc87f8aaa06ce337f7d224e60ae3e044c1571db55838050b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ritchie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 30 23:45:30 np0005603435 podman[247562]: 2026-01-31 04:45:30.113155675 +0000 UTC m=+0.169221092 container attach 54a34f1989abc38cc87f8aaa06ce337f7d224e60ae3e044c1571db55838050b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ritchie, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 30 23:45:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:45:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4002472269' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:45:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:45:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4002472269' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.212 239942 DEBUG nova.compute.manager [req-601691ca-6383-4c4b-ab8c-88484bff22c2 req-21bfe905-8904-4e79-bcb5-1f1089a2a4c3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Received event network-vif-plugged-e2a2845b-61f2-4c1a-ab7e-89ce08066e21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.212 239942 DEBUG oslo_concurrency.lockutils [req-601691ca-6383-4c4b-ab8c-88484bff22c2 req-21bfe905-8904-4e79-bcb5-1f1089a2a4c3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.213 239942 DEBUG oslo_concurrency.lockutils [req-601691ca-6383-4c4b-ab8c-88484bff22c2 req-21bfe905-8904-4e79-bcb5-1f1089a2a4c3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.213 239942 DEBUG oslo_concurrency.lockutils [req-601691ca-6383-4c4b-ab8c-88484bff22c2 req-21bfe905-8904-4e79-bcb5-1f1089a2a4c3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.213 239942 DEBUG nova.compute.manager [req-601691ca-6383-4c4b-ab8c-88484bff22c2 req-21bfe905-8904-4e79-bcb5-1f1089a2a4c3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Processing event network-vif-plugged-e2a2845b-61f2-4c1a-ab7e-89ce08066e21 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]: {
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:    "0": [
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:        {
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "devices": [
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "/dev/loop3"
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            ],
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "lv_name": "ceph_lv0",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "lv_size": "21470642176",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "name": "ceph_lv0",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "tags": {
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.cluster_name": "ceph",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.crush_device_class": "",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.encrypted": "0",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.objectstore": "bluestore",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.osd_id": "0",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.type": "block",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.vdo": "0",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.with_tpm": "0"
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            },
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "type": "block",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "vg_name": "ceph_vg0"
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:        }
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:    ],
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:    "1": [
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:        {
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "devices": [
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "/dev/loop4"
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            ],
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "lv_name": "ceph_lv1",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "lv_size": "21470642176",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "name": "ceph_lv1",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "tags": {
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.cluster_name": "ceph",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.crush_device_class": "",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.encrypted": "0",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.objectstore": "bluestore",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.osd_id": "1",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.type": "block",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.vdo": "0",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.with_tpm": "0"
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            },
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "type": "block",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "vg_name": "ceph_vg1"
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:        }
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:    ],
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:    "2": [
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:        {
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "devices": [
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "/dev/loop5"
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            ],
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "lv_name": "ceph_lv2",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "lv_size": "21470642176",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "name": "ceph_lv2",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "tags": {
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.cluster_name": "ceph",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.crush_device_class": "",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.encrypted": "0",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.objectstore": "bluestore",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.osd_id": "2",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.type": "block",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.vdo": "0",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:                "ceph.with_tpm": "0"
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            },
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "type": "block",
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:            "vg_name": "ceph_vg2"
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:        }
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]:    ]
Jan 30 23:45:30 np0005603435 gracious_ritchie[247602]: }
Jan 30 23:45:30 np0005603435 systemd[1]: libpod-54a34f1989abc38cc87f8aaa06ce337f7d224e60ae3e044c1571db55838050b6.scope: Deactivated successfully.
Jan 30 23:45:30 np0005603435 podman[247562]: 2026-01-31 04:45:30.45587782 +0000 UTC m=+0.511943217 container died 54a34f1989abc38cc87f8aaa06ce337f7d224e60ae3e044c1571db55838050b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ritchie, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 30 23:45:30 np0005603435 systemd[1]: var-lib-containers-storage-overlay-47e810a55f204a51ec264663289d3db19f2a745fbe5d120b2b9f4371ae2302fb-merged.mount: Deactivated successfully.
Jan 30 23:45:30 np0005603435 podman[247562]: 2026-01-31 04:45:30.516742235 +0000 UTC m=+0.572807652 container remove 54a34f1989abc38cc87f8aaa06ce337f7d224e60ae3e044c1571db55838050b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:45:30 np0005603435 systemd[1]: libpod-conmon-54a34f1989abc38cc87f8aaa06ce337f7d224e60ae3e044c1571db55838050b6.scope: Deactivated successfully.
Jan 30 23:45:30 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:30.618 156017 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 30 23:45:30 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:30.619 156017 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp76mm_auq/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 30 23:45:30 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:30.475 247621 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 30 23:45:30 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:30.481 247621 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 30 23:45:30 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:30.485 247621 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Jan 30 23:45:30 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:30.486 247621 INFO oslo.privsep.daemon [-] privsep daemon running as pid 247621#033[00m
Jan 30 23:45:30 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:30.622 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4551201f-387d-48ef-99b5-63f39a31f56c]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.773 239942 DEBUG nova.compute.manager [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.774 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834730.7739203, a4cae87c-b7f1-42ce-836c-8effc2fd4de5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.774 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] VM Started (Lifecycle Event)#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.779 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.782 239942 INFO nova.virt.libvirt.driver [-] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Instance spawned successfully.#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.783 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.907 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.907 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.909 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.909 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.910 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.910 239942 DEBUG nova.virt.libvirt.driver [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.943 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.946 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.972 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.973 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834730.7740238, a4cae87c-b7f1-42ce-836c-8effc2fd4de5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.973 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.980 239942 INFO nova.compute.manager [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Took 12.89 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.981 239942 DEBUG nova.compute.manager [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:45:30 np0005603435 podman[247734]: 2026-01-31 04:45:30.985422423 +0000 UTC m=+0.043021023 container create a429ade11ef4a7b228a018ed315e2becca3a4216ef13ee545c9214e12580d4b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.988 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.994 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834730.7783496, a4cae87c-b7f1-42ce-836c-8effc2fd4de5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:45:30 np0005603435 nova_compute[239938]: 2026-01-31 04:45:30.994 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:45:31 np0005603435 nova_compute[239938]: 2026-01-31 04:45:31.016 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:45:31 np0005603435 nova_compute[239938]: 2026-01-31 04:45:31.018 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:45:31 np0005603435 systemd[1]: Started libpod-conmon-a429ade11ef4a7b228a018ed315e2becca3a4216ef13ee545c9214e12580d4b9.scope.
Jan 30 23:45:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:45:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Jan 30 23:45:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Jan 30 23:45:31 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Jan 30 23:45:31 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:45:31 np0005603435 nova_compute[239938]: 2026-01-31 04:45:31.053 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:45:31 np0005603435 podman[247734]: 2026-01-31 04:45:30.967169311 +0000 UTC m=+0.024767961 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:45:31 np0005603435 nova_compute[239938]: 2026-01-31 04:45:31.069 239942 INFO nova.compute.manager [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Took 14.04 seconds to build instance.#033[00m
Jan 30 23:45:31 np0005603435 podman[247734]: 2026-01-31 04:45:31.070225328 +0000 UTC m=+0.127823968 container init a429ade11ef4a7b228a018ed315e2becca3a4216ef13ee545c9214e12580d4b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_pike, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:45:31 np0005603435 podman[247734]: 2026-01-31 04:45:31.077325021 +0000 UTC m=+0.134923631 container start a429ade11ef4a7b228a018ed315e2becca3a4216ef13ee545c9214e12580d4b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 30 23:45:31 np0005603435 priceless_pike[247750]: 167 167
Jan 30 23:45:31 np0005603435 systemd[1]: libpod-a429ade11ef4a7b228a018ed315e2becca3a4216ef13ee545c9214e12580d4b9.scope: Deactivated successfully.
Jan 30 23:45:31 np0005603435 podman[247734]: 2026-01-31 04:45:31.085302184 +0000 UTC m=+0.142900814 container attach a429ade11ef4a7b228a018ed315e2becca3a4216ef13ee545c9214e12580d4b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 30 23:45:31 np0005603435 podman[247734]: 2026-01-31 04:45:31.085623242 +0000 UTC m=+0.143221852 container died a429ade11ef4a7b228a018ed315e2becca3a4216ef13ee545c9214e12580d4b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_pike, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 30 23:45:31 np0005603435 nova_compute[239938]: 2026-01-31 04:45:31.091 239942 DEBUG oslo_concurrency.lockutils [None req-ce79d919-e97a-4105-b300-035d12c1c379 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:31 np0005603435 systemd[1]: var-lib-containers-storage-overlay-de56b84df6b3585df3962747d9aed8452b84575a5b85a79c5f0165a01109427b-merged.mount: Deactivated successfully.
Jan 30 23:45:31 np0005603435 podman[247734]: 2026-01-31 04:45:31.142342096 +0000 UTC m=+0.199940696 container remove a429ade11ef4a7b228a018ed315e2becca3a4216ef13ee545c9214e12580d4b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_pike, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:45:31 np0005603435 systemd[1]: libpod-conmon-a429ade11ef4a7b228a018ed315e2becca3a4216ef13ee545c9214e12580d4b9.scope: Deactivated successfully.
Jan 30 23:45:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 2.8 MiB/s wr, 80 op/s
Jan 30 23:45:31 np0005603435 podman[247772]: 2026-01-31 04:45:31.305162972 +0000 UTC m=+0.058930209 container create dab60d2d29f3910f915c8613e768e2013b8ae2e95884552da2590ef8ac7f932e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dhawan, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:45:31 np0005603435 systemd[1]: Started libpod-conmon-dab60d2d29f3910f915c8613e768e2013b8ae2e95884552da2590ef8ac7f932e.scope.
Jan 30 23:45:31 np0005603435 podman[247772]: 2026-01-31 04:45:31.280711529 +0000 UTC m=+0.034478796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:45:31 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:45:31 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806ed074edbd7507630c08b9b7ded92aa93c73219fa93608bdf3358116fe0b3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:45:31 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806ed074edbd7507630c08b9b7ded92aa93c73219fa93608bdf3358116fe0b3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:45:31 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806ed074edbd7507630c08b9b7ded92aa93c73219fa93608bdf3358116fe0b3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:45:31 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806ed074edbd7507630c08b9b7ded92aa93c73219fa93608bdf3358116fe0b3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:45:31 np0005603435 podman[247772]: 2026-01-31 04:45:31.405207096 +0000 UTC m=+0.158974343 container init dab60d2d29f3910f915c8613e768e2013b8ae2e95884552da2590ef8ac7f932e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dhawan, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 30 23:45:31 np0005603435 podman[247772]: 2026-01-31 04:45:31.411111999 +0000 UTC m=+0.164879236 container start dab60d2d29f3910f915c8613e768e2013b8ae2e95884552da2590ef8ac7f932e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:45:31 np0005603435 podman[247772]: 2026-01-31 04:45:31.41485199 +0000 UTC m=+0.168619227 container attach dab60d2d29f3910f915c8613e768e2013b8ae2e95884552da2590ef8ac7f932e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dhawan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 30 23:45:31 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:31.539 247621 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:31 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:31.539 247621 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:31 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:31.539 247621 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:32 np0005603435 lvm[247863]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:45:32 np0005603435 lvm[247863]: VG ceph_vg0 finished
Jan 30 23:45:32 np0005603435 lvm[247865]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:45:32 np0005603435 lvm[247865]: VG ceph_vg1 finished
Jan 30 23:45:32 np0005603435 lvm[247866]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:45:32 np0005603435 lvm[247866]: VG ceph_vg2 finished
Jan 30 23:45:32 np0005603435 pedantic_dhawan[247788]: {}
Jan 30 23:45:32 np0005603435 systemd[1]: libpod-dab60d2d29f3910f915c8613e768e2013b8ae2e95884552da2590ef8ac7f932e.scope: Deactivated successfully.
Jan 30 23:45:32 np0005603435 podman[247772]: 2026-01-31 04:45:32.221531248 +0000 UTC m=+0.975298485 container died dab60d2d29f3910f915c8613e768e2013b8ae2e95884552da2590ef8ac7f932e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dhawan, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:45:32 np0005603435 systemd[1]: var-lib-containers-storage-overlay-806ed074edbd7507630c08b9b7ded92aa93c73219fa93608bdf3358116fe0b3f-merged.mount: Deactivated successfully.
Jan 30 23:45:32 np0005603435 podman[247772]: 2026-01-31 04:45:32.304998791 +0000 UTC m=+1.058766018 container remove dab60d2d29f3910f915c8613e768e2013b8ae2e95884552da2590ef8ac7f932e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:45:32 np0005603435 systemd[1]: libpod-conmon-dab60d2d29f3910f915c8613e768e2013b8ae2e95884552da2590ef8ac7f932e.scope: Deactivated successfully.
Jan 30 23:45:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:45:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:45:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:45:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:45:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:32.650 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[234a452b-e8ee-4961-8095-bfce8d7a2564]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:32.651 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap10e8924d-41 in ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:45:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:32.653 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap10e8924d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:45:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:32.653 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c2898985-0ca3-4984-a6a6-610b7751c573]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:32.657 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[027f6501-3b28-4e98-af17-a015f151b2bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:32.672 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[045ab292-2105-4082-8e88-46d74438330d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:32.690 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[7091570a-26d4-41b4-9766-aa0f70fb0a7f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:32.691 156017 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpruc2sbi5/privsep.sock']#033[00m
Jan 30 23:45:32 np0005603435 nova_compute[239938]: 2026-01-31 04:45:32.697 239942 DEBUG nova.compute.manager [req-3f8d5e3d-4009-476b-9b70-80c2dfdedd12 req-6584411e-cf3f-45a9-a9a1-262c5bba1e87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Received event network-vif-plugged-e2a2845b-61f2-4c1a-ab7e-89ce08066e21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:45:32 np0005603435 nova_compute[239938]: 2026-01-31 04:45:32.697 239942 DEBUG oslo_concurrency.lockutils [req-3f8d5e3d-4009-476b-9b70-80c2dfdedd12 req-6584411e-cf3f-45a9-a9a1-262c5bba1e87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:32 np0005603435 nova_compute[239938]: 2026-01-31 04:45:32.698 239942 DEBUG oslo_concurrency.lockutils [req-3f8d5e3d-4009-476b-9b70-80c2dfdedd12 req-6584411e-cf3f-45a9-a9a1-262c5bba1e87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:32 np0005603435 nova_compute[239938]: 2026-01-31 04:45:32.698 239942 DEBUG oslo_concurrency.lockutils [req-3f8d5e3d-4009-476b-9b70-80c2dfdedd12 req-6584411e-cf3f-45a9-a9a1-262c5bba1e87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:32 np0005603435 nova_compute[239938]: 2026-01-31 04:45:32.698 239942 DEBUG nova.compute.manager [req-3f8d5e3d-4009-476b-9b70-80c2dfdedd12 req-6584411e-cf3f-45a9-a9a1-262c5bba1e87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] No waiting events found dispatching network-vif-plugged-e2a2845b-61f2-4c1a-ab7e-89ce08066e21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:45:32 np0005603435 nova_compute[239938]: 2026-01-31 04:45:32.698 239942 WARNING nova.compute.manager [req-3f8d5e3d-4009-476b-9b70-80c2dfdedd12 req-6584411e-cf3f-45a9-a9a1-262c5bba1e87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Received unexpected event network-vif-plugged-e2a2845b-61f2-4c1a-ab7e-89ce08066e21 for instance with vm_state active and task_state None.#033[00m
Jan 30 23:45:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:45:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3540183501' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:45:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:45:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3540183501' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:45:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.5 MiB/s wr, 141 op/s
Jan 30 23:45:33 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:33.286 156017 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 30 23:45:33 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:33.287 156017 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpruc2sbi5/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 30 23:45:33 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:33.174 247914 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 30 23:45:33 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:33.179 247914 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 30 23:45:33 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:33.182 247914 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Jan 30 23:45:33 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:33.182 247914 INFO oslo.privsep.daemon [-] privsep daemon running as pid 247914#033[00m
Jan 30 23:45:33 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:33.292 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a64629-3162-4a90-a9e0-df1c078e54b7]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:33 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:45:33 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:45:33 np0005603435 nova_compute[239938]: 2026-01-31 04:45:33.626 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:33 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:33.727 247914 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:33 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:33.727 247914 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:33 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:33.728 247914 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:33 np0005603435 nova_compute[239938]: 2026-01-31 04:45:33.889 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "f50cfc01-3561-48d6-8426-5da90fc04271" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:33 np0005603435 nova_compute[239938]: 2026-01-31 04:45:33.889 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "f50cfc01-3561-48d6-8426-5da90fc04271" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:33 np0005603435 nova_compute[239938]: 2026-01-31 04:45:33.987 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:34 np0005603435 nova_compute[239938]: 2026-01-31 04:45:34.131 239942 DEBUG nova.compute.manager [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.290 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[3e569ddd-c0aa-47d7-815c-fee1ef9183ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:34 np0005603435 NetworkManager[49097]: <info>  [1769834734.3170] manager: (tap10e8924d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.310 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9dcbf030-050a-4bf2-99c8-fa58175176f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.335 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9667c0-0bd8-4869-a00d-109a584185a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.338 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[6de9d2ab-b2de-4912-928c-9c918d9f867a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:34 np0005603435 systemd-udevd[247927]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:45:34 np0005603435 NetworkManager[49097]: <info>  [1769834734.3646] device (tap10e8924d-40): carrier: link connected
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.369 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[bd668a1e-0cf1-491a-b2d2-4e28359eac85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.388 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[66345cb2-021e-4f11-aa8b-51caa9e1b1b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10e8924d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:38:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 378876, 'reachable_time': 24038, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247944, 'error': None, 'target': 'ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.406 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[001306bb-f438-496d-a0ee-0280844707f3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:38b8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 378876, 'tstamp': 378876}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247945, 'error': None, 'target': 'ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.422 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2b72931c-3595-4f7e-afbe-2139207e6d24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10e8924d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:38:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 378876, 'reachable_time': 24038, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247946, 'error': None, 'target': 'ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.447 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[51669cbf-4f92-4cb0-8b0a-8957778a587a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.505 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d5e30f-319f-4cab-a891-9eca14eb6dcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.507 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10e8924d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.507 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.508 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10e8924d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:45:34 np0005603435 nova_compute[239938]: 2026-01-31 04:45:34.511 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:34 np0005603435 kernel: tap10e8924d-40: entered promiscuous mode
Jan 30 23:45:34 np0005603435 NetworkManager[49097]: <info>  [1769834734.5125] manager: (tap10e8924d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.518 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap10e8924d-40, col_values=(('external_ids', {'iface-id': 'a7d07f19-44bc-4474-9322-db35f7b6589c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:45:34 np0005603435 nova_compute[239938]: 2026-01-31 04:45:34.519 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:34 np0005603435 ovn_controller[145670]: 2026-01-31T04:45:34Z|00031|binding|INFO|Releasing lport a7d07f19-44bc-4474-9322-db35f7b6589c from this chassis (sb_readonly=0)
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.521 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/10e8924d-47c6-46fb-ba57-a83ece22f2a9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/10e8924d-47c6-46fb-ba57-a83ece22f2a9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.522 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9382ee1d-3fd2-4977-8cbd-80013b035478]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.524 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-10e8924d-47c6-46fb-ba57-a83ece22f2a9
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/10e8924d-47c6-46fb-ba57-a83ece22f2a9.pid.haproxy
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 10e8924d-47c6-46fb-ba57-a83ece22f2a9
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:45:34 np0005603435 nova_compute[239938]: 2026-01-31 04:45:34.526 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:34 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:34.525 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9', 'env', 'PROCESS_TAG=haproxy-10e8924d-47c6-46fb-ba57-a83ece22f2a9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/10e8924d-47c6-46fb-ba57-a83ece22f2a9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:45:34 np0005603435 nova_compute[239938]: 2026-01-31 04:45:34.687 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:34 np0005603435 nova_compute[239938]: 2026-01-31 04:45:34.687 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:34 np0005603435 nova_compute[239938]: 2026-01-31 04:45:34.705 239942 DEBUG nova.virt.hardware [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:45:34 np0005603435 nova_compute[239938]: 2026-01-31 04:45:34.705 239942 INFO nova.compute.claims [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:45:34 np0005603435 nova_compute[239938]: 2026-01-31 04:45:34.854 239942 DEBUG oslo_concurrency.processutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:34 np0005603435 podman[247979]: 2026-01-31 04:45:34.883329504 +0000 UTC m=+0.067700612 container create bf72c74a0b1451abc691d597b12709e18f8cd0963ca005e30b7eb39568b682e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 30 23:45:34 np0005603435 systemd[1]: Started libpod-conmon-bf72c74a0b1451abc691d597b12709e18f8cd0963ca005e30b7eb39568b682e0.scope.
Jan 30 23:45:34 np0005603435 podman[247979]: 2026-01-31 04:45:34.843827396 +0000 UTC m=+0.028198564 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:45:34 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:45:34 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac657a0755c54c5c49caf9c9f4c3f825106965906f81f40077b3cf27ea11fa8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:45:34 np0005603435 podman[247979]: 2026-01-31 04:45:34.974003801 +0000 UTC m=+0.158374929 container init bf72c74a0b1451abc691d597b12709e18f8cd0963ca005e30b7eb39568b682e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 30 23:45:34 np0005603435 podman[247979]: 2026-01-31 04:45:34.980026637 +0000 UTC m=+0.164397735 container start bf72c74a0b1451abc691d597b12709e18f8cd0963ca005e30b7eb39568b682e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 30 23:45:35 np0005603435 neutron-haproxy-ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9[247994]: [NOTICE]   (248011) : New worker (248019) forked
Jan 30 23:45:35 np0005603435 neutron-haproxy-ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9[247994]: [NOTICE]   (248011) : Loading success.
Jan 30 23:45:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Jan 30 23:45:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:45:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2484197421' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:45:35 np0005603435 nova_compute[239938]: 2026-01-31 04:45:35.399 239942 DEBUG oslo_concurrency.processutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:35 np0005603435 nova_compute[239938]: 2026-01-31 04:45:35.407 239942 DEBUG nova.compute.provider_tree [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Updating inventory in ProviderTree for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 30 23:45:35 np0005603435 nova_compute[239938]: 2026-01-31 04:45:35.446 239942 ERROR nova.scheduler.client.report [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [req-63de19f1-2789-497f-9ef9-671cb6cf95af] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 4d0a6937-09c9-4e01-94bd-2812940db2bc.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-63de19f1-2789-497f-9ef9-671cb6cf95af"}]}#033[00m
Jan 30 23:45:35 np0005603435 nova_compute[239938]: 2026-01-31 04:45:35.464 239942 DEBUG nova.scheduler.client.report [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Refreshing inventories for resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 30 23:45:35 np0005603435 nova_compute[239938]: 2026-01-31 04:45:35.482 239942 DEBUG nova.scheduler.client.report [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Updating ProviderTree inventory for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 30 23:45:35 np0005603435 nova_compute[239938]: 2026-01-31 04:45:35.483 239942 DEBUG nova.compute.provider_tree [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Updating inventory in ProviderTree for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 30 23:45:35 np0005603435 nova_compute[239938]: 2026-01-31 04:45:35.507 239942 DEBUG nova.scheduler.client.report [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Refreshing aggregate associations for resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 30 23:45:35 np0005603435 nova_compute[239938]: 2026-01-31 04:45:35.555 239942 DEBUG nova.scheduler.client.report [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Refreshing trait associations for resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc, traits: COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SVM,COMPUTE_NODE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 30 23:45:35 np0005603435 nova_compute[239938]: 2026-01-31 04:45:35.613 239942 DEBUG oslo_concurrency.processutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.078 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:36.080 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:45:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:36.083 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:45:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:45:36 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3019992868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.176 239942 DEBUG oslo_concurrency.processutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.182 239942 DEBUG nova.compute.provider_tree [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Updating inventory in ProviderTree for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.260 239942 DEBUG nova.scheduler.client.report [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Updated inventory for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc with generation 4 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.261 239942 DEBUG nova.compute.provider_tree [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Updating resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc generation from 4 to 5 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.261 239942 DEBUG nova.compute.provider_tree [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Updating inventory in ProviderTree for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.290 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.292 239942 DEBUG nova.compute.manager [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.358 239942 DEBUG nova.compute.manager [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.359 239942 DEBUG nova.network.neutron [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.381 239942 INFO nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.406 239942 DEBUG nova.compute.manager [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.492 239942 DEBUG nova.compute.manager [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.494 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.494 239942 INFO nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Creating image(s)#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.517 239942 DEBUG nova.storage.rbd_utils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] rbd image f50cfc01-3561-48d6-8426-5da90fc04271_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.541 239942 DEBUG nova.storage.rbd_utils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] rbd image f50cfc01-3561-48d6-8426-5da90fc04271_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.571 239942 DEBUG nova.storage.rbd_utils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] rbd image f50cfc01-3561-48d6-8426-5da90fc04271_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.575 239942 DEBUG oslo_concurrency.processutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.589 239942 DEBUG nova.policy [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a60e5ee062304ce4b921d51a9d0be89f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3bdc8fbcac3b419ca374be1c490a20e5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.621 239942 DEBUG oslo_concurrency.processutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.622 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.623 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.624 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.652 239942 DEBUG nova.storage.rbd_utils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] rbd image f50cfc01-3561-48d6-8426-5da90fc04271_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:45:36 np0005603435 nova_compute[239938]: 2026-01-31 04:45:36.655 239942 DEBUG oslo_concurrency.processutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 f50cfc01-3561-48d6-8426-5da90fc04271_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:45:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:45:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:45:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:45:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:45:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:45:37 np0005603435 nova_compute[239938]: 2026-01-31 04:45:37.089 239942 DEBUG oslo_concurrency.processutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 f50cfc01-3561-48d6-8426-5da90fc04271_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 137 op/s
Jan 30 23:45:37 np0005603435 nova_compute[239938]: 2026-01-31 04:45:37.164 239942 DEBUG nova.storage.rbd_utils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] resizing rbd image f50cfc01-3561-48d6-8426-5da90fc04271_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 30 23:45:37 np0005603435 nova_compute[239938]: 2026-01-31 04:45:37.263 239942 DEBUG nova.objects.instance [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lazy-loading 'migration_context' on Instance uuid f50cfc01-3561-48d6-8426-5da90fc04271 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:45:37 np0005603435 nova_compute[239938]: 2026-01-31 04:45:37.297 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 30 23:45:37 np0005603435 nova_compute[239938]: 2026-01-31 04:45:37.298 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Ensure instance console log exists: /var/lib/nova/instances/f50cfc01-3561-48d6-8426-5da90fc04271/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:45:37 np0005603435 nova_compute[239938]: 2026-01-31 04:45:37.299 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:37 np0005603435 nova_compute[239938]: 2026-01-31 04:45:37.299 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:37 np0005603435 nova_compute[239938]: 2026-01-31 04:45:37.300 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:37 np0005603435 nova_compute[239938]: 2026-01-31 04:45:37.600 239942 DEBUG nova.network.neutron [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Successfully created port: 14b721ad-6ad1-4224-bc45-cccbe4643cd9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:45:38 np0005603435 nova_compute[239938]: 2026-01-31 04:45:38.375 239942 DEBUG nova.network.neutron [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Successfully updated port: 14b721ad-6ad1-4224-bc45-cccbe4643cd9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:45:38 np0005603435 nova_compute[239938]: 2026-01-31 04:45:38.397 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "refresh_cache-f50cfc01-3561-48d6-8426-5da90fc04271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:45:38 np0005603435 nova_compute[239938]: 2026-01-31 04:45:38.397 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquired lock "refresh_cache-f50cfc01-3561-48d6-8426-5da90fc04271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:45:38 np0005603435 nova_compute[239938]: 2026-01-31 04:45:38.398 239942 DEBUG nova.network.neutron [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:45:38 np0005603435 nova_compute[239938]: 2026-01-31 04:45:38.629 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:38 np0005603435 nova_compute[239938]: 2026-01-31 04:45:38.633 239942 DEBUG nova.network.neutron [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:45:38 np0005603435 nova_compute[239938]: 2026-01-31 04:45:38.878 239942 DEBUG nova.compute.manager [req-ff27bae0-3eb8-4a1e-8da1-bd55677968f7 req-d5b8d08e-77c8-4e2c-9e52-5873a3bceaae c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Received event network-changed-14b721ad-6ad1-4224-bc45-cccbe4643cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:45:38 np0005603435 nova_compute[239938]: 2026-01-31 04:45:38.879 239942 DEBUG nova.compute.manager [req-ff27bae0-3eb8-4a1e-8da1-bd55677968f7 req-d5b8d08e-77c8-4e2c-9e52-5873a3bceaae c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Refreshing instance network info cache due to event network-changed-14b721ad-6ad1-4224-bc45-cccbe4643cd9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:45:38 np0005603435 nova_compute[239938]: 2026-01-31 04:45:38.880 239942 DEBUG oslo_concurrency.lockutils [req-ff27bae0-3eb8-4a1e-8da1-bd55677968f7 req-d5b8d08e-77c8-4e2c-9e52-5873a3bceaae c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-f50cfc01-3561-48d6-8426-5da90fc04271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:45:38 np0005603435 nova_compute[239938]: 2026-01-31 04:45:38.991 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 104 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 123 op/s
Jan 30 23:45:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:45:39 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3213060926' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:45:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:45:39 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3213060926' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.442 239942 DEBUG nova.network.neutron [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Updating instance_info_cache with network_info: [{"id": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "address": "fa:16:3e:6d:71:12", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b721ad-6a", "ovs_interfaceid": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.501 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Releasing lock "refresh_cache-f50cfc01-3561-48d6-8426-5da90fc04271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.502 239942 DEBUG nova.compute.manager [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Instance network_info: |[{"id": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "address": "fa:16:3e:6d:71:12", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b721ad-6a", "ovs_interfaceid": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.503 239942 DEBUG oslo_concurrency.lockutils [req-ff27bae0-3eb8-4a1e-8da1-bd55677968f7 req-d5b8d08e-77c8-4e2c-9e52-5873a3bceaae c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-f50cfc01-3561-48d6-8426-5da90fc04271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.504 239942 DEBUG nova.network.neutron [req-ff27bae0-3eb8-4a1e-8da1-bd55677968f7 req-d5b8d08e-77c8-4e2c-9e52-5873a3bceaae c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Refreshing network info cache for port 14b721ad-6ad1-4224-bc45-cccbe4643cd9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.510 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Start _get_guest_xml network_info=[{"id": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "address": "fa:16:3e:6d:71:12", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b721ad-6a", "ovs_interfaceid": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'guest_format': None, 'image_id': 'bf004ad8-fb70-4caa-9170-9f02e22d687d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.518 239942 WARNING nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.523 239942 DEBUG nova.virt.libvirt.host [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.524 239942 DEBUG nova.virt.libvirt.host [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.528 239942 DEBUG nova.virt.libvirt.host [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.529 239942 DEBUG nova.virt.libvirt.host [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.530 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.531 239942 DEBUG nova.virt.hardware [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.532 239942 DEBUG nova.virt.hardware [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.532 239942 DEBUG nova.virt.hardware [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.533 239942 DEBUG nova.virt.hardware [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.534 239942 DEBUG nova.virt.hardware [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.535 239942 DEBUG nova.virt.hardware [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.536 239942 DEBUG nova.virt.hardware [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.536 239942 DEBUG nova.virt.hardware [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.537 239942 DEBUG nova.virt.hardware [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.538 239942 DEBUG nova.virt.hardware [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.538 239942 DEBUG nova.virt.hardware [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:45:39 np0005603435 nova_compute[239938]: 2026-01-31 04:45:39.546 239942 DEBUG oslo_concurrency.processutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:45:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2417945107' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.067 239942 DEBUG oslo_concurrency.processutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.103 239942 DEBUG nova.storage.rbd_utils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] rbd image f50cfc01-3561-48d6-8426-5da90fc04271_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.109 239942 DEBUG oslo_concurrency.processutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.441 239942 DEBUG oslo_concurrency.lockutils [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Acquiring lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.442 239942 DEBUG oslo_concurrency.lockutils [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.442 239942 DEBUG oslo_concurrency.lockutils [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Acquiring lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.443 239942 DEBUG oslo_concurrency.lockutils [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.443 239942 DEBUG oslo_concurrency.lockutils [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.445 239942 INFO nova.compute.manager [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Terminating instance#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.447 239942 DEBUG nova.compute.manager [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:45:40 np0005603435 kernel: tape2a2845b-61 (unregistering): left promiscuous mode
Jan 30 23:45:40 np0005603435 NetworkManager[49097]: <info>  [1769834740.5140] device (tape2a2845b-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.518 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.529 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:40 np0005603435 ovn_controller[145670]: 2026-01-31T04:45:40Z|00032|binding|INFO|Releasing lport e2a2845b-61f2-4c1a-ab7e-89ce08066e21 from this chassis (sb_readonly=0)
Jan 30 23:45:40 np0005603435 ovn_controller[145670]: 2026-01-31T04:45:40Z|00033|binding|INFO|Setting lport e2a2845b-61f2-4c1a-ab7e-89ce08066e21 down in Southbound
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.532 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:40 np0005603435 ovn_controller[145670]: 2026-01-31T04:45:40Z|00034|binding|INFO|Removing iface tape2a2845b-61 ovn-installed in OVS
Jan 30 23:45:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:40.541 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:a8:25 10.100.0.10'], port_security=['fa:16:3e:b0:a8:25 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a4cae87c-b7f1-42ce-836c-8effc2fd4de5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10e8924d-47c6-46fb-ba57-a83ece22f2a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '293dba0b4ad14f1cb4a3b761ad5fd07a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5dbc823c-5133-4d56-a924-b1c6ee24fb70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=001d4ddf-9e51-4e8e-8acc-4e9dc46f08f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=e2a2845b-61f2-4c1a-ab7e-89ce08066e21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:45:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:40.542 156017 INFO neutron.agent.ovn.metadata.agent [-] Port e2a2845b-61f2-4c1a-ab7e-89ce08066e21 in datapath 10e8924d-47c6-46fb-ba57-a83ece22f2a9 unbound from our chassis#033[00m
Jan 30 23:45:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:40.544 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 10e8924d-47c6-46fb-ba57-a83ece22f2a9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:45:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:40.545 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4d060e3f-4441-4d0f-a175-021c8dfff18b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:40.546 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9 namespace which is not needed anymore#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.546 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:40 np0005603435 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 30 23:45:40 np0005603435 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 10.195s CPU time.
Jan 30 23:45:40 np0005603435 systemd-machined[208030]: Machine qemu-1-instance-00000001 terminated.
Jan 30 23:45:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:45:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/675430535' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.641 239942 DEBUG oslo_concurrency.processutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.643 239942 DEBUG nova.virt.libvirt.vif [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:45:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-985104523',display_name='tempest-VolumesActionsTest-instance-985104523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-985104523',id=2,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3bdc8fbcac3b419ca374be1c490a20e5',ramdisk_id='',reservation_id='r-h7z59rdf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1818515503',owner_user_name='tempest-VolumesActionsTest-1818515503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:45:36Z,user_data=None,user_id='a60e5ee062304ce4b921d51a9d0be89f',uuid=f50cfc01-3561-48d6-8426-5da90fc04271,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "address": "fa:16:3e:6d:71:12", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b721ad-6a", "ovs_interfaceid": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.643 239942 DEBUG nova.network.os_vif_util [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Converting VIF {"id": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "address": "fa:16:3e:6d:71:12", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b721ad-6a", "ovs_interfaceid": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.644 239942 DEBUG nova.network.os_vif_util [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:71:12,bridge_name='br-int',has_traffic_filtering=True,id=14b721ad-6ad1-4224-bc45-cccbe4643cd9,network=Network(c68aa38c-df33-4336-9b66-c410f7d93cb3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14b721ad-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.644 239942 DEBUG nova.objects.instance [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid f50cfc01-3561-48d6-8426-5da90fc04271 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.666 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  <uuid>f50cfc01-3561-48d6-8426-5da90fc04271</uuid>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  <name>instance-00000002</name>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <nova:name>tempest-VolumesActionsTest-instance-985104523</nova:name>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:45:39</nova:creationTime>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:        <nova:user uuid="a60e5ee062304ce4b921d51a9d0be89f">tempest-VolumesActionsTest-1818515503-project-member</nova:user>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:        <nova:project uuid="3bdc8fbcac3b419ca374be1c490a20e5">tempest-VolumesActionsTest-1818515503</nova:project>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <nova:root type="image" uuid="bf004ad8-fb70-4caa-9170-9f02e22d687d"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:        <nova:port uuid="14b721ad-6ad1-4224-bc45-cccbe4643cd9">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <entry name="serial">f50cfc01-3561-48d6-8426-5da90fc04271</entry>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <entry name="uuid">f50cfc01-3561-48d6-8426-5da90fc04271</entry>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/f50cfc01-3561-48d6-8426-5da90fc04271_disk">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/f50cfc01-3561-48d6-8426-5da90fc04271_disk.config">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:6d:71:12"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <target dev="tap14b721ad-6a"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/f50cfc01-3561-48d6-8426-5da90fc04271/console.log" append="off"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:45:40 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:45:40 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:45:40 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:45:40 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.666 239942 DEBUG nova.compute.manager [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Preparing to wait for external event network-vif-plugged-14b721ad-6ad1-4224-bc45-cccbe4643cd9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.666 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.666 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.667 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.667 239942 DEBUG nova.virt.libvirt.vif [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:45:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-985104523',display_name='tempest-VolumesActionsTest-instance-985104523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-985104523',id=2,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3bdc8fbcac3b419ca374be1c490a20e5',ramdisk_id='',reservation_id='r-h7z59rdf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1818515503',owner_user_name='tempest-VolumesActionsTest-
1818515503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:45:36Z,user_data=None,user_id='a60e5ee062304ce4b921d51a9d0be89f',uuid=f50cfc01-3561-48d6-8426-5da90fc04271,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "address": "fa:16:3e:6d:71:12", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b721ad-6a", "ovs_interfaceid": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.667 239942 DEBUG nova.network.os_vif_util [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Converting VIF {"id": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "address": "fa:16:3e:6d:71:12", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b721ad-6a", "ovs_interfaceid": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.668 239942 DEBUG nova.network.os_vif_util [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:71:12,bridge_name='br-int',has_traffic_filtering=True,id=14b721ad-6ad1-4224-bc45-cccbe4643cd9,network=Network(c68aa38c-df33-4336-9b66-c410f7d93cb3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14b721ad-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.668 239942 DEBUG os_vif [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:71:12,bridge_name='br-int',has_traffic_filtering=True,id=14b721ad-6ad1-4224-bc45-cccbe4643cd9,network=Network(c68aa38c-df33-4336-9b66-c410f7d93cb3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14b721ad-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.669 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.669 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.670 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.670 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.672 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.672 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14b721ad-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.673 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap14b721ad-6a, col_values=(('external_ids', {'iface-id': '14b721ad-6ad1-4224-bc45-cccbe4643cd9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:71:12', 'vm-uuid': 'f50cfc01-3561-48d6-8426-5da90fc04271'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.675 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:40 np0005603435 NetworkManager[49097]: <info>  [1769834740.6759] manager: (tap14b721ad-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.677 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.681 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.682 239942 INFO os_vif [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:71:12,bridge_name='br-int',has_traffic_filtering=True,id=14b721ad-6ad1-4224-bc45-cccbe4643cd9,network=Network(c68aa38c-df33-4336-9b66-c410f7d93cb3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14b721ad-6a')#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.687 239942 DEBUG nova.network.neutron [req-ff27bae0-3eb8-4a1e-8da1-bd55677968f7 req-d5b8d08e-77c8-4e2c-9e52-5873a3bceaae c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Updated VIF entry in instance network info cache for port 14b721ad-6ad1-4224-bc45-cccbe4643cd9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.687 239942 DEBUG nova.network.neutron [req-ff27bae0-3eb8-4a1e-8da1-bd55677968f7 req-d5b8d08e-77c8-4e2c-9e52-5873a3bceaae c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Updating instance_info_cache with network_info: [{"id": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "address": "fa:16:3e:6d:71:12", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b721ad-6a", "ovs_interfaceid": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.692 239942 INFO nova.virt.libvirt.driver [-] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Instance destroyed successfully.#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.693 239942 DEBUG nova.objects.instance [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lazy-loading 'resources' on Instance uuid a4cae87c-b7f1-42ce-836c-8effc2fd4de5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:45:40 np0005603435 neutron-haproxy-ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9[247994]: [NOTICE]   (248011) : haproxy version is 2.8.14-c23fe91
Jan 30 23:45:40 np0005603435 neutron-haproxy-ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9[247994]: [NOTICE]   (248011) : path to executable is /usr/sbin/haproxy
Jan 30 23:45:40 np0005603435 neutron-haproxy-ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9[247994]: [WARNING]  (248011) : Exiting Master process...
Jan 30 23:45:40 np0005603435 neutron-haproxy-ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9[247994]: [ALERT]    (248011) : Current worker (248019) exited with code 143 (Terminated)
Jan 30 23:45:40 np0005603435 neutron-haproxy-ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9[247994]: [WARNING]  (248011) : All workers exited. Exiting... (0)
Jan 30 23:45:40 np0005603435 systemd[1]: libpod-bf72c74a0b1451abc691d597b12709e18f8cd0963ca005e30b7eb39568b682e0.scope: Deactivated successfully.
Jan 30 23:45:40 np0005603435 podman[248305]: 2026-01-31 04:45:40.716114732 +0000 UTC m=+0.072141159 container died bf72c74a0b1451abc691d597b12709e18f8cd0963ca005e30b7eb39568b682e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.736 239942 DEBUG oslo_concurrency.lockutils [req-ff27bae0-3eb8-4a1e-8da1-bd55677968f7 req-d5b8d08e-77c8-4e2c-9e52-5873a3bceaae c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-f50cfc01-3561-48d6-8426-5da90fc04271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.737 239942 DEBUG nova.virt.libvirt.vif [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:45:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-255133828',display_name='tempest-VolumesActionsTest-instance-255133828',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-255133828',id=1,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:45:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='293dba0b4ad14f1cb4a3b761ad5fd07a',ramdisk_id='',reservation_id='r-hsyfrpbb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_dis
k='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-740629801',owner_user_name='tempest-VolumesActionsTest-740629801-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:45:31Z,user_data=None,user_id='b64529ecf0d54f718c07683e4fe74bc1',uuid=a4cae87c-b7f1-42ce-836c-8effc2fd4de5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "address": "fa:16:3e:b0:a8:25", "network": {"id": "10e8924d-47c6-46fb-ba57-a83ece22f2a9", "bridge": "br-int", "label": "tempest-VolumesActionsTest-393692378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "293dba0b4ad14f1cb4a3b761ad5fd07a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2a2845b-61", "ovs_interfaceid": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.738 239942 DEBUG nova.network.os_vif_util [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Converting VIF {"id": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "address": "fa:16:3e:b0:a8:25", "network": {"id": "10e8924d-47c6-46fb-ba57-a83ece22f2a9", "bridge": "br-int", "label": "tempest-VolumesActionsTest-393692378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "293dba0b4ad14f1cb4a3b761ad5fd07a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2a2845b-61", "ovs_interfaceid": "e2a2845b-61f2-4c1a-ab7e-89ce08066e21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.738 239942 DEBUG nova.network.os_vif_util [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:a8:25,bridge_name='br-int',has_traffic_filtering=True,id=e2a2845b-61f2-4c1a-ab7e-89ce08066e21,network=Network(10e8924d-47c6-46fb-ba57-a83ece22f2a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2a2845b-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.739 239942 DEBUG os_vif [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:a8:25,bridge_name='br-int',has_traffic_filtering=True,id=e2a2845b-61f2-4c1a-ab7e-89ce08066e21,network=Network(10e8924d-47c6-46fb-ba57-a83ece22f2a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2a2845b-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.741 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.742 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2a2845b-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.783 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.785 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.787 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.790 239942 INFO os_vif [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:a8:25,bridge_name='br-int',has_traffic_filtering=True,id=e2a2845b-61f2-4c1a-ab7e-89ce08066e21,network=Network(10e8924d-47c6-46fb-ba57-a83ece22f2a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2a2845b-61')#033[00m
Jan 30 23:45:40 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf72c74a0b1451abc691d597b12709e18f8cd0963ca005e30b7eb39568b682e0-userdata-shm.mount: Deactivated successfully.
Jan 30 23:45:40 np0005603435 systemd[1]: var-lib-containers-storage-overlay-aac657a0755c54c5c49caf9c9f4c3f825106965906f81f40077b3cf27ea11fa8-merged.mount: Deactivated successfully.
Jan 30 23:45:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:45:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/818676153' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.881 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.882 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.882 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] No VIF found with MAC fa:16:3e:6d:71:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.883 239942 INFO nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Using config drive#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.910 239942 DEBUG nova.storage.rbd_utils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] rbd image f50cfc01-3561-48d6-8426-5da90fc04271_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.960 239942 DEBUG nova.compute.manager [req-073e6e76-4e16-4be3-b9c6-5f413ca42642 req-597c9dfe-724a-49fa-bf6f-b877ecbd0e85 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Received event network-vif-unplugged-e2a2845b-61f2-4c1a-ab7e-89ce08066e21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.960 239942 DEBUG oslo_concurrency.lockutils [req-073e6e76-4e16-4be3-b9c6-5f413ca42642 req-597c9dfe-724a-49fa-bf6f-b877ecbd0e85 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.962 239942 DEBUG oslo_concurrency.lockutils [req-073e6e76-4e16-4be3-b9c6-5f413ca42642 req-597c9dfe-724a-49fa-bf6f-b877ecbd0e85 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.963 239942 DEBUG oslo_concurrency.lockutils [req-073e6e76-4e16-4be3-b9c6-5f413ca42642 req-597c9dfe-724a-49fa-bf6f-b877ecbd0e85 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.963 239942 DEBUG nova.compute.manager [req-073e6e76-4e16-4be3-b9c6-5f413ca42642 req-597c9dfe-724a-49fa-bf6f-b877ecbd0e85 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] No waiting events found dispatching network-vif-unplugged-e2a2845b-61f2-4c1a-ab7e-89ce08066e21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:45:40 np0005603435 nova_compute[239938]: 2026-01-31 04:45:40.964 239942 DEBUG nova.compute.manager [req-073e6e76-4e16-4be3-b9c6-5f413ca42642 req-597c9dfe-724a-49fa-bf6f-b877ecbd0e85 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Received event network-vif-unplugged-e2a2845b-61f2-4c1a-ab7e-89ce08066e21 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:45:40 np0005603435 podman[248305]: 2026-01-31 04:45:40.988180486 +0000 UTC m=+0.344206943 container cleanup bf72c74a0b1451abc691d597b12709e18f8cd0963ca005e30b7eb39568b682e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 30 23:45:40 np0005603435 systemd[1]: libpod-conmon-bf72c74a0b1451abc691d597b12709e18f8cd0963ca005e30b7eb39568b682e0.scope: Deactivated successfully.
Jan 30 23:45:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:45:41 np0005603435 podman[248385]: 2026-01-31 04:45:41.110868829 +0000 UTC m=+0.097425252 container remove bf72c74a0b1451abc691d597b12709e18f8cd0963ca005e30b7eb39568b682e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:45:41 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:41.118 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[8592f926-4e14-4fa7-8dd5-103a3f545172]: (4, ('Sat Jan 31 04:45:40 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9 (bf72c74a0b1451abc691d597b12709e18f8cd0963ca005e30b7eb39568b682e0)\nbf72c74a0b1451abc691d597b12709e18f8cd0963ca005e30b7eb39568b682e0\nSat Jan 31 04:45:40 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9 (bf72c74a0b1451abc691d597b12709e18f8cd0963ca005e30b7eb39568b682e0)\nbf72c74a0b1451abc691d597b12709e18f8cd0963ca005e30b7eb39568b682e0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:41 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:41.120 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[e5db6ef1-7118-4839-9a85-afa5a15873c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:41 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:41.121 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10e8924d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:45:41 np0005603435 nova_compute[239938]: 2026-01-31 04:45:41.123 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:41 np0005603435 kernel: tap10e8924d-40: left promiscuous mode
Jan 30 23:45:41 np0005603435 nova_compute[239938]: 2026-01-31 04:45:41.134 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:41 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:41.138 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f83d9c5b-162d-41eb-8f27-d7c4f0ce0679]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:41 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:41.159 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[8125dc06-5014-4c1e-96e1-840a1c8d0796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:41 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:41.161 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9cdd29f5-1fee-4bc7-8017-fcb03a4fecb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 117 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 99 op/s
Jan 30 23:45:41 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:41.175 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2142684e-b33f-48e0-b82f-eb4464479dbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 378868, 'reachable_time': 37900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248404, 'error': None, 'target': 'ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:41 np0005603435 systemd[1]: run-netns-ovnmeta\x2d10e8924d\x2d47c6\x2d46fb\x2dba57\x2da83ece22f2a9.mount: Deactivated successfully.
Jan 30 23:45:41 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:41.184 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-10e8924d-47c6-46fb-ba57-a83ece22f2a9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:45:41 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:41.185 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ace04c-b736-4ca3-a4ec-01b864a6d5e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Jan 30 23:45:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Jan 30 23:45:41 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Jan 30 23:45:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:42.085 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:45:42 np0005603435 nova_compute[239938]: 2026-01-31 04:45:42.531 239942 INFO nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Creating config drive at /var/lib/nova/instances/f50cfc01-3561-48d6-8426-5da90fc04271/disk.config#033[00m
Jan 30 23:45:42 np0005603435 nova_compute[239938]: 2026-01-31 04:45:42.536 239942 DEBUG oslo_concurrency.processutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f50cfc01-3561-48d6-8426-5da90fc04271/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmvovw6lx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:42 np0005603435 nova_compute[239938]: 2026-01-31 04:45:42.659 239942 DEBUG oslo_concurrency.processutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f50cfc01-3561-48d6-8426-5da90fc04271/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmvovw6lx" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:42 np0005603435 nova_compute[239938]: 2026-01-31 04:45:42.693 239942 DEBUG nova.storage.rbd_utils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] rbd image f50cfc01-3561-48d6-8426-5da90fc04271_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:45:42 np0005603435 nova_compute[239938]: 2026-01-31 04:45:42.697 239942 DEBUG oslo_concurrency.processutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f50cfc01-3561-48d6-8426-5da90fc04271/disk.config f50cfc01-3561-48d6-8426-5da90fc04271_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Jan 30 23:45:43 np0005603435 nova_compute[239938]: 2026-01-31 04:45:43.094 239942 DEBUG nova.compute.manager [req-d9b9c87f-f32b-4d9f-bf1e-913d9a54183b req-21ada840-764f-4f1f-9fe7-5ac58b0c6c7c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Received event network-vif-plugged-e2a2845b-61f2-4c1a-ab7e-89ce08066e21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:45:43 np0005603435 nova_compute[239938]: 2026-01-31 04:45:43.095 239942 DEBUG oslo_concurrency.lockutils [req-d9b9c87f-f32b-4d9f-bf1e-913d9a54183b req-21ada840-764f-4f1f-9fe7-5ac58b0c6c7c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:43 np0005603435 nova_compute[239938]: 2026-01-31 04:45:43.095 239942 DEBUG oslo_concurrency.lockutils [req-d9b9c87f-f32b-4d9f-bf1e-913d9a54183b req-21ada840-764f-4f1f-9fe7-5ac58b0c6c7c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:43 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 30 23:45:43 np0005603435 nova_compute[239938]: 2026-01-31 04:45:43.096 239942 DEBUG oslo_concurrency.lockutils [req-d9b9c87f-f32b-4d9f-bf1e-913d9a54183b req-21ada840-764f-4f1f-9fe7-5ac58b0c6c7c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:43 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 30 23:45:43 np0005603435 nova_compute[239938]: 2026-01-31 04:45:43.096 239942 DEBUG nova.compute.manager [req-d9b9c87f-f32b-4d9f-bf1e-913d9a54183b req-21ada840-764f-4f1f-9fe7-5ac58b0c6c7c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] No waiting events found dispatching network-vif-plugged-e2a2845b-61f2-4c1a-ab7e-89ce08066e21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:45:43 np0005603435 nova_compute[239938]: 2026-01-31 04:45:43.096 239942 WARNING nova.compute.manager [req-d9b9c87f-f32b-4d9f-bf1e-913d9a54183b req-21ada840-764f-4f1f-9fe7-5ac58b0c6c7c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Received unexpected event network-vif-plugged-e2a2845b-61f2-4c1a-ab7e-89ce08066e21 for instance with vm_state active and task_state deleting.#033[00m
Jan 30 23:45:43 np0005603435 podman[248444]: 2026-01-31 04:45:43.146023397 +0000 UTC m=+0.113091842 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 30 23:45:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 97 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 979 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Jan 30 23:45:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Jan 30 23:45:43 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Jan 30 23:45:43 np0005603435 nova_compute[239938]: 2026-01-31 04:45:43.803 239942 INFO nova.virt.libvirt.driver [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Deleting instance files /var/lib/nova/instances/a4cae87c-b7f1-42ce-836c-8effc2fd4de5_del#033[00m
Jan 30 23:45:43 np0005603435 nova_compute[239938]: 2026-01-31 04:45:43.804 239942 INFO nova.virt.libvirt.driver [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Deletion of /var/lib/nova/instances/a4cae87c-b7f1-42ce-836c-8effc2fd4de5_del complete#033[00m
Jan 30 23:45:43 np0005603435 nova_compute[239938]: 2026-01-31 04:45:43.920 239942 DEBUG nova.virt.libvirt.host [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Jan 30 23:45:43 np0005603435 nova_compute[239938]: 2026-01-31 04:45:43.920 239942 INFO nova.virt.libvirt.host [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] UEFI support detected#033[00m
Jan 30 23:45:43 np0005603435 nova_compute[239938]: 2026-01-31 04:45:43.921 239942 INFO nova.compute.manager [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Took 3.47 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:45:43 np0005603435 nova_compute[239938]: 2026-01-31 04:45:43.922 239942 DEBUG oslo.service.loopingcall [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:45:43 np0005603435 nova_compute[239938]: 2026-01-31 04:45:43.922 239942 DEBUG nova.compute.manager [-] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:45:43 np0005603435 nova_compute[239938]: 2026-01-31 04:45:43.922 239942 DEBUG nova.network.neutron [-] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.027 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.089 239942 DEBUG oslo_concurrency.processutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f50cfc01-3561-48d6-8426-5da90fc04271/disk.config f50cfc01-3561-48d6-8426-5da90fc04271_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.089 239942 INFO nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Deleting local config drive /var/lib/nova/instances/f50cfc01-3561-48d6-8426-5da90fc04271/disk.config because it was imported into RBD.#033[00m
Jan 30 23:45:44 np0005603435 kernel: tap14b721ad-6a: entered promiscuous mode
Jan 30 23:45:44 np0005603435 NetworkManager[49097]: <info>  [1769834744.1437] manager: (tap14b721ad-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Jan 30 23:45:44 np0005603435 ovn_controller[145670]: 2026-01-31T04:45:44Z|00035|binding|INFO|Claiming lport 14b721ad-6ad1-4224-bc45-cccbe4643cd9 for this chassis.
Jan 30 23:45:44 np0005603435 ovn_controller[145670]: 2026-01-31T04:45:44Z|00036|binding|INFO|14b721ad-6ad1-4224-bc45-cccbe4643cd9: Claiming fa:16:3e:6d:71:12 10.100.0.3
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.147 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.152 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.167 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:71:12 10.100.0.3'], port_security=['fa:16:3e:6d:71:12 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f50cfc01-3561-48d6-8426-5da90fc04271', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c68aa38c-df33-4336-9b66-c410f7d93cb3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3bdc8fbcac3b419ca374be1c490a20e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f73757ab-8ff2-4654-b537-c05855ab04c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e75c9d70-34ab-45c9-8a82-90b4b0f4bff4, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=14b721ad-6ad1-4224-bc45-cccbe4643cd9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.170 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 14b721ad-6ad1-4224-bc45-cccbe4643cd9 in datapath c68aa38c-df33-4336-9b66-c410f7d93cb3 bound to our chassis#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.172 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c68aa38c-df33-4336-9b66-c410f7d93cb3#033[00m
Jan 30 23:45:44 np0005603435 systemd-machined[208030]: New machine qemu-2-instance-00000002.
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.185 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.187 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[332f9db5-4632-4558-b6a6-513351c12930]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:44 np0005603435 ovn_controller[145670]: 2026-01-31T04:45:44Z|00037|binding|INFO|Setting lport 14b721ad-6ad1-4224-bc45-cccbe4643cd9 ovn-installed in OVS
Jan 30 23:45:44 np0005603435 ovn_controller[145670]: 2026-01-31T04:45:44Z|00038|binding|INFO|Setting lport 14b721ad-6ad1-4224-bc45-cccbe4643cd9 up in Southbound
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.189 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc68aa38c-d1 in ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.189 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.192 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc68aa38c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.192 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[cde1c953-eab2-4f11-9c52-dd578be23f4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.193 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[88274a80-892e-45b3-853c-4f1cc3c33fcc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:44 np0005603435 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.205 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[1715aed9-1ec5-4e7d-bb61-811315c476a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:44 np0005603435 systemd-udevd[248492]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.231 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[1156a03b-b177-4f10-bc14-8e19cc94448a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:44 np0005603435 NetworkManager[49097]: <info>  [1769834744.2390] device (tap14b721ad-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:45:44 np0005603435 NetworkManager[49097]: <info>  [1769834744.2410] device (tap14b721ad-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.269 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[db228184-75f8-474c-bfa1-d8dd777e2145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.275 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[5b2feee0-1bab-4b40-ac91-cbc6f8252990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:44 np0005603435 systemd-udevd[248495]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:45:44 np0005603435 NetworkManager[49097]: <info>  [1769834744.2768] manager: (tapc68aa38c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.310 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[ea078544-d393-4bb8-806d-48555670bbc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.314 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[95fd0d82-5dc0-474f-b1b8-6cac25fdc358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:44 np0005603435 NetworkManager[49097]: <info>  [1769834744.3313] device (tapc68aa38c-d0): carrier: link connected
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.337 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[7e511526-e370-4a30-a00c-82bb4ec89602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.357 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[cbac4869-1ff1-4d9b-b1e1-8b6c16df6fa5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc68aa38c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:e3:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 379873, 'reachable_time': 22015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248522, 'error': None, 'target': 'ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.379 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[80db1dda-70a3-448e-a8c2-3cb135bc9424]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:e350'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 379873, 'tstamp': 379873}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248523, 'error': None, 'target': 'ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.397 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2689f9cf-59e7-4e26-856f-a80c45190888]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc68aa38c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:e3:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 379873, 'reachable_time': 22015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248524, 'error': None, 'target': 'ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.434 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0e7285-808a-4d8d-af19-8d63aa6ed691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.497 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9976c2-4576-4d3c-b514-0dd89d29c348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.499 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc68aa38c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.499 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.500 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc68aa38c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.502 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:44 np0005603435 NetworkManager[49097]: <info>  [1769834744.5036] manager: (tapc68aa38c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Jan 30 23:45:44 np0005603435 kernel: tapc68aa38c-d0: entered promiscuous mode
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.506 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.511 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc68aa38c-d0, col_values=(('external_ids', {'iface-id': 'e4623bae-4ba2-4934-a8d4-cf715fe5be3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.513 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:44 np0005603435 ovn_controller[145670]: 2026-01-31T04:45:44Z|00039|binding|INFO|Releasing lport e4623bae-4ba2-4934-a8d4-cf715fe5be3c from this chassis (sb_readonly=0)
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.514 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.517 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c68aa38c-df33-4336-9b66-c410f7d93cb3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c68aa38c-df33-4336-9b66-c410f7d93cb3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.518 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[48c99ecf-73b1-4297-a1a0-8aaf9f864565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.519 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-c68aa38c-df33-4336-9b66-c410f7d93cb3
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/c68aa38c-df33-4336-9b66-c410f7d93cb3.pid.haproxy
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID c68aa38c-df33-4336-9b66-c410f7d93cb3
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.521 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:44.521 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3', 'env', 'PROCESS_TAG=haproxy-c68aa38c-df33-4336-9b66-c410f7d93cb3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c68aa38c-df33-4336-9b66-c410f7d93cb3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.678 239942 DEBUG nova.compute.manager [req-9d3c79e4-2489-48bf-af17-70f99f900322 req-67dafbb5-e1ef-4555-adb8-1eaaa07e6220 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Received event network-vif-plugged-14b721ad-6ad1-4224-bc45-cccbe4643cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.678 239942 DEBUG oslo_concurrency.lockutils [req-9d3c79e4-2489-48bf-af17-70f99f900322 req-67dafbb5-e1ef-4555-adb8-1eaaa07e6220 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.679 239942 DEBUG oslo_concurrency.lockutils [req-9d3c79e4-2489-48bf-af17-70f99f900322 req-67dafbb5-e1ef-4555-adb8-1eaaa07e6220 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.679 239942 DEBUG oslo_concurrency.lockutils [req-9d3c79e4-2489-48bf-af17-70f99f900322 req-67dafbb5-e1ef-4555-adb8-1eaaa07e6220 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.679 239942 DEBUG nova.compute.manager [req-9d3c79e4-2489-48bf-af17-70f99f900322 req-67dafbb5-e1ef-4555-adb8-1eaaa07e6220 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Processing event network-vif-plugged-14b721ad-6ad1-4224-bc45-cccbe4643cd9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.973 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834744.97254, f50cfc01-3561-48d6-8426-5da90fc04271 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.974 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] VM Started (Lifecycle Event)#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.976 239942 DEBUG nova.compute.manager [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.981 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:45:44 np0005603435 podman[248591]: 2026-01-31 04:45:44.888122085 +0000 UTC m=+0.031706940 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.985 239942 INFO nova.virt.libvirt.driver [-] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Instance spawned successfully.#033[00m
Jan 30 23:45:44 np0005603435 nova_compute[239938]: 2026-01-31 04:45:44.985 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.019 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.023 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.038 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.039 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.040 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.041 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.042 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.043 239942 DEBUG nova.virt.libvirt.driver [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.087 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.088 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834744.974014, f50cfc01-3561-48d6-8426-5da90fc04271 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.088 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:45:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 88 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 2.7 MiB/s wr, 111 op/s
Jan 30 23:45:45 np0005603435 podman[248591]: 2026-01-31 04:45:45.20814656 +0000 UTC m=+0.351731385 container create 7379c64eaf0183318266f92796af5db690896736c69df5bd1dff9ddd22ef9a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.224 239942 INFO nova.compute.manager [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Took 8.73 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.225 239942 DEBUG nova.compute.manager [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.226 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.238 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834744.9804003, f50cfc01-3561-48d6-8426-5da90fc04271 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.238 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.313 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.318 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.336 239942 INFO nova.compute.manager [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Took 11.15 seconds to build instance.#033[00m
Jan 30 23:45:45 np0005603435 systemd[1]: Started libpod-conmon-7379c64eaf0183318266f92796af5db690896736c69df5bd1dff9ddd22ef9a8e.scope.
Jan 30 23:45:45 np0005603435 podman[248610]: 2026-01-31 04:45:45.39839469 +0000 UTC m=+0.151298217 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:45:45 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:45:45 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc8541a5d371f2fd435ffc96c12b130cc0b0f374a19dfdcb7f4eb78f93eb484/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.478 239942 DEBUG oslo_concurrency.lockutils [None req-d8f12471-4386-4522-b614-7557ee43556f a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "f50cfc01-3561-48d6-8426-5da90fc04271" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:45 np0005603435 podman[248591]: 2026-01-31 04:45:45.481880354 +0000 UTC m=+0.625465159 container init 7379c64eaf0183318266f92796af5db690896736c69df5bd1dff9ddd22ef9a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 30 23:45:45 np0005603435 podman[248591]: 2026-01-31 04:45:45.490504183 +0000 UTC m=+0.634088968 container start 7379c64eaf0183318266f92796af5db690896736c69df5bd1dff9ddd22ef9a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 30 23:45:45 np0005603435 neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3[248630]: [NOTICE]   (248636) : New worker (248638) forked
Jan 30 23:45:45 np0005603435 neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3[248630]: [NOTICE]   (248636) : Loading success.
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.566 239942 DEBUG nova.network.neutron [-] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.609 239942 INFO nova.compute.manager [-] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Took 1.69 seconds to deallocate network for instance.#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.695 239942 DEBUG oslo_concurrency.lockutils [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.696 239942 DEBUG oslo_concurrency.lockutils [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.775 239942 DEBUG oslo_concurrency.processutils [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:45 np0005603435 nova_compute[239938]: 2026-01-31 04:45:45.788 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:45:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:45:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2043079118' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:45:46 np0005603435 nova_compute[239938]: 2026-01-31 04:45:46.317 239942 DEBUG oslo_concurrency.processutils [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:46 np0005603435 nova_compute[239938]: 2026-01-31 04:45:46.324 239942 DEBUG nova.compute.provider_tree [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:45:46 np0005603435 nova_compute[239938]: 2026-01-31 04:45:46.375 239942 DEBUG nova.scheduler.client.report [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:45:46 np0005603435 nova_compute[239938]: 2026-01-31 04:45:46.674 239942 DEBUG oslo_concurrency.lockutils [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:46 np0005603435 nova_compute[239938]: 2026-01-31 04:45:46.776 239942 INFO nova.scheduler.client.report [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Deleted allocations for instance a4cae87c-b7f1-42ce-836c-8effc2fd4de5#033[00m
Jan 30 23:45:46 np0005603435 nova_compute[239938]: 2026-01-31 04:45:46.901 239942 DEBUG nova.compute.manager [req-3a30dd9f-7470-47f9-9bd2-ea4adea83c60 req-23a1f3d7-d03c-4f1c-8af6-568549af6d49 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Received event network-vif-deleted-e2a2845b-61f2-4c1a-ab7e-89ce08066e21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:45:46 np0005603435 nova_compute[239938]: 2026-01-31 04:45:46.902 239942 DEBUG nova.compute.manager [req-3a30dd9f-7470-47f9-9bd2-ea4adea83c60 req-23a1f3d7-d03c-4f1c-8af6-568549af6d49 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Received event network-vif-plugged-14b721ad-6ad1-4224-bc45-cccbe4643cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:45:46 np0005603435 nova_compute[239938]: 2026-01-31 04:45:46.903 239942 DEBUG oslo_concurrency.lockutils [req-3a30dd9f-7470-47f9-9bd2-ea4adea83c60 req-23a1f3d7-d03c-4f1c-8af6-568549af6d49 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:46 np0005603435 nova_compute[239938]: 2026-01-31 04:45:46.903 239942 DEBUG oslo_concurrency.lockutils [req-3a30dd9f-7470-47f9-9bd2-ea4adea83c60 req-23a1f3d7-d03c-4f1c-8af6-568549af6d49 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:46 np0005603435 nova_compute[239938]: 2026-01-31 04:45:46.904 239942 DEBUG oslo_concurrency.lockutils [req-3a30dd9f-7470-47f9-9bd2-ea4adea83c60 req-23a1f3d7-d03c-4f1c-8af6-568549af6d49 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:46 np0005603435 nova_compute[239938]: 2026-01-31 04:45:46.904 239942 DEBUG nova.compute.manager [req-3a30dd9f-7470-47f9-9bd2-ea4adea83c60 req-23a1f3d7-d03c-4f1c-8af6-568549af6d49 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] No waiting events found dispatching network-vif-plugged-14b721ad-6ad1-4224-bc45-cccbe4643cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:45:46 np0005603435 nova_compute[239938]: 2026-01-31 04:45:46.905 239942 WARNING nova.compute.manager [req-3a30dd9f-7470-47f9-9bd2-ea4adea83c60 req-23a1f3d7-d03c-4f1c-8af6-568549af6d49 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Received unexpected event network-vif-plugged-14b721ad-6ad1-4224-bc45-cccbe4643cd9 for instance with vm_state active and task_state None.#033[00m
Jan 30 23:45:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 711 KiB/s rd, 1.2 MiB/s wr, 146 op/s
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.315 239942 DEBUG oslo_concurrency.lockutils [None req-1181fd15-f0aa-4f41-b19b-7fd41da38495 b64529ecf0d54f718c07683e4fe74bc1 293dba0b4ad14f1cb4a3b761ad5fd07a - - default default] Lock "a4cae87c-b7f1-42ce-836c-8effc2fd4de5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.523 239942 DEBUG oslo_concurrency.lockutils [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "f50cfc01-3561-48d6-8426-5da90fc04271" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.523 239942 DEBUG oslo_concurrency.lockutils [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "f50cfc01-3561-48d6-8426-5da90fc04271" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.524 239942 DEBUG oslo_concurrency.lockutils [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.524 239942 DEBUG oslo_concurrency.lockutils [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.524 239942 DEBUG oslo_concurrency.lockutils [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.525 239942 INFO nova.compute.manager [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Terminating instance#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.527 239942 DEBUG nova.compute.manager [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:45:47 np0005603435 kernel: tap14b721ad-6a (unregistering): left promiscuous mode
Jan 30 23:45:47 np0005603435 NetworkManager[49097]: <info>  [1769834747.6384] device (tap14b721ad-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.637 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:47 np0005603435 ovn_controller[145670]: 2026-01-31T04:45:47Z|00040|binding|INFO|Releasing lport 14b721ad-6ad1-4224-bc45-cccbe4643cd9 from this chassis (sb_readonly=0)
Jan 30 23:45:47 np0005603435 ovn_controller[145670]: 2026-01-31T04:45:47Z|00041|binding|INFO|Setting lport 14b721ad-6ad1-4224-bc45-cccbe4643cd9 down in Southbound
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.646 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:47 np0005603435 ovn_controller[145670]: 2026-01-31T04:45:47Z|00042|binding|INFO|Removing iface tap14b721ad-6a ovn-installed in OVS
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.653 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.660 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:47 np0005603435 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 30 23:45:47 np0005603435 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 3.161s CPU time.
Jan 30 23:45:47 np0005603435 systemd-machined[208030]: Machine qemu-2-instance-00000002 terminated.
Jan 30 23:45:47 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:47.694 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:71:12 10.100.0.3'], port_security=['fa:16:3e:6d:71:12 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f50cfc01-3561-48d6-8426-5da90fc04271', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c68aa38c-df33-4336-9b66-c410f7d93cb3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3bdc8fbcac3b419ca374be1c490a20e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f73757ab-8ff2-4654-b537-c05855ab04c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e75c9d70-34ab-45c9-8a82-90b4b0f4bff4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=14b721ad-6ad1-4224-bc45-cccbe4643cd9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:45:47 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:47.695 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 14b721ad-6ad1-4224-bc45-cccbe4643cd9 in datapath c68aa38c-df33-4336-9b66-c410f7d93cb3 unbound from our chassis#033[00m
Jan 30 23:45:47 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:47.696 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c68aa38c-df33-4336-9b66-c410f7d93cb3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:45:47 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:47.697 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe7fc96-7f2b-492f-9b33-c392db559869]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:47 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:47.697 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3 namespace which is not needed anymore#033[00m
Jan 30 23:45:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.742 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.745 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.754 239942 INFO nova.virt.libvirt.driver [-] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Instance destroyed successfully.#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.754 239942 DEBUG nova.objects.instance [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lazy-loading 'resources' on Instance uuid f50cfc01-3561-48d6-8426-5da90fc04271 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:45:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Jan 30 23:45:47 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.847 239942 DEBUG nova.virt.libvirt.vif [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:45:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-985104523',display_name='tempest-VolumesActionsTest-instance-985104523',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-985104523',id=2,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:45:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3bdc8fbcac3b419ca374be1c490a20e5',ramdisk_id='',reservation_id='r-h7z59rdf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1818515503',owner_user_name='tempest-VolumesActionsTest-1818515503-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:45:45Z,user_data=None,user_id='a60e5ee062304ce4b921d51a9d0be89f',uuid=f50cfc01-3561-48d6-8426-5da90fc04271,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "address": "fa:16:3e:6d:71:12", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b721ad-6a", "ovs_interfaceid": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.848 239942 DEBUG nova.network.os_vif_util [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Converting VIF {"id": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "address": "fa:16:3e:6d:71:12", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b721ad-6a", "ovs_interfaceid": "14b721ad-6ad1-4224-bc45-cccbe4643cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.849 239942 DEBUG nova.network.os_vif_util [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:71:12,bridge_name='br-int',has_traffic_filtering=True,id=14b721ad-6ad1-4224-bc45-cccbe4643cd9,network=Network(c68aa38c-df33-4336-9b66-c410f7d93cb3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14b721ad-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.849 239942 DEBUG os_vif [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:71:12,bridge_name='br-int',has_traffic_filtering=True,id=14b721ad-6ad1-4224-bc45-cccbe4643cd9,network=Network(c68aa38c-df33-4336-9b66-c410f7d93cb3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14b721ad-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.851 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.851 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14b721ad-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.853 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.855 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:47 np0005603435 nova_compute[239938]: 2026-01-31 04:45:47.858 239942 INFO os_vif [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:71:12,bridge_name='br-int',has_traffic_filtering=True,id=14b721ad-6ad1-4224-bc45-cccbe4643cd9,network=Network(c68aa38c-df33-4336-9b66-c410f7d93cb3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14b721ad-6a')#033[00m
Jan 30 23:45:47 np0005603435 neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3[248630]: [NOTICE]   (248636) : haproxy version is 2.8.14-c23fe91
Jan 30 23:45:47 np0005603435 neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3[248630]: [NOTICE]   (248636) : path to executable is /usr/sbin/haproxy
Jan 30 23:45:47 np0005603435 neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3[248630]: [WARNING]  (248636) : Exiting Master process...
Jan 30 23:45:47 np0005603435 neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3[248630]: [WARNING]  (248636) : Exiting Master process...
Jan 30 23:45:47 np0005603435 neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3[248630]: [ALERT]    (248636) : Current worker (248638) exited with code 143 (Terminated)
Jan 30 23:45:47 np0005603435 neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3[248630]: [WARNING]  (248636) : All workers exited. Exiting... (0)
Jan 30 23:45:47 np0005603435 systemd[1]: libpod-7379c64eaf0183318266f92796af5db690896736c69df5bd1dff9ddd22ef9a8e.scope: Deactivated successfully.
Jan 30 23:45:47 np0005603435 podman[248702]: 2026-01-31 04:45:47.925788438 +0000 UTC m=+0.146319027 container died 7379c64eaf0183318266f92796af5db690896736c69df5bd1dff9ddd22ef9a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 30 23:45:48 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7379c64eaf0183318266f92796af5db690896736c69df5bd1dff9ddd22ef9a8e-userdata-shm.mount: Deactivated successfully.
Jan 30 23:45:48 np0005603435 systemd[1]: var-lib-containers-storage-overlay-edc8541a5d371f2fd435ffc96c12b130cc0b0f374a19dfdcb7f4eb78f93eb484-merged.mount: Deactivated successfully.
Jan 30 23:45:48 np0005603435 podman[248702]: 2026-01-31 04:45:48.554199737 +0000 UTC m=+0.774730326 container cleanup 7379c64eaf0183318266f92796af5db690896736c69df5bd1dff9ddd22ef9a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 30 23:45:48 np0005603435 systemd[1]: libpod-conmon-7379c64eaf0183318266f92796af5db690896736c69df5bd1dff9ddd22ef9a8e.scope: Deactivated successfully.
Jan 30 23:45:48 np0005603435 podman[248748]: 2026-01-31 04:45:48.786694551 +0000 UTC m=+0.209982890 container remove 7379c64eaf0183318266f92796af5db690896736c69df5bd1dff9ddd22ef9a8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:45:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:48.792 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f42650-f5a5-4682-a547-9374a002c676]: (4, ('Sat Jan 31 04:45:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3 (7379c64eaf0183318266f92796af5db690896736c69df5bd1dff9ddd22ef9a8e)\n7379c64eaf0183318266f92796af5db690896736c69df5bd1dff9ddd22ef9a8e\nSat Jan 31 04:45:48 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3 (7379c64eaf0183318266f92796af5db690896736c69df5bd1dff9ddd22ef9a8e)\n7379c64eaf0183318266f92796af5db690896736c69df5bd1dff9ddd22ef9a8e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:48.795 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9d712bf6-f0b2-438e-8f00-74dc29ff6f39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:48.796 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc68aa38c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:45:48 np0005603435 nova_compute[239938]: 2026-01-31 04:45:48.803 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:48 np0005603435 kernel: tapc68aa38c-d0: left promiscuous mode
Jan 30 23:45:48 np0005603435 nova_compute[239938]: 2026-01-31 04:45:48.805 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:48 np0005603435 nova_compute[239938]: 2026-01-31 04:45:48.811 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:48.810 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[be3c36e3-c3d5-4f90-b211-f19b925f793d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:48.827 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[0b32cf60-8f3d-4ba7-b769-1157df12ca91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:48.828 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[72a5c242-6b4b-4b69-b3bb-b36be23c926f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:48.845 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[dc1ec698-609e-4a63-82fd-a38c575b3756]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 379866, 'reachable_time': 34010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248767, 'error': None, 'target': 'ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:48.848 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:45:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:48.848 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f37381-642e-4958-aed1-91ca9f31d101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:45:48 np0005603435 systemd[1]: run-netns-ovnmeta\x2dc68aa38c\x2ddf33\x2d4336\x2d9b66\x2dc410f7d93cb3.mount: Deactivated successfully.
Jan 30 23:45:49 np0005603435 nova_compute[239938]: 2026-01-31 04:45:49.016 239942 DEBUG nova.compute.manager [req-abfb79aa-4846-4fe7-a5ab-230867d11950 req-b45c9ea2-9d8a-409d-af18-f69b3184914d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Received event network-vif-unplugged-14b721ad-6ad1-4224-bc45-cccbe4643cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:45:49 np0005603435 nova_compute[239938]: 2026-01-31 04:45:49.016 239942 DEBUG oslo_concurrency.lockutils [req-abfb79aa-4846-4fe7-a5ab-230867d11950 req-b45c9ea2-9d8a-409d-af18-f69b3184914d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:49 np0005603435 nova_compute[239938]: 2026-01-31 04:45:49.017 239942 DEBUG oslo_concurrency.lockutils [req-abfb79aa-4846-4fe7-a5ab-230867d11950 req-b45c9ea2-9d8a-409d-af18-f69b3184914d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:49 np0005603435 nova_compute[239938]: 2026-01-31 04:45:49.017 239942 DEBUG oslo_concurrency.lockutils [req-abfb79aa-4846-4fe7-a5ab-230867d11950 req-b45c9ea2-9d8a-409d-af18-f69b3184914d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:49 np0005603435 nova_compute[239938]: 2026-01-31 04:45:49.017 239942 DEBUG nova.compute.manager [req-abfb79aa-4846-4fe7-a5ab-230867d11950 req-b45c9ea2-9d8a-409d-af18-f69b3184914d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] No waiting events found dispatching network-vif-unplugged-14b721ad-6ad1-4224-bc45-cccbe4643cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:45:49 np0005603435 nova_compute[239938]: 2026-01-31 04:45:49.018 239942 DEBUG nova.compute.manager [req-abfb79aa-4846-4fe7-a5ab-230867d11950 req-b45c9ea2-9d8a-409d-af18-f69b3184914d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Received event network-vif-unplugged-14b721ad-6ad1-4224-bc45-cccbe4643cd9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:45:49 np0005603435 nova_compute[239938]: 2026-01-31 04:45:49.018 239942 DEBUG nova.compute.manager [req-abfb79aa-4846-4fe7-a5ab-230867d11950 req-b45c9ea2-9d8a-409d-af18-f69b3184914d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Received event network-vif-plugged-14b721ad-6ad1-4224-bc45-cccbe4643cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:45:49 np0005603435 nova_compute[239938]: 2026-01-31 04:45:49.018 239942 DEBUG oslo_concurrency.lockutils [req-abfb79aa-4846-4fe7-a5ab-230867d11950 req-b45c9ea2-9d8a-409d-af18-f69b3184914d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:49 np0005603435 nova_compute[239938]: 2026-01-31 04:45:49.019 239942 DEBUG oslo_concurrency.lockutils [req-abfb79aa-4846-4fe7-a5ab-230867d11950 req-b45c9ea2-9d8a-409d-af18-f69b3184914d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:49 np0005603435 nova_compute[239938]: 2026-01-31 04:45:49.019 239942 DEBUG oslo_concurrency.lockutils [req-abfb79aa-4846-4fe7-a5ab-230867d11950 req-b45c9ea2-9d8a-409d-af18-f69b3184914d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "f50cfc01-3561-48d6-8426-5da90fc04271-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:49 np0005603435 nova_compute[239938]: 2026-01-31 04:45:49.019 239942 DEBUG nova.compute.manager [req-abfb79aa-4846-4fe7-a5ab-230867d11950 req-b45c9ea2-9d8a-409d-af18-f69b3184914d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] No waiting events found dispatching network-vif-plugged-14b721ad-6ad1-4224-bc45-cccbe4643cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:45:49 np0005603435 nova_compute[239938]: 2026-01-31 04:45:49.019 239942 WARNING nova.compute.manager [req-abfb79aa-4846-4fe7-a5ab-230867d11950 req-b45c9ea2-9d8a-409d-af18-f69b3184914d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Received unexpected event network-vif-plugged-14b721ad-6ad1-4224-bc45-cccbe4643cd9 for instance with vm_state active and task_state deleting.#033[00m
Jan 30 23:45:49 np0005603435 nova_compute[239938]: 2026-01-31 04:45:49.029 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 24 KiB/s wr, 167 op/s
Jan 30 23:45:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:45:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 161 op/s
Jan 30 23:45:52 np0005603435 nova_compute[239938]: 2026-01-31 04:45:52.855 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:45:53.022936) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834753023004, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1269, "num_deletes": 257, "total_data_size": 1635664, "memory_usage": 1664720, "flush_reason": "Manual Compaction"}
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834753076692, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1615242, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20132, "largest_seqno": 21400, "table_properties": {"data_size": 1609038, "index_size": 3407, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14023, "raw_average_key_size": 20, "raw_value_size": 1596275, "raw_average_value_size": 2368, "num_data_blocks": 151, "num_entries": 674, "num_filter_entries": 674, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769834671, "oldest_key_time": 1769834671, "file_creation_time": 1769834753, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 53822 microseconds, and 7540 cpu microseconds.
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:45:53.076764) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1615242 bytes OK
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:45:53.076789) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:45:53.106744) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:45:53.106792) EVENT_LOG_v1 {"time_micros": 1769834753106780, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:45:53.106821) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1629740, prev total WAL file size 1629740, number of live WAL files 2.
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:45:53.107586) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1577KB)], [47(7560KB)]
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834753107632, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9357428, "oldest_snapshot_seqno": -1}
Jan 30 23:45:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 54 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 165 op/s
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4558 keys, 7599114 bytes, temperature: kUnknown
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834753307409, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7599114, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7566892, "index_size": 19696, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11461, "raw_key_size": 113171, "raw_average_key_size": 24, "raw_value_size": 7482779, "raw_average_value_size": 1641, "num_data_blocks": 816, "num_entries": 4558, "num_filter_entries": 4558, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769834753, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:45:53.307697) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7599114 bytes
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:45:53.335176) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 46.8 rd, 38.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.4 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(10.5) write-amplify(4.7) OK, records in: 5084, records dropped: 526 output_compression: NoCompression
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:45:53.335209) EVENT_LOG_v1 {"time_micros": 1769834753335193, "job": 24, "event": "compaction_finished", "compaction_time_micros": 199867, "compaction_time_cpu_micros": 18100, "output_level": 6, "num_output_files": 1, "total_output_size": 7599114, "num_input_records": 5084, "num_output_records": 4558, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834753335659, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834753337279, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:45:53.107480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:45:53.337397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:45:53.337406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:45:53.337411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:45:53.337415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:45:53 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:45:53.337420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:45:54 np0005603435 nova_compute[239938]: 2026-01-31 04:45:54.031 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Jan 30 23:45:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Jan 30 23:45:54 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Jan 30 23:45:54 np0005603435 nova_compute[239938]: 2026-01-31 04:45:54.669 239942 INFO nova.virt.libvirt.driver [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Deleting instance files /var/lib/nova/instances/f50cfc01-3561-48d6-8426-5da90fc04271_del#033[00m
Jan 30 23:45:54 np0005603435 nova_compute[239938]: 2026-01-31 04:45:54.670 239942 INFO nova.virt.libvirt.driver [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Deletion of /var/lib/nova/instances/f50cfc01-3561-48d6-8426-5da90fc04271_del complete#033[00m
Jan 30 23:45:54 np0005603435 nova_compute[239938]: 2026-01-31 04:45:54.892 239942 INFO nova.compute.manager [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Took 7.37 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:45:54 np0005603435 nova_compute[239938]: 2026-01-31 04:45:54.893 239942 DEBUG oslo.service.loopingcall [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:45:54 np0005603435 nova_compute[239938]: 2026-01-31 04:45:54.894 239942 DEBUG nova.compute.manager [-] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:45:54 np0005603435 nova_compute[239938]: 2026-01-31 04:45:54.895 239942 DEBUG nova.network.neutron [-] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:45:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.7 KiB/s wr, 154 op/s
Jan 30 23:45:55 np0005603435 nova_compute[239938]: 2026-01-31 04:45:55.646 239942 DEBUG nova.network.neutron [-] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:45:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:45:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3723935172' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:45:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:45:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3723935172' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:45:55 np0005603435 nova_compute[239938]: 2026-01-31 04:45:55.685 239942 INFO nova.compute.manager [-] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Took 0.79 seconds to deallocate network for instance.#033[00m
Jan 30 23:45:55 np0005603435 nova_compute[239938]: 2026-01-31 04:45:55.691 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769834740.6874614, a4cae87c-b7f1-42ce-836c-8effc2fd4de5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:45:55 np0005603435 nova_compute[239938]: 2026-01-31 04:45:55.692 239942 INFO nova.compute.manager [-] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:45:55 np0005603435 nova_compute[239938]: 2026-01-31 04:45:55.828 239942 DEBUG nova.compute.manager [None req-7999d4c1-d777-4ddf-8390-683b2f57aa86 - - - - - -] [instance: a4cae87c-b7f1-42ce-836c-8effc2fd4de5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:45:55 np0005603435 nova_compute[239938]: 2026-01-31 04:45:55.829 239942 DEBUG oslo_concurrency.lockutils [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:55 np0005603435 nova_compute[239938]: 2026-01-31 04:45:55.830 239942 DEBUG oslo_concurrency.lockutils [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:55 np0005603435 nova_compute[239938]: 2026-01-31 04:45:55.831 239942 DEBUG nova.compute.manager [req-5381ad25-858b-48de-b072-ccbfca0a7750 req-516d2633-8ff4-42ba-86e3-cd7837f23f4e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Received event network-vif-deleted-14b721ad-6ad1-4224-bc45-cccbe4643cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:45:55 np0005603435 nova_compute[239938]: 2026-01-31 04:45:55.872 239942 DEBUG oslo_concurrency.processutils [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:45:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:55.909 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:45:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:55.910 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:45:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:45:55.910 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:45:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Jan 30 23:45:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Jan 30 23:45:56 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Jan 30 23:45:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:45:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1129447679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:45:56 np0005603435 nova_compute[239938]: 2026-01-31 04:45:56.425 239942 DEBUG oslo_concurrency.processutils [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:45:56 np0005603435 nova_compute[239938]: 2026-01-31 04:45:56.432 239942 DEBUG nova.compute.provider_tree [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:45:56 np0005603435 nova_compute[239938]: 2026-01-31 04:45:56.453 239942 DEBUG nova.scheduler.client.report [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:45:56 np0005603435 nova_compute[239938]: 2026-01-31 04:45:56.482 239942 DEBUG oslo_concurrency.lockutils [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:56 np0005603435 nova_compute[239938]: 2026-01-31 04:45:56.518 239942 INFO nova.scheduler.client.report [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Deleted allocations for instance f50cfc01-3561-48d6-8426-5da90fc04271#033[00m
Jan 30 23:45:56 np0005603435 nova_compute[239938]: 2026-01-31 04:45:56.591 239942 DEBUG oslo_concurrency.lockutils [None req-665e912f-4199-4252-8c97-310ff8cd1c9c a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "f50cfc01-3561-48d6-8426-5da90fc04271" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:45:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 687 KiB/s rd, 3.6 KiB/s wr, 109 op/s
Jan 30 23:45:57 np0005603435 nova_compute[239938]: 2026-01-31 04:45:57.860 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:45:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2874213677' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:45:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:45:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2874213677' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:45:59 np0005603435 nova_compute[239938]: 2026-01-31 04:45:59.034 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:45:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 4.1 KiB/s wr, 100 op/s
Jan 30 23:46:00 np0005603435 nova_compute[239938]: 2026-01-31 04:46:00.279 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:00 np0005603435 nova_compute[239938]: 2026-01-31 04:46:00.279 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:00 np0005603435 nova_compute[239938]: 2026-01-31 04:46:00.303 239942 DEBUG nova.compute.manager [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:46:00 np0005603435 nova_compute[239938]: 2026-01-31 04:46:00.375 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:00 np0005603435 nova_compute[239938]: 2026-01-31 04:46:00.376 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:00 np0005603435 nova_compute[239938]: 2026-01-31 04:46:00.386 239942 DEBUG nova.virt.hardware [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:46:00 np0005603435 nova_compute[239938]: 2026-01-31 04:46:00.386 239942 INFO nova.compute.claims [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:46:00 np0005603435 nova_compute[239938]: 2026-01-31 04:46:00.503 239942 DEBUG oslo_concurrency.processutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:46:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/133415278' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:46:01 np0005603435 nova_compute[239938]: 2026-01-31 04:46:01.036 239942 DEBUG oslo_concurrency.processutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:46:01 np0005603435 nova_compute[239938]: 2026-01-31 04:46:01.043 239942 DEBUG nova.compute.provider_tree [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:46:01 np0005603435 nova_compute[239938]: 2026-01-31 04:46:01.076 239942 DEBUG nova.scheduler.client.report [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:46:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:46:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Jan 30 23:46:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.7 KiB/s wr, 67 op/s
Jan 30 23:46:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Jan 30 23:46:01 np0005603435 nova_compute[239938]: 2026-01-31 04:46:01.218 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:46:01 np0005603435 nova_compute[239938]: 2026-01-31 04:46:01.219 239942 DEBUG nova.compute.manager [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:46:01 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Jan 30 23:46:01 np0005603435 nova_compute[239938]: 2026-01-31 04:46:01.545 239942 DEBUG nova.compute.manager [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:46:01 np0005603435 nova_compute[239938]: 2026-01-31 04:46:01.545 239942 DEBUG nova.network.neutron [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:46:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:46:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3553138704' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:46:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:46:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3553138704' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:46:01 np0005603435 nova_compute[239938]: 2026-01-31 04:46:01.639 239942 INFO nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:46:01 np0005603435 nova_compute[239938]: 2026-01-31 04:46:01.748 239942 DEBUG nova.compute.manager [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:46:01 np0005603435 nova_compute[239938]: 2026-01-31 04:46:01.807 239942 DEBUG nova.policy [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a60e5ee062304ce4b921d51a9d0be89f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3bdc8fbcac3b419ca374be1c490a20e5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.065 239942 DEBUG nova.compute.manager [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.067 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.067 239942 INFO nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Creating image(s)#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.088 239942 DEBUG nova.storage.rbd_utils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] rbd image bbb79b19-b4e9-4b82-86a3-f44ba87a2877_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.114 239942 DEBUG nova.storage.rbd_utils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] rbd image bbb79b19-b4e9-4b82-86a3-f44ba87a2877_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.141 239942 DEBUG nova.storage.rbd_utils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] rbd image bbb79b19-b4e9-4b82-86a3-f44ba87a2877_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.148 239942 DEBUG oslo_concurrency.processutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.230 239942 DEBUG oslo_concurrency.processutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.232 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.233 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.233 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.258 239942 DEBUG nova.storage.rbd_utils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] rbd image bbb79b19-b4e9-4b82-86a3-f44ba87a2877_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.261 239942 DEBUG oslo_concurrency.processutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 bbb79b19-b4e9-4b82-86a3-f44ba87a2877_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.571 239942 DEBUG oslo_concurrency.processutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 bbb79b19-b4e9-4b82-86a3-f44ba87a2877_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.645 239942 DEBUG nova.storage.rbd_utils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] resizing rbd image bbb79b19-b4e9-4b82-86a3-f44ba87a2877_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.729 239942 DEBUG nova.objects.instance [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lazy-loading 'migration_context' on Instance uuid bbb79b19-b4e9-4b82-86a3-f44ba87a2877 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.753 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769834747.7526238, f50cfc01-3561-48d6-8426-5da90fc04271 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.753 239942 INFO nova.compute.manager [-] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.756 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.756 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Ensure instance console log exists: /var/lib/nova/instances/bbb79b19-b4e9-4b82-86a3-f44ba87a2877/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.756 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.757 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.757 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.851 239942 DEBUG nova.compute.manager [None req-252ceb8a-3fe8-4950-99b3-fe95c93ba15e - - - - - -] [instance: f50cfc01-3561-48d6-8426-5da90fc04271] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:46:02 np0005603435 nova_compute[239938]: 2026-01-31 04:46:02.864 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:03 np0005603435 nova_compute[239938]: 2026-01-31 04:46:03.036 239942 DEBUG nova.network.neutron [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Successfully created port: 1c038eff-8eff-4626-9560-fc2342c80f86 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:46:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.5 KiB/s wr, 67 op/s
Jan 30 23:46:04 np0005603435 nova_compute[239938]: 2026-01-31 04:46:04.036 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:04 np0005603435 nova_compute[239938]: 2026-01-31 04:46:04.565 239942 DEBUG nova.network.neutron [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Successfully updated port: 1c038eff-8eff-4626-9560-fc2342c80f86 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:46:04 np0005603435 nova_compute[239938]: 2026-01-31 04:46:04.653 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "refresh_cache-bbb79b19-b4e9-4b82-86a3-f44ba87a2877" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:46:04 np0005603435 nova_compute[239938]: 2026-01-31 04:46:04.654 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquired lock "refresh_cache-bbb79b19-b4e9-4b82-86a3-f44ba87a2877" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:46:04 np0005603435 nova_compute[239938]: 2026-01-31 04:46:04.654 239942 DEBUG nova.network.neutron [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:46:04 np0005603435 nova_compute[239938]: 2026-01-31 04:46:04.829 239942 DEBUG nova.network.neutron [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:46:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:46:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3519876779' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:46:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:46:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3519876779' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:46:04 np0005603435 nova_compute[239938]: 2026-01-31 04:46:04.912 239942 DEBUG nova.compute.manager [req-a20f1b37-fe4b-41b7-a01d-024b27f74657 req-cae42fe5-abb0-40c2-95e5-2922cbf6c363 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Received event network-changed-1c038eff-8eff-4626-9560-fc2342c80f86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:46:04 np0005603435 nova_compute[239938]: 2026-01-31 04:46:04.913 239942 DEBUG nova.compute.manager [req-a20f1b37-fe4b-41b7-a01d-024b27f74657 req-cae42fe5-abb0-40c2-95e5-2922cbf6c363 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Refreshing instance network info cache due to event network-changed-1c038eff-8eff-4626-9560-fc2342c80f86. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:46:04 np0005603435 nova_compute[239938]: 2026-01-31 04:46:04.913 239942 DEBUG oslo_concurrency.lockutils [req-a20f1b37-fe4b-41b7-a01d-024b27f74657 req-cae42fe5-abb0-40c2-95e5-2922cbf6c363 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-bbb79b19-b4e9-4b82-86a3-f44ba87a2877" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:46:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 46 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 275 KiB/s wr, 65 op/s
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.631 239942 DEBUG nova.network.neutron [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Updating instance_info_cache with network_info: [{"id": "1c038eff-8eff-4626-9560-fc2342c80f86", "address": "fa:16:3e:62:b5:2a", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c038eff-8e", "ovs_interfaceid": "1c038eff-8eff-4626-9560-fc2342c80f86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.722 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Releasing lock "refresh_cache-bbb79b19-b4e9-4b82-86a3-f44ba87a2877" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.723 239942 DEBUG nova.compute.manager [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Instance network_info: |[{"id": "1c038eff-8eff-4626-9560-fc2342c80f86", "address": "fa:16:3e:62:b5:2a", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c038eff-8e", "ovs_interfaceid": "1c038eff-8eff-4626-9560-fc2342c80f86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.723 239942 DEBUG oslo_concurrency.lockutils [req-a20f1b37-fe4b-41b7-a01d-024b27f74657 req-cae42fe5-abb0-40c2-95e5-2922cbf6c363 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-bbb79b19-b4e9-4b82-86a3-f44ba87a2877" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.723 239942 DEBUG nova.network.neutron [req-a20f1b37-fe4b-41b7-a01d-024b27f74657 req-cae42fe5-abb0-40c2-95e5-2922cbf6c363 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Refreshing network info cache for port 1c038eff-8eff-4626-9560-fc2342c80f86 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.726 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Start _get_guest_xml network_info=[{"id": "1c038eff-8eff-4626-9560-fc2342c80f86", "address": "fa:16:3e:62:b5:2a", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c038eff-8e", "ovs_interfaceid": "1c038eff-8eff-4626-9560-fc2342c80f86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'guest_format': None, 'image_id': 'bf004ad8-fb70-4caa-9170-9f02e22d687d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.729 239942 WARNING nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.733 239942 DEBUG nova.virt.libvirt.host [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.733 239942 DEBUG nova.virt.libvirt.host [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.736 239942 DEBUG nova.virt.libvirt.host [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.737 239942 DEBUG nova.virt.libvirt.host [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.737 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.737 239942 DEBUG nova.virt.hardware [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.738 239942 DEBUG nova.virt.hardware [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.738 239942 DEBUG nova.virt.hardware [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.738 239942 DEBUG nova.virt.hardware [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.738 239942 DEBUG nova.virt.hardware [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.739 239942 DEBUG nova.virt.hardware [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.739 239942 DEBUG nova.virt.hardware [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.739 239942 DEBUG nova.virt.hardware [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.739 239942 DEBUG nova.virt.hardware [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.740 239942 DEBUG nova.virt.hardware [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.740 239942 DEBUG nova.virt.hardware [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.742 239942 DEBUG oslo_concurrency.processutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.890 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 30 23:46:05 np0005603435 nova_compute[239938]: 2026-01-31 04:46:05.936 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.049 239942 DEBUG oslo_concurrency.lockutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Acquiring lock "0b2b7bfd-3a9d-431e-911f-92e2084191c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.050 239942 DEBUG oslo_concurrency.lockutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lock "0b2b7bfd-3a9d-431e-911f-92e2084191c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:46:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:46:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/736364022' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.271 239942 DEBUG oslo_concurrency.processutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.299 239942 DEBUG nova.storage.rbd_utils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] rbd image bbb79b19-b4e9-4b82-86a3-f44ba87a2877_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.303 239942 DEBUG oslo_concurrency.processutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.321 239942 DEBUG nova.compute.manager [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:46:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:46:06
Jan 30 23:46:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:46:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:46:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.control', 'volumes', 'backups', 'default.rgw.log', 'default.rgw.meta']
Jan 30 23:46:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.773 239942 DEBUG oslo_concurrency.lockutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.773 239942 DEBUG oslo_concurrency.lockutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.782 239942 DEBUG nova.virt.hardware [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.782 239942 INFO nova.compute.claims [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:46:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:46:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1779579082' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.855 239942 DEBUG oslo_concurrency.processutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.857 239942 DEBUG nova.virt.libvirt.vif [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:45:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-804384806',display_name='tempest-VolumesActionsTest-instance-804384806',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-804384806',id=3,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3bdc8fbcac3b419ca374be1c490a20e5',ramdisk_id='',reservation_id='r-s539x4de',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1818515503',owner_user_name='tempest-VolumesActionsTest-1818515503
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:46:01Z,user_data=None,user_id='a60e5ee062304ce4b921d51a9d0be89f',uuid=bbb79b19-b4e9-4b82-86a3-f44ba87a2877,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c038eff-8eff-4626-9560-fc2342c80f86", "address": "fa:16:3e:62:b5:2a", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c038eff-8e", "ovs_interfaceid": "1c038eff-8eff-4626-9560-fc2342c80f86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.857 239942 DEBUG nova.network.os_vif_util [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Converting VIF {"id": "1c038eff-8eff-4626-9560-fc2342c80f86", "address": "fa:16:3e:62:b5:2a", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c038eff-8e", "ovs_interfaceid": "1c038eff-8eff-4626-9560-fc2342c80f86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.859 239942 DEBUG nova.network.os_vif_util [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:b5:2a,bridge_name='br-int',has_traffic_filtering=True,id=1c038eff-8eff-4626-9560-fc2342c80f86,network=Network(c68aa38c-df33-4336-9b66-c410f7d93cb3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c038eff-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.860 239942 DEBUG nova.objects.instance [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid bbb79b19-b4e9-4b82-86a3-f44ba87a2877 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:46:06 np0005603435 nova_compute[239938]: 2026-01-31 04:46:06.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 30 23:46:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:46:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:46:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:46:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:46:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:46:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:46:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 69 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 1.2 MiB/s wr, 67 op/s
Jan 30 23:46:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:46:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:46:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:46:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:46:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:46:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:46:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:46:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:46:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:46:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.241 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  <uuid>bbb79b19-b4e9-4b82-86a3-f44ba87a2877</uuid>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  <name>instance-00000003</name>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <nova:name>tempest-VolumesActionsTest-instance-804384806</nova:name>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:46:05</nova:creationTime>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:        <nova:user uuid="a60e5ee062304ce4b921d51a9d0be89f">tempest-VolumesActionsTest-1818515503-project-member</nova:user>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:        <nova:project uuid="3bdc8fbcac3b419ca374be1c490a20e5">tempest-VolumesActionsTest-1818515503</nova:project>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <nova:root type="image" uuid="bf004ad8-fb70-4caa-9170-9f02e22d687d"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:        <nova:port uuid="1c038eff-8eff-4626-9560-fc2342c80f86">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <entry name="serial">bbb79b19-b4e9-4b82-86a3-f44ba87a2877</entry>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <entry name="uuid">bbb79b19-b4e9-4b82-86a3-f44ba87a2877</entry>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/bbb79b19-b4e9-4b82-86a3-f44ba87a2877_disk">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/bbb79b19-b4e9-4b82-86a3-f44ba87a2877_disk.config">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:62:b5:2a"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <target dev="tap1c038eff-8e"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/bbb79b19-b4e9-4b82-86a3-f44ba87a2877/console.log" append="off"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:46:07 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:46:07 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:46:07 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:46:07 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.242 239942 DEBUG nova.compute.manager [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Preparing to wait for external event network-vif-plugged-1c038eff-8eff-4626-9560-fc2342c80f86 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.242 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.242 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.243 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.243 239942 DEBUG nova.virt.libvirt.vif [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:45:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-804384806',display_name='tempest-VolumesActionsTest-instance-804384806',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-804384806',id=3,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3bdc8fbcac3b419ca374be1c490a20e5',ramdisk_id='',reservation_id='r-s539x4de',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1818515503',owner_user_name='tempest-VolumesActionsTest-1818515503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:46:01Z,user_data=None,user_id='a60e5ee062304ce4b921d51a9d0be89f',uuid=bbb79b19-b4e9-4b82-86a3-f44ba87a2877,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c038eff-8eff-4626-9560-fc2342c80f86", "address": "fa:16:3e:62:b5:2a", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c038eff-8e", "ovs_interfaceid": "1c038eff-8eff-4626-9560-fc2342c80f86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.244 239942 DEBUG nova.network.os_vif_util [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Converting VIF {"id": "1c038eff-8eff-4626-9560-fc2342c80f86", "address": "fa:16:3e:62:b5:2a", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c038eff-8e", "ovs_interfaceid": "1c038eff-8eff-4626-9560-fc2342c80f86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.244 239942 DEBUG nova.network.os_vif_util [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:b5:2a,bridge_name='br-int',has_traffic_filtering=True,id=1c038eff-8eff-4626-9560-fc2342c80f86,network=Network(c68aa38c-df33-4336-9b66-c410f7d93cb3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c038eff-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.245 239942 DEBUG os_vif [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:b5:2a,bridge_name='br-int',has_traffic_filtering=True,id=1c038eff-8eff-4626-9560-fc2342c80f86,network=Network(c68aa38c-df33-4336-9b66-c410f7d93cb3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c038eff-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.245 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.246 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.246 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.249 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.250 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c038eff-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.250 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c038eff-8e, col_values=(('external_ids', {'iface-id': '1c038eff-8eff-4626-9560-fc2342c80f86', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:b5:2a', 'vm-uuid': 'bbb79b19-b4e9-4b82-86a3-f44ba87a2877'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.252 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:07 np0005603435 NetworkManager[49097]: <info>  [1769834767.2532] manager: (tap1c038eff-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.255 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.257 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.259 239942 INFO os_vif [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:b5:2a,bridge_name='br-int',has_traffic_filtering=True,id=1c038eff-8eff-4626-9560-fc2342c80f86,network=Network(c68aa38c-df33-4336-9b66-c410f7d93cb3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c038eff-8e')#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.592 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.592 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.593 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] No VIF found with MAC fa:16:3e:62:b5:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.594 239942 INFO nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Using config drive#033[00m
Jan 30 23:46:07 np0005603435 nova_compute[239938]: 2026-01-31 04:46:07.625 239942 DEBUG nova.storage.rbd_utils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] rbd image bbb79b19-b4e9-4b82-86a3-f44ba87a2877_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:46:08 np0005603435 nova_compute[239938]: 2026-01-31 04:46:08.587 239942 DEBUG oslo_concurrency.processutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:08 np0005603435 nova_compute[239938]: 2026-01-31 04:46:08.920 239942 INFO nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Creating config drive at /var/lib/nova/instances/bbb79b19-b4e9-4b82-86a3-f44ba87a2877/disk.config#033[00m
Jan 30 23:46:08 np0005603435 nova_compute[239938]: 2026-01-31 04:46:08.924 239942 DEBUG oslo_concurrency.processutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bbb79b19-b4e9-4b82-86a3-f44ba87a2877/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpicig9tga execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.037 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.040 239942 DEBUG oslo_concurrency.processutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bbb79b19-b4e9-4b82-86a3-f44ba87a2877/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpicig9tga" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.107 239942 DEBUG nova.storage.rbd_utils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] rbd image bbb79b19-b4e9-4b82-86a3-f44ba87a2877_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.111 239942 DEBUG oslo_concurrency.processutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bbb79b19-b4e9-4b82-86a3-f44ba87a2877/disk.config bbb79b19-b4e9-4b82-86a3-f44ba87a2877_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:46:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1886955559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.132 239942 DEBUG nova.network.neutron [req-a20f1b37-fe4b-41b7-a01d-024b27f74657 req-cae42fe5-abb0-40c2-95e5-2922cbf6c363 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Updated VIF entry in instance network info cache for port 1c038eff-8eff-4626-9560-fc2342c80f86. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.133 239942 DEBUG nova.network.neutron [req-a20f1b37-fe4b-41b7-a01d-024b27f74657 req-cae42fe5-abb0-40c2-95e5-2922cbf6c363 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Updating instance_info_cache with network_info: [{"id": "1c038eff-8eff-4626-9560-fc2342c80f86", "address": "fa:16:3e:62:b5:2a", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c038eff-8e", "ovs_interfaceid": "1c038eff-8eff-4626-9560-fc2342c80f86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.147 239942 DEBUG oslo_concurrency.processutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.154 239942 DEBUG nova.compute.provider_tree [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:46:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.179 239942 DEBUG oslo_concurrency.lockutils [req-a20f1b37-fe4b-41b7-a01d-024b27f74657 req-cae42fe5-abb0-40c2-95e5-2922cbf6c363 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-bbb79b19-b4e9-4b82-86a3-f44ba87a2877" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.181 239942 DEBUG nova.scheduler.client.report [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.285 239942 DEBUG oslo_concurrency.lockutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.286 239942 DEBUG nova.compute.manager [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.382 239942 DEBUG nova.compute.manager [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.382 239942 DEBUG nova.network.neutron [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.574 239942 INFO nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.659 239942 DEBUG nova.compute.manager [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.774 239942 DEBUG nova.compute.manager [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.776 239942 DEBUG nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.777 239942 INFO nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Creating image(s)#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.805 239942 DEBUG nova.storage.rbd_utils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] rbd image 0b2b7bfd-3a9d-431e-911f-92e2084191c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.837 239942 DEBUG nova.storage.rbd_utils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] rbd image 0b2b7bfd-3a9d-431e-911f-92e2084191c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.869 239942 DEBUG nova.storage.rbd_utils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] rbd image 0b2b7bfd-3a9d-431e-911f-92e2084191c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.873 239942 DEBUG oslo_concurrency.processutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.923 239942 DEBUG oslo_concurrency.processutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.925 239942 DEBUG oslo_concurrency.lockutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Acquiring lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.926 239942 DEBUG oslo_concurrency.lockutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.926 239942 DEBUG oslo_concurrency.lockutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.961 239942 DEBUG nova.storage.rbd_utils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] rbd image 0b2b7bfd-3a9d-431e-911f-92e2084191c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:46:09 np0005603435 nova_compute[239938]: 2026-01-31 04:46:09.966 239942 DEBUG oslo_concurrency.processutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 0b2b7bfd-3a9d-431e-911f-92e2084191c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.095 239942 DEBUG nova.network.neutron [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.096 239942 DEBUG nova.compute.manager [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.234 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.235 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.236 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.236 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.264 239942 DEBUG oslo_concurrency.processutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bbb79b19-b4e9-4b82-86a3-f44ba87a2877/disk.config bbb79b19-b4e9-4b82-86a3-f44ba87a2877_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.264 239942 INFO nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Deleting local config drive /var/lib/nova/instances/bbb79b19-b4e9-4b82-86a3-f44ba87a2877/disk.config because it was imported into RBD.#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.293 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.294 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.294 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.295 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:46:10 np0005603435 kernel: tap1c038eff-8e: entered promiscuous mode
Jan 30 23:46:10 np0005603435 ovn_controller[145670]: 2026-01-31T04:46:10Z|00043|binding|INFO|Claiming lport 1c038eff-8eff-4626-9560-fc2342c80f86 for this chassis.
Jan 30 23:46:10 np0005603435 ovn_controller[145670]: 2026-01-31T04:46:10Z|00044|binding|INFO|1c038eff-8eff-4626-9560-fc2342c80f86: Claiming fa:16:3e:62:b5:2a 10.100.0.5
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.325 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:b5:2a 10.100.0.5'], port_security=['fa:16:3e:62:b5:2a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'bbb79b19-b4e9-4b82-86a3-f44ba87a2877', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c68aa38c-df33-4336-9b66-c410f7d93cb3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3bdc8fbcac3b419ca374be1c490a20e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f73757ab-8ff2-4654-b537-c05855ab04c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e75c9d70-34ab-45c9-8a82-90b4b0f4bff4, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=1c038eff-8eff-4626-9560-fc2342c80f86) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.328 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 1c038eff-8eff-4626-9560-fc2342c80f86 in datapath c68aa38c-df33-4336-9b66-c410f7d93cb3 bound to our chassis#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.330 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c68aa38c-df33-4336-9b66-c410f7d93cb3#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.345 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:10 np0005603435 NetworkManager[49097]: <info>  [1769834770.3475] manager: (tap1c038eff-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.355 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[68561ead-4df0-4b0a-bd64-044d5d1e4c49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.356 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc68aa38c-d1 in ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.359 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc68aa38c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.359 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[34fcd830-06f5-4b36-a90c-2680716390bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:10 np0005603435 ovn_controller[145670]: 2026-01-31T04:46:10Z|00045|binding|INFO|Setting lport 1c038eff-8eff-4626-9560-fc2342c80f86 ovn-installed in OVS
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.360 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[bf970b22-b0ce-4705-8176-ddeb99dbe7ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:10 np0005603435 ovn_controller[145670]: 2026-01-31T04:46:10Z|00046|binding|INFO|Setting lport 1c038eff-8eff-4626-9560-fc2342c80f86 up in Southbound
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.363 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.366 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:10 np0005603435 systemd-udevd[249230]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.369 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[670cdc0b-5a7e-421f-a410-f51f3403d6b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:10 np0005603435 systemd-machined[208030]: New machine qemu-3-instance-00000003.
Jan 30 23:46:10 np0005603435 NetworkManager[49097]: <info>  [1769834770.3793] device (tap1c038eff-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:46:10 np0005603435 NetworkManager[49097]: <info>  [1769834770.3802] device (tap1c038eff-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:46:10 np0005603435 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.397 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe159d8-26d0-46ac-98f0-2cbb086e02e9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.424 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[4c328446-ef46-4d66-a4e6-7543d365a211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:10 np0005603435 NetworkManager[49097]: <info>  [1769834770.4314] manager: (tapc68aa38c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.430 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[16725d59-598d-4378-a889-bc936cbd5e9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.459 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[64677d74-e3db-423f-a95c-a2fdbd222147]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.461 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[55a62b90-cea4-4b25-a2b5-c819b29f4347]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:10 np0005603435 NetworkManager[49097]: <info>  [1769834770.4831] device (tapc68aa38c-d0): carrier: link connected
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.487 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[b5277f35-e1e1-4d4f-9087-2d549c1a92a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.504 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2261b560-63ea-4caf-83a0-8ca5c54ba76c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc68aa38c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:e3:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 382488, 'reachable_time': 27906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249263, 'error': None, 'target': 'ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.517 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc3a350-282c-4c93-844b-f8180460aa18]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:e350'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 382488, 'tstamp': 382488}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249264, 'error': None, 'target': 'ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.531 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[5bdcd5f8-0552-49cd-9ddc-cac3ada86081]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc68aa38c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:e3:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 382488, 'reachable_time': 27906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249265, 'error': None, 'target': 'ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.556 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4a03149c-d435-4f9d-a0e8-638ac5fe2ad1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.621 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[11cce5c7-6e7e-4fe6-88a2-494111e8de3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.627 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc68aa38c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.628 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.628 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc68aa38c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.630 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:10 np0005603435 NetworkManager[49097]: <info>  [1769834770.6311] manager: (tapc68aa38c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Jan 30 23:46:10 np0005603435 kernel: tapc68aa38c-d0: entered promiscuous mode
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.632 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.635 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc68aa38c-d0, col_values=(('external_ids', {'iface-id': 'e4623bae-4ba2-4934-a8d4-cf715fe5be3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.637 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:10 np0005603435 ovn_controller[145670]: 2026-01-31T04:46:10Z|00047|binding|INFO|Releasing lport e4623bae-4ba2-4934-a8d4-cf715fe5be3c from this chassis (sb_readonly=0)
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.646 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.647 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.648 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c68aa38c-df33-4336-9b66-c410f7d93cb3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c68aa38c-df33-4336-9b66-c410f7d93cb3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.650 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[cd07eb9c-fe7d-4e87-85e4-85b70afee48d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.650 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-c68aa38c-df33-4336-9b66-c410f7d93cb3
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/c68aa38c-df33-4336-9b66-c410f7d93cb3.pid.haproxy
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID c68aa38c-df33-4336-9b66-c410f7d93cb3
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:46:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:10.651 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3', 'env', 'PROCESS_TAG=haproxy-c68aa38c-df33-4336-9b66-c410f7d93cb3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c68aa38c-df33-4336-9b66-c410f7d93cb3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.777 239942 DEBUG nova.compute.manager [req-534821a4-691d-48df-bb8f-550949679fe3 req-1b6d1a3a-f45e-4f08-8c90-9dcc5a6742af c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Received event network-vif-plugged-1c038eff-8eff-4626-9560-fc2342c80f86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.778 239942 DEBUG oslo_concurrency.lockutils [req-534821a4-691d-48df-bb8f-550949679fe3 req-1b6d1a3a-f45e-4f08-8c90-9dcc5a6742af c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.778 239942 DEBUG oslo_concurrency.lockutils [req-534821a4-691d-48df-bb8f-550949679fe3 req-1b6d1a3a-f45e-4f08-8c90-9dcc5a6742af c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.778 239942 DEBUG oslo_concurrency.lockutils [req-534821a4-691d-48df-bb8f-550949679fe3 req-1b6d1a3a-f45e-4f08-8c90-9dcc5a6742af c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.778 239942 DEBUG nova.compute.manager [req-534821a4-691d-48df-bb8f-550949679fe3 req-1b6d1a3a-f45e-4f08-8c90-9dcc5a6742af c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Processing event network-vif-plugged-1c038eff-8eff-4626-9560-fc2342c80f86 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:46:10 np0005603435 nova_compute[239938]: 2026-01-31 04:46:10.893 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.007 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834771.0070407, bbb79b19-b4e9-4b82-86a3-f44ba87a2877 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.008 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] VM Started (Lifecycle Event)#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.010 239942 DEBUG nova.compute.manager [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.015 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.019 239942 INFO nova.virt.libvirt.driver [-] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Instance spawned successfully.#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.019 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:46:11 np0005603435 podman[249342]: 2026-01-31 04:46:11.038318081 +0000 UTC m=+0.040324688 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:46:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:46:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.228 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.229 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.229 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.230 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.231 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.249 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.255 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.256 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.257 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.258 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.259 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.260 239942 DEBUG nova.virt.libvirt.driver [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.267 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.347 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.347 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834771.007322, bbb79b19-b4e9-4b82-86a3-f44ba87a2877 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.347 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.403 239942 INFO nova.compute.manager [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Took 9.34 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.404 239942 DEBUG nova.compute.manager [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.421 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.425 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834771.0142868, bbb79b19-b4e9-4b82-86a3-f44ba87a2877 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.425 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.537 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.541 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.612 239942 INFO nova.compute.manager [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Took 11.26 seconds to build instance.#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.691 239942 DEBUG oslo_concurrency.lockutils [None req-8b859e0c-50ac-41aa-927a-894f189c23b3 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:46:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:46:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4121998970' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.767 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:46:11 np0005603435 podman[249342]: 2026-01-31 04:46:11.889940518 +0000 UTC m=+0.891947085 container create 73868143acccbbb90e3c99c5befe399058636410d79aa331e8c1f12b4c3e69f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.964 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:46:11 np0005603435 nova_compute[239938]: 2026-01-31 04:46:11.966 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:46:12 np0005603435 systemd[1]: Started libpod-conmon-73868143acccbbb90e3c99c5befe399058636410d79aa331e8c1f12b4c3e69f8.scope.
Jan 30 23:46:12 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:46:12 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b14a747677299ec93e693235a0accda250455f0a635a6b7c2c0820287739239/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:46:12 np0005603435 podman[249342]: 2026-01-31 04:46:12.146095926 +0000 UTC m=+1.148102453 container init 73868143acccbbb90e3c99c5befe399058636410d79aa331e8c1f12b4c3e69f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:46:12 np0005603435 podman[249342]: 2026-01-31 04:46:12.152133152 +0000 UTC m=+1.154139669 container start 73868143acccbbb90e3c99c5befe399058636410d79aa331e8c1f12b4c3e69f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 30 23:46:12 np0005603435 neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3[249380]: [NOTICE]   (249384) : New worker (249386) forked
Jan 30 23:46:12 np0005603435 neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3[249380]: [NOTICE]   (249384) : Loading success.
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.177 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.178 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4549MB free_disk=59.96736532822251GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.178 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.179 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.200 239942 DEBUG oslo_concurrency.processutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 0b2b7bfd-3a9d-431e-911f-92e2084191c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.253 239942 DEBUG nova.storage.rbd_utils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] resizing rbd image 0b2b7bfd-3a9d-431e-911f-92e2084191c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.322 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.379 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance bbb79b19-b4e9-4b82-86a3-f44ba87a2877 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.380 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance 0b2b7bfd-3a9d-431e-911f-92e2084191c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.380 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.381 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.439 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.491 239942 DEBUG nova.objects.instance [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lazy-loading 'migration_context' on Instance uuid 0b2b7bfd-3a9d-431e-911f-92e2084191c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.525 239942 DEBUG nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.526 239942 DEBUG nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Ensure instance console log exists: /var/lib/nova/instances/0b2b7bfd-3a9d-431e-911f-92e2084191c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.527 239942 DEBUG oslo_concurrency.lockutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.527 239942 DEBUG oslo_concurrency.lockutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.528 239942 DEBUG oslo_concurrency.lockutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.530 239942 DEBUG nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'guest_format': None, 'image_id': 'bf004ad8-fb70-4caa-9170-9f02e22d687d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.536 239942 WARNING nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.542 239942 DEBUG nova.virt.libvirt.host [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.543 239942 DEBUG nova.virt.libvirt.host [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.547 239942 DEBUG nova.virt.libvirt.host [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.547 239942 DEBUG nova.virt.libvirt.host [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.548 239942 DEBUG nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.549 239942 DEBUG nova.virt.hardware [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.550 239942 DEBUG nova.virt.hardware [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.551 239942 DEBUG nova.virt.hardware [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.551 239942 DEBUG nova.virt.hardware [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.552 239942 DEBUG nova.virt.hardware [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.552 239942 DEBUG nova.virt.hardware [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.553 239942 DEBUG nova.virt.hardware [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.553 239942 DEBUG nova.virt.hardware [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.554 239942 DEBUG nova.virt.hardware [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.555 239942 DEBUG nova.virt.hardware [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.555 239942 DEBUG nova.virt.hardware [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:46:12 np0005603435 nova_compute[239938]: 2026-01-31 04:46:12.560 239942 DEBUG oslo_concurrency.processutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:46:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2086835084' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:46:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:46:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2086835084' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:46:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:46:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2096943160' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.001 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.007 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.054 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:46:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:46:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2406658906' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.132 239942 DEBUG oslo_concurrency.processutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.161 239942 DEBUG nova.storage.rbd_utils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] rbd image 0b2b7bfd-3a9d-431e-911f-92e2084191c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.166 239942 DEBUG oslo_concurrency.processutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 103 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 869 KiB/s rd, 2.2 MiB/s wr, 113 op/s
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.252 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.254 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.255 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.290 239942 DEBUG nova.compute.manager [req-2c8001de-6385-4e1d-8ce0-63b388834a21 req-cf9f4a23-4418-43a3-b186-a867a80caf52 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Received event network-vif-plugged-1c038eff-8eff-4626-9560-fc2342c80f86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.291 239942 DEBUG oslo_concurrency.lockutils [req-2c8001de-6385-4e1d-8ce0-63b388834a21 req-cf9f4a23-4418-43a3-b186-a867a80caf52 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.291 239942 DEBUG oslo_concurrency.lockutils [req-2c8001de-6385-4e1d-8ce0-63b388834a21 req-cf9f4a23-4418-43a3-b186-a867a80caf52 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.291 239942 DEBUG oslo_concurrency.lockutils [req-2c8001de-6385-4e1d-8ce0-63b388834a21 req-cf9f4a23-4418-43a3-b186-a867a80caf52 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.291 239942 DEBUG nova.compute.manager [req-2c8001de-6385-4e1d-8ce0-63b388834a21 req-cf9f4a23-4418-43a3-b186-a867a80caf52 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] No waiting events found dispatching network-vif-plugged-1c038eff-8eff-4626-9560-fc2342c80f86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.292 239942 WARNING nova.compute.manager [req-2c8001de-6385-4e1d-8ce0-63b388834a21 req-cf9f4a23-4418-43a3-b186-a867a80caf52 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Received unexpected event network-vif-plugged-1c038eff-8eff-4626-9560-fc2342c80f86 for instance with vm_state active and task_state None.#033[00m
Jan 30 23:46:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:46:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3275800699' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.707 239942 DEBUG oslo_concurrency.processutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.709 239942 DEBUG nova.objects.instance [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0b2b7bfd-3a9d-431e-911f-92e2084191c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:46:13 np0005603435 nova_compute[239938]: 2026-01-31 04:46:13.821 239942 DEBUG nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  <uuid>0b2b7bfd-3a9d-431e-911f-92e2084191c5</uuid>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  <name>instance-00000004</name>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <nova:name>tempest-VolumesNegativeTest-instance-586831511</nova:name>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:46:12</nova:creationTime>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:46:13 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:        <nova:user uuid="a52aa172a0ce4a7a9cafdbcfc941a80b">tempest-VolumesNegativeTest-1972900211-project-member</nova:user>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:        <nova:project uuid="d631da547e324145986193f504e136f8">tempest-VolumesNegativeTest-1972900211</nova:project>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <nova:root type="image" uuid="bf004ad8-fb70-4caa-9170-9f02e22d687d"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <nova:ports/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <entry name="serial">0b2b7bfd-3a9d-431e-911f-92e2084191c5</entry>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <entry name="uuid">0b2b7bfd-3a9d-431e-911f-92e2084191c5</entry>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/0b2b7bfd-3a9d-431e-911f-92e2084191c5_disk">
Jan 30 23:46:13 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:46:13 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/0b2b7bfd-3a9d-431e-911f-92e2084191c5_disk.config">
Jan 30 23:46:13 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:46:13 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/0b2b7bfd-3a9d-431e-911f-92e2084191c5/console.log" append="off"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:46:13 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:46:13 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:46:13 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:46:13 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.040 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:14 np0005603435 podman[249551]: 2026-01-31 04:46:14.123377183 +0000 UTC m=+0.089818118 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.272 239942 DEBUG nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.273 239942 DEBUG nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.274 239942 INFO nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Using config drive#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.310 239942 DEBUG nova.storage.rbd_utils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] rbd image 0b2b7bfd-3a9d-431e-911f-92e2084191c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.347 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.524 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.525 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.526 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.527 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.561 239942 DEBUG oslo_concurrency.lockutils [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.564 239942 DEBUG oslo_concurrency.lockutils [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.564 239942 DEBUG oslo_concurrency.lockutils [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.565 239942 DEBUG oslo_concurrency.lockutils [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.565 239942 DEBUG oslo_concurrency.lockutils [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.568 239942 INFO nova.compute.manager [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Terminating instance#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.570 239942 DEBUG nova.compute.manager [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.584 239942 INFO nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Creating config drive at /var/lib/nova/instances/0b2b7bfd-3a9d-431e-911f-92e2084191c5/disk.config#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.591 239942 DEBUG oslo_concurrency.processutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0b2b7bfd-3a9d-431e-911f-92e2084191c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp2jkeq0k0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:14 np0005603435 kernel: tap1c038eff-8e (unregistering): left promiscuous mode
Jan 30 23:46:14 np0005603435 NetworkManager[49097]: <info>  [1769834774.6445] device (tap1c038eff-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:46:14 np0005603435 ovn_controller[145670]: 2026-01-31T04:46:14Z|00048|binding|INFO|Releasing lport 1c038eff-8eff-4626-9560-fc2342c80f86 from this chassis (sb_readonly=0)
Jan 30 23:46:14 np0005603435 ovn_controller[145670]: 2026-01-31T04:46:14Z|00049|binding|INFO|Setting lport 1c038eff-8eff-4626-9560-fc2342c80f86 down in Southbound
Jan 30 23:46:14 np0005603435 ovn_controller[145670]: 2026-01-31T04:46:14Z|00050|binding|INFO|Removing iface tap1c038eff-8e ovn-installed in OVS
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.650 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.664 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:14 np0005603435 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Jan 30 23:46:14 np0005603435 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 3.960s CPU time.
Jan 30 23:46:14 np0005603435 systemd-machined[208030]: Machine qemu-3-instance-00000003 terminated.
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.715 239942 DEBUG oslo_concurrency.processutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0b2b7bfd-3a9d-431e-911f-92e2084191c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp2jkeq0k0" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.741 239942 DEBUG nova.storage.rbd_utils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] rbd image 0b2b7bfd-3a9d-431e-911f-92e2084191c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.745 239942 DEBUG oslo_concurrency.processutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0b2b7bfd-3a9d-431e-911f-92e2084191c5/disk.config 0b2b7bfd-3a9d-431e-911f-92e2084191c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:14.754 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:b5:2a 10.100.0.5'], port_security=['fa:16:3e:62:b5:2a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'bbb79b19-b4e9-4b82-86a3-f44ba87a2877', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c68aa38c-df33-4336-9b66-c410f7d93cb3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3bdc8fbcac3b419ca374be1c490a20e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f73757ab-8ff2-4654-b537-c05855ab04c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e75c9d70-34ab-45c9-8a82-90b4b0f4bff4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=1c038eff-8eff-4626-9560-fc2342c80f86) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:46:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:14.756 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 1c038eff-8eff-4626-9560-fc2342c80f86 in datapath c68aa38c-df33-4336-9b66-c410f7d93cb3 unbound from our chassis#033[00m
Jan 30 23:46:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:14.758 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c68aa38c-df33-4336-9b66-c410f7d93cb3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:46:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:14.759 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[be82929e-3db7-4570-a4f5-0e17b8801649]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:46:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:14.760 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3 namespace which is not needed anymore#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.803 239942 INFO nova.virt.libvirt.driver [-] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Instance destroyed successfully.#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.804 239942 DEBUG nova.objects.instance [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lazy-loading 'resources' on Instance uuid bbb79b19-b4e9-4b82-86a3-f44ba87a2877 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.853 239942 DEBUG nova.virt.libvirt.vif [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:45:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-804384806',display_name='tempest-VolumesActionsTest-instance-804384806',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-804384806',id=3,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:46:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3bdc8fbcac3b419ca374be1c490a20e5',ramdisk_id='',reservation_id='r-s539x4de',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1818515503',owner_user_name='tempest-VolumesActionsTest-1818515503-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:46:11Z,user_data=None,user_id='a60e5ee062304ce4b921d51a9d0be89f',uuid=bbb79b19-b4e9-4b82-86a3-f44ba87a2877,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c038eff-8eff-4626-9560-fc2342c80f86", "address": "fa:16:3e:62:b5:2a", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c038eff-8e", "ovs_interfaceid": "1c038eff-8eff-4626-9560-fc2342c80f86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.854 239942 DEBUG nova.network.os_vif_util [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Converting VIF {"id": "1c038eff-8eff-4626-9560-fc2342c80f86", "address": "fa:16:3e:62:b5:2a", "network": {"id": "c68aa38c-df33-4336-9b66-c410f7d93cb3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-66445087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bdc8fbcac3b419ca374be1c490a20e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c038eff-8e", "ovs_interfaceid": "1c038eff-8eff-4626-9560-fc2342c80f86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.857 239942 DEBUG nova.network.os_vif_util [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:b5:2a,bridge_name='br-int',has_traffic_filtering=True,id=1c038eff-8eff-4626-9560-fc2342c80f86,network=Network(c68aa38c-df33-4336-9b66-c410f7d93cb3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c038eff-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.859 239942 DEBUG os_vif [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:b5:2a,bridge_name='br-int',has_traffic_filtering=True,id=1c038eff-8eff-4626-9560-fc2342c80f86,network=Network(c68aa38c-df33-4336-9b66-c410f7d93cb3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c038eff-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.861 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.861 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c038eff-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.868 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:46:14 np0005603435 nova_compute[239938]: 2026-01-31 04:46:14.871 239942 INFO os_vif [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:b5:2a,bridge_name='br-int',has_traffic_filtering=True,id=1c038eff-8eff-4626-9560-fc2342c80f86,network=Network(c68aa38c-df33-4336-9b66-c410f7d93cb3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c038eff-8e')#033[00m
Jan 30 23:46:14 np0005603435 neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3[249380]: [NOTICE]   (249384) : haproxy version is 2.8.14-c23fe91
Jan 30 23:46:14 np0005603435 neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3[249380]: [NOTICE]   (249384) : path to executable is /usr/sbin/haproxy
Jan 30 23:46:14 np0005603435 neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3[249380]: [WARNING]  (249384) : Exiting Master process...
Jan 30 23:46:14 np0005603435 neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3[249380]: [WARNING]  (249384) : Exiting Master process...
Jan 30 23:46:14 np0005603435 neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3[249380]: [ALERT]    (249384) : Current worker (249386) exited with code 143 (Terminated)
Jan 30 23:46:14 np0005603435 neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3[249380]: [WARNING]  (249384) : All workers exited. Exiting... (0)
Jan 30 23:46:14 np0005603435 systemd[1]: libpod-73868143acccbbb90e3c99c5befe399058636410d79aa331e8c1f12b4c3e69f8.scope: Deactivated successfully.
Jan 30 23:46:14 np0005603435 podman[249659]: 2026-01-31 04:46:14.985696799 +0000 UTC m=+0.150013655 container died 73868143acccbbb90e3c99c5befe399058636410d79aa331e8c1f12b4c3e69f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:46:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 118 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.0 MiB/s wr, 114 op/s
Jan 30 23:46:15 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73868143acccbbb90e3c99c5befe399058636410d79aa331e8c1f12b4c3e69f8-userdata-shm.mount: Deactivated successfully.
Jan 30 23:46:15 np0005603435 systemd[1]: var-lib-containers-storage-overlay-7b14a747677299ec93e693235a0accda250455f0a635a6b7c2c0820287739239-merged.mount: Deactivated successfully.
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.388 239942 DEBUG nova.compute.manager [req-a720fb77-da38-4683-91d7-bfa0a3446584 req-cf56d0d4-7303-4a12-a807-4bb496d4e560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Received event network-vif-unplugged-1c038eff-8eff-4626-9560-fc2342c80f86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.389 239942 DEBUG oslo_concurrency.lockutils [req-a720fb77-da38-4683-91d7-bfa0a3446584 req-cf56d0d4-7303-4a12-a807-4bb496d4e560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.389 239942 DEBUG oslo_concurrency.lockutils [req-a720fb77-da38-4683-91d7-bfa0a3446584 req-cf56d0d4-7303-4a12-a807-4bb496d4e560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.390 239942 DEBUG oslo_concurrency.lockutils [req-a720fb77-da38-4683-91d7-bfa0a3446584 req-cf56d0d4-7303-4a12-a807-4bb496d4e560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.390 239942 DEBUG nova.compute.manager [req-a720fb77-da38-4683-91d7-bfa0a3446584 req-cf56d0d4-7303-4a12-a807-4bb496d4e560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] No waiting events found dispatching network-vif-unplugged-1c038eff-8eff-4626-9560-fc2342c80f86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.390 239942 DEBUG nova.compute.manager [req-a720fb77-da38-4683-91d7-bfa0a3446584 req-cf56d0d4-7303-4a12-a807-4bb496d4e560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Received event network-vif-unplugged-1c038eff-8eff-4626-9560-fc2342c80f86 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.391 239942 DEBUG nova.compute.manager [req-a720fb77-da38-4683-91d7-bfa0a3446584 req-cf56d0d4-7303-4a12-a807-4bb496d4e560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Received event network-vif-plugged-1c038eff-8eff-4626-9560-fc2342c80f86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.391 239942 DEBUG oslo_concurrency.lockutils [req-a720fb77-da38-4683-91d7-bfa0a3446584 req-cf56d0d4-7303-4a12-a807-4bb496d4e560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.391 239942 DEBUG oslo_concurrency.lockutils [req-a720fb77-da38-4683-91d7-bfa0a3446584 req-cf56d0d4-7303-4a12-a807-4bb496d4e560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.392 239942 DEBUG oslo_concurrency.lockutils [req-a720fb77-da38-4683-91d7-bfa0a3446584 req-cf56d0d4-7303-4a12-a807-4bb496d4e560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.392 239942 DEBUG nova.compute.manager [req-a720fb77-da38-4683-91d7-bfa0a3446584 req-cf56d0d4-7303-4a12-a807-4bb496d4e560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] No waiting events found dispatching network-vif-plugged-1c038eff-8eff-4626-9560-fc2342c80f86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.392 239942 WARNING nova.compute.manager [req-a720fb77-da38-4683-91d7-bfa0a3446584 req-cf56d0d4-7303-4a12-a807-4bb496d4e560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Received unexpected event network-vif-plugged-1c038eff-8eff-4626-9560-fc2342c80f86 for instance with vm_state active and task_state deleting.#033[00m
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.459 239942 DEBUG oslo_concurrency.processutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0b2b7bfd-3a9d-431e-911f-92e2084191c5/disk.config 0b2b7bfd-3a9d-431e-911f-92e2084191c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.714s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.460 239942 INFO nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Deleting local config drive /var/lib/nova/instances/0b2b7bfd-3a9d-431e-911f-92e2084191c5/disk.config because it was imported into RBD.#033[00m
Jan 30 23:46:15 np0005603435 systemd-machined[208030]: New machine qemu-4-instance-00000004.
Jan 30 23:46:15 np0005603435 podman[249659]: 2026-01-31 04:46:15.522817626 +0000 UTC m=+0.687134462 container cleanup 73868143acccbbb90e3c99c5befe399058636410d79aa331e8c1f12b4c3e69f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:46:15 np0005603435 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Jan 30 23:46:15 np0005603435 systemd[1]: libpod-conmon-73868143acccbbb90e3c99c5befe399058636410d79aa331e8c1f12b4c3e69f8.scope: Deactivated successfully.
Jan 30 23:46:15 np0005603435 podman[249717]: 2026-01-31 04:46:15.569110588 +0000 UTC m=+0.080798779 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:46:15 np0005603435 podman[249738]: 2026-01-31 04:46:15.908346859 +0000 UTC m=+0.367723222 container remove 73868143acccbbb90e3c99c5befe399058636410d79aa331e8c1f12b4c3e69f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:46:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:15.916 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6121d2ac-ca74-402d-8582-b29f6e874ab0]: (4, ('Sat Jan 31 04:46:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3 (73868143acccbbb90e3c99c5befe399058636410d79aa331e8c1f12b4c3e69f8)\n73868143acccbbb90e3c99c5befe399058636410d79aa331e8c1f12b4c3e69f8\nSat Jan 31 04:46:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3 (73868143acccbbb90e3c99c5befe399058636410d79aa331e8c1f12b4c3e69f8)\n73868143acccbbb90e3c99c5befe399058636410d79aa331e8c1f12b4c3e69f8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:46:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:15.918 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[7643c954-9307-4d0c-91b8-44eca7d7da3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:46:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:15.921 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc68aa38c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.923 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:46:15 np0005603435 kernel: tapc68aa38c-d0: left promiscuous mode
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.927 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:46:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:15.929 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f150bbc2-eb49-4935-a135-398637ceb5a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:46:15 np0005603435 nova_compute[239938]: 2026-01-31 04:46:15.934 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:46:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:15.952 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[51b48f24-f78d-48ed-a53e-bde48e36bd34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:46:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:15.954 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[eb40caae-66ad-4c56-86a9-b2dbdb54ea53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:46:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:15.965 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c94fc360-eda8-4176-af34-84327b804a06]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 382482, 'reachable_time': 32408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249786, 'error': None, 'target': 'ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:46:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:15.968 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c68aa38c-df33-4336-9b66-c410f7d93cb3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 30 23:46:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:15.968 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[6af8ad0f-84fd-4587-bb27-61cbfd0cd8d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:46:15 np0005603435 systemd[1]: run-netns-ovnmeta\x2dc68aa38c\x2ddf33\x2d4336\x2d9b66\x2dc410f7d93cb3.mount: Deactivated successfully.
Jan 30 23:46:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.249 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834776.2489429, 0b2b7bfd-3a9d-431e-911f-92e2084191c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.249 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] VM Resumed (Lifecycle Event)
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.251 239942 DEBUG nova.compute.manager [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.252 239942 DEBUG nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.256 239942 INFO nova.virt.libvirt.driver [-] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Instance spawned successfully.
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.256 239942 DEBUG nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.337 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.342 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.350 239942 DEBUG nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.350 239942 DEBUG nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.351 239942 DEBUG nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.351 239942 DEBUG nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.351 239942 DEBUG nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.352 239942 DEBUG nova.virt.libvirt.driver [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.408 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.409 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834776.2506528, 0b2b7bfd-3a9d-431e-911f-92e2084191c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.409 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] VM Started (Lifecycle Event)
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.492 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.494 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.504 239942 INFO nova.compute.manager [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Took 6.73 seconds to spawn the instance on the hypervisor.
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.504 239942 DEBUG nova.compute.manager [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.610 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.704 239942 INFO nova.compute.manager [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Took 10.21 seconds to build instance.
Jan 30 23:46:16 np0005603435 nova_compute[239938]: 2026-01-31 04:46:16.921 239942 DEBUG oslo_concurrency.lockutils [None req-a688d3cf-3daf-46c4-9bff-d2abf80237a3 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lock "0b2b7bfd-3a9d-431e-911f-92e2084191c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006866471845206163 of space, bias 1.0, pg target 0.2059941553561849 quantized to 32 (current 32)
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.649293114960935e-06 of space, bias 1.0, pg target 0.0010947879344882804 quantized to 32 (current 32)
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659700355547174 of space, bias 1.0, pg target 0.19979101066641522 quantized to 32 (current 32)
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.942317946003604e-07 of space, bias 4.0, pg target 0.0009530781535204325 quantized to 16 (current 16)
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:46:17 np0005603435 nova_compute[239938]: 2026-01-31 04:46:17.158 239942 INFO nova.virt.libvirt.driver [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Deleting instance files /var/lib/nova/instances/bbb79b19-b4e9-4b82-86a3-f44ba87a2877_del
Jan 30 23:46:17 np0005603435 nova_compute[239938]: 2026-01-31 04:46:17.159 239942 INFO nova.virt.libvirt.driver [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Deletion of /var/lib/nova/instances/bbb79b19-b4e9-4b82-86a3-f44ba87a2877_del complete
Jan 30 23:46:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 112 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 174 op/s
Jan 30 23:46:17 np0005603435 nova_compute[239938]: 2026-01-31 04:46:17.288 239942 INFO nova.compute.manager [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Took 2.72 seconds to destroy the instance on the hypervisor.
Jan 30 23:46:17 np0005603435 nova_compute[239938]: 2026-01-31 04:46:17.289 239942 DEBUG oslo.service.loopingcall [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 30 23:46:17 np0005603435 nova_compute[239938]: 2026-01-31 04:46:17.289 239942 DEBUG nova.compute.manager [-] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 30 23:46:17 np0005603435 nova_compute[239938]: 2026-01-31 04:46:17.290 239942 DEBUG nova.network.neutron [-] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 30 23:46:17 np0005603435 nova_compute[239938]: 2026-01-31 04:46:17.798 239942 DEBUG oslo_concurrency.lockutils [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Acquiring lock "0b2b7bfd-3a9d-431e-911f-92e2084191c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:46:17 np0005603435 nova_compute[239938]: 2026-01-31 04:46:17.799 239942 DEBUG oslo_concurrency.lockutils [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lock "0b2b7bfd-3a9d-431e-911f-92e2084191c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:46:17 np0005603435 nova_compute[239938]: 2026-01-31 04:46:17.799 239942 DEBUG oslo_concurrency.lockutils [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Acquiring lock "0b2b7bfd-3a9d-431e-911f-92e2084191c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:46:17 np0005603435 nova_compute[239938]: 2026-01-31 04:46:17.800 239942 DEBUG oslo_concurrency.lockutils [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lock "0b2b7bfd-3a9d-431e-911f-92e2084191c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:46:17 np0005603435 nova_compute[239938]: 2026-01-31 04:46:17.800 239942 DEBUG oslo_concurrency.lockutils [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lock "0b2b7bfd-3a9d-431e-911f-92e2084191c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:46:17 np0005603435 nova_compute[239938]: 2026-01-31 04:46:17.802 239942 INFO nova.compute.manager [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Terminating instance
Jan 30 23:46:17 np0005603435 nova_compute[239938]: 2026-01-31 04:46:17.804 239942 DEBUG oslo_concurrency.lockutils [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Acquiring lock "refresh_cache-0b2b7bfd-3a9d-431e-911f-92e2084191c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 30 23:46:17 np0005603435 nova_compute[239938]: 2026-01-31 04:46:17.804 239942 DEBUG oslo_concurrency.lockutils [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Acquired lock "refresh_cache-0b2b7bfd-3a9d-431e-911f-92e2084191c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 30 23:46:17 np0005603435 nova_compute[239938]: 2026-01-31 04:46:17.805 239942 DEBUG nova.network.neutron [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 30 23:46:18 np0005603435 nova_compute[239938]: 2026-01-31 04:46:18.112 239942 DEBUG nova.network.neutron [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 30 23:46:18 np0005603435 nova_compute[239938]: 2026-01-31 04:46:18.122 239942 DEBUG nova.network.neutron [-] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 30 23:46:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:46:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3432690646' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:46:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:46:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3432690646' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:46:18 np0005603435 nova_compute[239938]: 2026-01-31 04:46:18.357 239942 INFO nova.compute.manager [-] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Took 1.07 seconds to deallocate network for instance.
Jan 30 23:46:18 np0005603435 nova_compute[239938]: 2026-01-31 04:46:18.369 239942 DEBUG nova.compute.manager [req-47183a6e-2778-49e3-8082-78e5bed6ed82 req-c6312fad-caa9-4046-9172-920634faec99 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Received event network-vif-deleted-1c038eff-8eff-4626-9560-fc2342c80f86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 30 23:46:18 np0005603435 nova_compute[239938]: 2026-01-31 04:46:18.370 239942 INFO nova.compute.manager [req-47183a6e-2778-49e3-8082-78e5bed6ed82 req-c6312fad-caa9-4046-9172-920634faec99 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Neutron deleted interface 1c038eff-8eff-4626-9560-fc2342c80f86; detaching it from the instance and deleting it from the info cache
Jan 30 23:46:18 np0005603435 nova_compute[239938]: 2026-01-31 04:46:18.370 239942 DEBUG nova.network.neutron [req-47183a6e-2778-49e3-8082-78e5bed6ed82 req-c6312fad-caa9-4046-9172-920634faec99 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 30 23:46:18 np0005603435 nova_compute[239938]: 2026-01-31 04:46:18.500 239942 DEBUG nova.compute.manager [req-47183a6e-2778-49e3-8082-78e5bed6ed82 req-c6312fad-caa9-4046-9172-920634faec99 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Detach interface failed, port_id=1c038eff-8eff-4626-9560-fc2342c80f86, reason: Instance bbb79b19-b4e9-4b82-86a3-f44ba87a2877 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 30 23:46:18 np0005603435 nova_compute[239938]: 2026-01-31 04:46:18.560 239942 DEBUG oslo_concurrency.lockutils [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:18 np0005603435 nova_compute[239938]: 2026-01-31 04:46:18.561 239942 DEBUG oslo_concurrency.lockutils [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:19 np0005603435 nova_compute[239938]: 2026-01-31 04:46:19.042 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:19 np0005603435 nova_compute[239938]: 2026-01-31 04:46:19.111 239942 DEBUG nova.network.neutron [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:46:19 np0005603435 nova_compute[239938]: 2026-01-31 04:46:19.147 239942 DEBUG oslo_concurrency.processutils [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:46:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 93 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.5 MiB/s wr, 169 op/s
Jan 30 23:46:19 np0005603435 nova_compute[239938]: 2026-01-31 04:46:19.187 239942 DEBUG oslo_concurrency.lockutils [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Releasing lock "refresh_cache-0b2b7bfd-3a9d-431e-911f-92e2084191c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 30 23:46:19 np0005603435 nova_compute[239938]: 2026-01-31 04:46:19.189 239942 DEBUG nova.compute.manager [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 30 23:46:19 np0005603435 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Jan 30 23:46:19 np0005603435 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 3.450s CPU time.
Jan 30 23:46:19 np0005603435 systemd-machined[208030]: Machine qemu-4-instance-00000004 terminated.
Jan 30 23:46:19 np0005603435 nova_compute[239938]: 2026-01-31 04:46:19.610 239942 INFO nova.virt.libvirt.driver [-] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Instance destroyed successfully.
Jan 30 23:46:19 np0005603435 nova_compute[239938]: 2026-01-31 04:46:19.612 239942 DEBUG nova.objects.instance [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lazy-loading 'resources' on Instance uuid 0b2b7bfd-3a9d-431e-911f-92e2084191c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 30 23:46:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:46:19 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3490011213' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:46:19 np0005603435 nova_compute[239938]: 2026-01-31 04:46:19.782 239942 DEBUG oslo_concurrency.processutils [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:46:19 np0005603435 nova_compute[239938]: 2026-01-31 04:46:19.791 239942 DEBUG nova.compute.provider_tree [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 30 23:46:19 np0005603435 nova_compute[239938]: 2026-01-31 04:46:19.840 239942 DEBUG nova.scheduler.client.report [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 30 23:46:19 np0005603435 nova_compute[239938]: 2026-01-31 04:46:19.864 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:46:20 np0005603435 nova_compute[239938]: 2026-01-31 04:46:20.031 239942 DEBUG oslo_concurrency.lockutils [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:46:20 np0005603435 nova_compute[239938]: 2026-01-31 04:46:20.239 239942 INFO nova.scheduler.client.report [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Deleted allocations for instance bbb79b19-b4e9-4b82-86a3-f44ba87a2877
Jan 30 23:46:20 np0005603435 nova_compute[239938]: 2026-01-31 04:46:20.368 239942 DEBUG oslo_concurrency.lockutils [None req-9439c596-87b3-4a7d-8f9a-e4ab686eca31 a60e5ee062304ce4b921d51a9d0be89f 3bdc8fbcac3b419ca374be1c490a20e5 - - default default] Lock "bbb79b19-b4e9-4b82-86a3-f44ba87a2877" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:46:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:46:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 88 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 196 op/s
Jan 30 23:46:22 np0005603435 nova_compute[239938]: 2026-01-31 04:46:22.096 239942 INFO nova.virt.libvirt.driver [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Deleting instance files /var/lib/nova/instances/0b2b7bfd-3a9d-431e-911f-92e2084191c5_del
Jan 30 23:46:22 np0005603435 nova_compute[239938]: 2026-01-31 04:46:22.097 239942 INFO nova.virt.libvirt.driver [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Deletion of /var/lib/nova/instances/0b2b7bfd-3a9d-431e-911f-92e2084191c5_del complete
Jan 30 23:46:22 np0005603435 nova_compute[239938]: 2026-01-31 04:46:22.230 239942 INFO nova.compute.manager [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Took 3.04 seconds to destroy the instance on the hypervisor.
Jan 30 23:46:22 np0005603435 nova_compute[239938]: 2026-01-31 04:46:22.231 239942 DEBUG oslo.service.loopingcall [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 30 23:46:22 np0005603435 nova_compute[239938]: 2026-01-31 04:46:22.231 239942 DEBUG nova.compute.manager [-] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 30 23:46:22 np0005603435 nova_compute[239938]: 2026-01-31 04:46:22.231 239942 DEBUG nova.network.neutron [-] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 30 23:46:22 np0005603435 nova_compute[239938]: 2026-01-31 04:46:22.377 239942 DEBUG nova.network.neutron [-] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 30 23:46:22 np0005603435 nova_compute[239938]: 2026-01-31 04:46:22.474 239942 DEBUG nova.network.neutron [-] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 30 23:46:22 np0005603435 nova_compute[239938]: 2026-01-31 04:46:22.595 239942 INFO nova.compute.manager [-] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Took 0.36 seconds to deallocate network for instance.
Jan 30 23:46:22 np0005603435 nova_compute[239938]: 2026-01-31 04:46:22.790 239942 DEBUG oslo_concurrency.lockutils [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:46:22 np0005603435 nova_compute[239938]: 2026-01-31 04:46:22.791 239942 DEBUG oslo_concurrency.lockutils [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:46:22 np0005603435 nova_compute[239938]: 2026-01-31 04:46:22.838 239942 DEBUG oslo_concurrency.processutils [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:46:23 np0005603435 nova_compute[239938]: 2026-01-31 04:46:23.123 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:46:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 56 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 244 op/s
Jan 30 23:46:23 np0005603435 nova_compute[239938]: 2026-01-31 04:46:23.196 239942 WARNING nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Jan 30 23:46:23 np0005603435 nova_compute[239938]: 2026-01-31 04:46:23.197 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Triggering sync for uuid 0b2b7bfd-3a9d-431e-911f-92e2084191c5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 30 23:46:23 np0005603435 nova_compute[239938]: 2026-01-31 04:46:23.197 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "0b2b7bfd-3a9d-431e-911f-92e2084191c5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:46:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:46:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1565552773' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:46:23 np0005603435 nova_compute[239938]: 2026-01-31 04:46:23.370 239942 DEBUG oslo_concurrency.processutils [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:46:23 np0005603435 nova_compute[239938]: 2026-01-31 04:46:23.375 239942 DEBUG nova.compute.provider_tree [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 30 23:46:23 np0005603435 nova_compute[239938]: 2026-01-31 04:46:23.507 239942 DEBUG nova.scheduler.client.report [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 30 23:46:23 np0005603435 nova_compute[239938]: 2026-01-31 04:46:23.538 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:46:23 np0005603435 nova_compute[239938]: 2026-01-31 04:46:23.655 239942 DEBUG oslo_concurrency.lockutils [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:46:23 np0005603435 nova_compute[239938]: 2026-01-31 04:46:23.733 239942 INFO nova.scheduler.client.report [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Deleted allocations for instance 0b2b7bfd-3a9d-431e-911f-92e2084191c5
Jan 30 23:46:24 np0005603435 nova_compute[239938]: 2026-01-31 04:46:24.037 239942 DEBUG oslo_concurrency.lockutils [None req-0ba76525-49ae-4445-9095-05e41e5c5b39 a52aa172a0ce4a7a9cafdbcfc941a80b d631da547e324145986193f504e136f8 - - default default] Lock "0b2b7bfd-3a9d-431e-911f-92e2084191c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:46:24 np0005603435 nova_compute[239938]: 2026-01-31 04:46:24.039 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "0b2b7bfd-3a9d-431e-911f-92e2084191c5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:46:24 np0005603435 nova_compute[239938]: 2026-01-31 04:46:24.045 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:46:24 np0005603435 nova_compute[239938]: 2026-01-31 04:46:24.185 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "0b2b7bfd-3a9d-431e-911f-92e2084191c5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:46:24 np0005603435 nova_compute[239938]: 2026-01-31 04:46:24.866 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:46:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Jan 30 23:46:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Jan 30 23:46:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 660 KiB/s wr, 231 op/s
Jan 30 23:46:25 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Jan 30 23:46:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:46:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 49 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 838 KiB/s wr, 167 op/s
Jan 30 23:46:29 np0005603435 nova_compute[239938]: 2026-01-31 04:46:29.094 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:46:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 57 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.6 MiB/s wr, 151 op/s
Jan 30 23:46:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Jan 30 23:46:29 np0005603435 nova_compute[239938]: 2026-01-31 04:46:29.802 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769834774.8009186, bbb79b19-b4e9-4b82-86a3-f44ba87a2877 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:46:29 np0005603435 nova_compute[239938]: 2026-01-31 04:46:29.803 239942 INFO nova.compute.manager [-] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] VM Stopped (Lifecycle Event)
Jan 30 23:46:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Jan 30 23:46:29 np0005603435 nova_compute[239938]: 2026-01-31 04:46:29.868 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:46:29 np0005603435 nova_compute[239938]: 2026-01-31 04:46:29.897 239942 DEBUG nova.compute.manager [None req-67409c6f-8ea1-4349-83e2-267121b2eec8 - - - - - -] [instance: bbb79b19-b4e9-4b82-86a3-f44ba87a2877] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:46:29 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Jan 30 23:46:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:46:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 105 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 8.0 MiB/s wr, 65 op/s
Jan 30 23:46:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:46:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/456688973' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:46:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:46:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/456688973' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:46:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:46:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:46:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:46:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:46:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:46:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:46:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:46:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:46:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:46:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:46:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:46:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:46:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 161 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 15 MiB/s wr, 63 op/s
Jan 30 23:46:33 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:46:33 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:46:33 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:46:33 np0005603435 podman[250022]: 2026-01-31 04:46:33.589446483 +0000 UTC m=+0.034351794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:46:33 np0005603435 podman[250022]: 2026-01-31 04:46:33.900211474 +0000 UTC m=+0.345116805 container create 147df630963d28cc82df062211a84c9a1f16fa6e7edfc40d897f99c5a721d291 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_robinson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:46:34 np0005603435 systemd[1]: Started libpod-conmon-147df630963d28cc82df062211a84c9a1f16fa6e7edfc40d897f99c5a721d291.scope.
Jan 30 23:46:34 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:46:34 np0005603435 nova_compute[239938]: 2026-01-31 04:46:34.097 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:46:34 np0005603435 podman[250022]: 2026-01-31 04:46:34.444813061 +0000 UTC m=+0.889718432 container init 147df630963d28cc82df062211a84c9a1f16fa6e7edfc40d897f99c5a721d291 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_robinson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:46:34 np0005603435 podman[250022]: 2026-01-31 04:46:34.455460769 +0000 UTC m=+0.900366090 container start 147df630963d28cc82df062211a84c9a1f16fa6e7edfc40d897f99c5a721d291 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 30 23:46:34 np0005603435 flamboyant_robinson[250038]: 167 167
Jan 30 23:46:34 np0005603435 systemd[1]: libpod-147df630963d28cc82df062211a84c9a1f16fa6e7edfc40d897f99c5a721d291.scope: Deactivated successfully.
Jan 30 23:46:34 np0005603435 conmon[250038]: conmon 147df630963d28cc82df <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-147df630963d28cc82df062211a84c9a1f16fa6e7edfc40d897f99c5a721d291.scope/container/memory.events
Jan 30 23:46:34 np0005603435 nova_compute[239938]: 2026-01-31 04:46:34.608 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769834779.6061528, 0b2b7bfd-3a9d-431e-911f-92e2084191c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:46:34 np0005603435 nova_compute[239938]: 2026-01-31 04:46:34.609 239942 INFO nova.compute.manager [-] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] VM Stopped (Lifecycle Event)
Jan 30 23:46:34 np0005603435 podman[250022]: 2026-01-31 04:46:34.644993003 +0000 UTC m=+1.089898404 container attach 147df630963d28cc82df062211a84c9a1f16fa6e7edfc40d897f99c5a721d291 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:46:34 np0005603435 podman[250022]: 2026-01-31 04:46:34.646290884 +0000 UTC m=+1.091196215 container died 147df630963d28cc82df062211a84c9a1f16fa6e7edfc40d897f99c5a721d291 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Jan 30 23:46:34 np0005603435 nova_compute[239938]: 2026-01-31 04:46:34.655 239942 DEBUG nova.compute.manager [None req-e7593b37-9352-490b-a93d-b0cddf651cea - - - - - -] [instance: 0b2b7bfd-3a9d-431e-911f-92e2084191c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:46:34 np0005603435 nova_compute[239938]: 2026-01-31 04:46:34.920 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:46:35 np0005603435 systemd[1]: var-lib-containers-storage-overlay-fba294ab284538953dafd44254cdac704656ee64729bd8d8006a6e070e73b509-merged.mount: Deactivated successfully.
Jan 30 23:46:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 201 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 16 MiB/s wr, 62 op/s
Jan 30 23:46:35 np0005603435 podman[250022]: 2026-01-31 04:46:35.687522088 +0000 UTC m=+2.132427409 container remove 147df630963d28cc82df062211a84c9a1f16fa6e7edfc40d897f99c5a721d291 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_robinson, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:46:35 np0005603435 systemd[1]: libpod-conmon-147df630963d28cc82df062211a84c9a1f16fa6e7edfc40d897f99c5a721d291.scope: Deactivated successfully.
Jan 30 23:46:35 np0005603435 podman[250062]: 2026-01-31 04:46:35.864669981 +0000 UTC m=+0.037784566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:46:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Jan 30 23:46:36 np0005603435 podman[250062]: 2026-01-31 04:46:36.128398913 +0000 UTC m=+0.301513448 container create b282c5262ae9bdd9da33a12c0bc22ee51a7752ea353a2c91769ea9697431e31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_galois, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 30 23:46:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Jan 30 23:46:36 np0005603435 systemd[1]: Started libpod-conmon-b282c5262ae9bdd9da33a12c0bc22ee51a7752ea353a2c91769ea9697431e31d.scope.
Jan 30 23:46:36 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:46:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c40000b56cc10c6f4f36a1c3feab4944c80f9227acff0eaefd6a7fb2be5e2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:46:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c40000b56cc10c6f4f36a1c3feab4944c80f9227acff0eaefd6a7fb2be5e2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:46:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c40000b56cc10c6f4f36a1c3feab4944c80f9227acff0eaefd6a7fb2be5e2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:46:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c40000b56cc10c6f4f36a1c3feab4944c80f9227acff0eaefd6a7fb2be5e2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:46:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c40000b56cc10c6f4f36a1c3feab4944c80f9227acff0eaefd6a7fb2be5e2c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:46:36 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Jan 30 23:46:36 np0005603435 podman[250062]: 2026-01-31 04:46:36.759341863 +0000 UTC m=+0.932456378 container init b282c5262ae9bdd9da33a12c0bc22ee51a7752ea353a2c91769ea9697431e31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_galois, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:46:36 np0005603435 podman[250062]: 2026-01-31 04:46:36.76791072 +0000 UTC m=+0.941025255 container start b282c5262ae9bdd9da33a12c0bc22ee51a7752ea353a2c91769ea9697431e31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:46:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:46:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:46:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:46:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:46:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:46:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:46:36 np0005603435 podman[250062]: 2026-01-31 04:46:36.980542973 +0000 UTC m=+1.153657478 container attach b282c5262ae9bdd9da33a12c0bc22ee51a7752ea353a2c91769ea9697431e31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_galois, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:46:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 225 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 21 MiB/s wr, 65 op/s
Jan 30 23:46:37 np0005603435 determined_galois[250078]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:46:37 np0005603435 determined_galois[250078]: --> All data devices are unavailable
Jan 30 23:46:37 np0005603435 systemd[1]: libpod-b282c5262ae9bdd9da33a12c0bc22ee51a7752ea353a2c91769ea9697431e31d.scope: Deactivated successfully.
Jan 30 23:46:37 np0005603435 podman[250098]: 2026-01-31 04:46:37.286148709 +0000 UTC m=+0.023129521 container died b282c5262ae9bdd9da33a12c0bc22ee51a7752ea353a2c91769ea9697431e31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_galois, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:46:38 np0005603435 systemd[1]: var-lib-containers-storage-overlay-80c40000b56cc10c6f4f36a1c3feab4944c80f9227acff0eaefd6a7fb2be5e2c-merged.mount: Deactivated successfully.
Jan 30 23:46:38 np0005603435 podman[250098]: 2026-01-31 04:46:38.510031688 +0000 UTC m=+1.247012530 container remove b282c5262ae9bdd9da33a12c0bc22ee51a7752ea353a2c91769ea9697431e31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 30 23:46:38 np0005603435 systemd[1]: libpod-conmon-b282c5262ae9bdd9da33a12c0bc22ee51a7752ea353a2c91769ea9697431e31d.scope: Deactivated successfully.
Jan 30 23:46:39 np0005603435 podman[250173]: 2026-01-31 04:46:39.025850199 +0000 UTC m=+0.106571124 container create 0e1fc9111e522f56a03b600eba08881747481bb90cc9b8e0e2df878b3db31e0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 30 23:46:39 np0005603435 podman[250173]: 2026-01-31 04:46:38.934448794 +0000 UTC m=+0.015169689 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:46:39 np0005603435 nova_compute[239938]: 2026-01-31 04:46:39.097 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:46:39 np0005603435 systemd[1]: Started libpod-conmon-0e1fc9111e522f56a03b600eba08881747481bb90cc9b8e0e2df878b3db31e0c.scope.
Jan 30 23:46:39 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:46:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 249 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 21 MiB/s wr, 69 op/s
Jan 30 23:46:39 np0005603435 podman[250173]: 2026-01-31 04:46:39.226511732 +0000 UTC m=+0.307232687 container init 0e1fc9111e522f56a03b600eba08881747481bb90cc9b8e0e2df878b3db31e0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 30 23:46:39 np0005603435 podman[250173]: 2026-01-31 04:46:39.235771496 +0000 UTC m=+0.316492411 container start 0e1fc9111e522f56a03b600eba08881747481bb90cc9b8e0e2df878b3db31e0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 30 23:46:39 np0005603435 modest_gould[250189]: 167 167
Jan 30 23:46:39 np0005603435 systemd[1]: libpod-0e1fc9111e522f56a03b600eba08881747481bb90cc9b8e0e2df878b3db31e0c.scope: Deactivated successfully.
Jan 30 23:46:39 np0005603435 podman[250173]: 2026-01-31 04:46:39.332144422 +0000 UTC m=+0.412865327 container attach 0e1fc9111e522f56a03b600eba08881747481bb90cc9b8e0e2df878b3db31e0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:46:39 np0005603435 podman[250173]: 2026-01-31 04:46:39.334682313 +0000 UTC m=+0.415403248 container died 0e1fc9111e522f56a03b600eba08881747481bb90cc9b8e0e2df878b3db31e0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:46:39 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b7a100656a5e3c9c7bfe45b1086919508780fbe7d759c4bf9c271a7a7429d0aa-merged.mount: Deactivated successfully.
Jan 30 23:46:39 np0005603435 nova_compute[239938]: 2026-01-31 04:46:39.959 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:46:39 np0005603435 podman[250173]: 2026-01-31 04:46:39.965673433 +0000 UTC m=+1.046394348 container remove 0e1fc9111e522f56a03b600eba08881747481bb90cc9b8e0e2df878b3db31e0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:46:39 np0005603435 systemd[1]: libpod-conmon-0e1fc9111e522f56a03b600eba08881747481bb90cc9b8e0e2df878b3db31e0c.scope: Deactivated successfully.
Jan 30 23:46:40 np0005603435 podman[250216]: 2026-01-31 04:46:40.095792866 +0000 UTC m=+0.021970643 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:46:40 np0005603435 podman[250216]: 2026-01-31 04:46:40.64847302 +0000 UTC m=+0.574650787 container create 40358733ee290833c511c0641f3a9ef46438660a596b8c91450a235aea7f5a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shtern, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:46:40 np0005603435 systemd[1]: Started libpod-conmon-40358733ee290833c511c0641f3a9ef46438660a596b8c91450a235aea7f5a58.scope.
Jan 30 23:46:40 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:46:40 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0817726a40dff7297361d340ba0bcf590f5ff20c843a2d963c64c4a99d6f0966/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:46:40 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0817726a40dff7297361d340ba0bcf590f5ff20c843a2d963c64c4a99d6f0966/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:46:40 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0817726a40dff7297361d340ba0bcf590f5ff20c843a2d963c64c4a99d6f0966/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:46:40 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0817726a40dff7297361d340ba0bcf590f5ff20c843a2d963c64c4a99d6f0966/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:46:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 297 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 19 MiB/s wr, 62 op/s
Jan 30 23:46:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:46:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Jan 30 23:46:41 np0005603435 podman[250216]: 2026-01-31 04:46:41.302407887 +0000 UTC m=+1.228585644 container init 40358733ee290833c511c0641f3a9ef46438660a596b8c91450a235aea7f5a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shtern, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Jan 30 23:46:41 np0005603435 podman[250216]: 2026-01-31 04:46:41.307717306 +0000 UTC m=+1.233895073 container start 40358733ee290833c511c0641f3a9ef46438660a596b8c91450a235aea7f5a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 30 23:46:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Jan 30 23:46:41 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]: {
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:    "0": [
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:        {
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "devices": [
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "/dev/loop3"
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            ],
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "lv_name": "ceph_lv0",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "lv_size": "21470642176",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "name": "ceph_lv0",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "tags": {
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.cluster_name": "ceph",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.crush_device_class": "",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.encrypted": "0",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.objectstore": "bluestore",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.osd_id": "0",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.type": "block",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.vdo": "0",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.with_tpm": "0"
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            },
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "type": "block",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "vg_name": "ceph_vg0"
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:        }
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:    ],
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:    "1": [
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:        {
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "devices": [
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "/dev/loop4"
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            ],
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "lv_name": "ceph_lv1",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "lv_size": "21470642176",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "name": "ceph_lv1",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "tags": {
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.cluster_name": "ceph",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.crush_device_class": "",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.encrypted": "0",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.objectstore": "bluestore",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.osd_id": "1",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.type": "block",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.vdo": "0",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.with_tpm": "0"
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            },
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "type": "block",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "vg_name": "ceph_vg1"
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:        }
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:    ],
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:    "2": [
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:        {
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "devices": [
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "/dev/loop5"
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            ],
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "lv_name": "ceph_lv2",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "lv_size": "21470642176",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "name": "ceph_lv2",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "tags": {
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.cluster_name": "ceph",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.crush_device_class": "",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.encrypted": "0",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.objectstore": "bluestore",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.osd_id": "2",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.type": "block",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.vdo": "0",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:                "ceph.with_tpm": "0"
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            },
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "type": "block",
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:            "vg_name": "ceph_vg2"
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:        }
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]:    ]
Jan 30 23:46:41 np0005603435 nifty_shtern[250233]: }
Jan 30 23:46:41 np0005603435 systemd[1]: libpod-40358733ee290833c511c0641f3a9ef46438660a596b8c91450a235aea7f5a58.scope: Deactivated successfully.
Jan 30 23:46:41 np0005603435 podman[250216]: 2026-01-31 04:46:41.763170283 +0000 UTC m=+1.689348030 container attach 40358733ee290833c511c0641f3a9ef46438660a596b8c91450a235aea7f5a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shtern, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Jan 30 23:46:41 np0005603435 podman[250216]: 2026-01-31 04:46:41.764874625 +0000 UTC m=+1.691052402 container died 40358733ee290833c511c0641f3a9ef46438660a596b8c91450a235aea7f5a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:46:42 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0817726a40dff7297361d340ba0bcf590f5ff20c843a2d963c64c4a99d6f0966-merged.mount: Deactivated successfully.
Jan 30 23:46:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Jan 30 23:46:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 329 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 16 MiB/s wr, 69 op/s
Jan 30 23:46:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Jan 30 23:46:43 np0005603435 podman[250216]: 2026-01-31 04:46:43.599813192 +0000 UTC m=+3.525990969 container remove 40358733ee290833c511c0641f3a9ef46438660a596b8c91450a235aea7f5a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shtern, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 30 23:46:43 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Jan 30 23:46:43 np0005603435 systemd[1]: libpod-conmon-40358733ee290833c511c0641f3a9ef46438660a596b8c91450a235aea7f5a58.scope: Deactivated successfully.
Jan 30 23:46:44 np0005603435 nova_compute[239938]: 2026-01-31 04:46:44.099 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:44 np0005603435 podman[250316]: 2026-01-31 04:46:44.115463558 +0000 UTC m=+0.031293139 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:46:44 np0005603435 podman[250316]: 2026-01-31 04:46:44.400497155 +0000 UTC m=+0.316326746 container create 842eef404462baab19d57a995c353c7384d1f749a065c51ee5b743214dac1a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wozniak, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 30 23:46:44 np0005603435 systemd[1]: Started libpod-conmon-842eef404462baab19d57a995c353c7384d1f749a065c51ee5b743214dac1a3c.scope.
Jan 30 23:46:44 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:46:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:46:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2918066173' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:46:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:46:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2918066173' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:46:44 np0005603435 podman[250316]: 2026-01-31 04:46:44.912445522 +0000 UTC m=+0.828275163 container init 842eef404462baab19d57a995c353c7384d1f749a065c51ee5b743214dac1a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 30 23:46:44 np0005603435 podman[250316]: 2026-01-31 04:46:44.920793464 +0000 UTC m=+0.836623045 container start 842eef404462baab19d57a995c353c7384d1f749a065c51ee5b743214dac1a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:46:44 np0005603435 silly_wozniak[250348]: 167 167
Jan 30 23:46:44 np0005603435 systemd[1]: libpod-842eef404462baab19d57a995c353c7384d1f749a065c51ee5b743214dac1a3c.scope: Deactivated successfully.
Jan 30 23:46:45 np0005603435 nova_compute[239938]: 2026-01-31 04:46:45.005 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:45 np0005603435 podman[250316]: 2026-01-31 04:46:45.061532095 +0000 UTC m=+0.977361646 container attach 842eef404462baab19d57a995c353c7384d1f749a065c51ee5b743214dac1a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:46:45 np0005603435 podman[250316]: 2026-01-31 04:46:45.063010381 +0000 UTC m=+0.978839972 container died 842eef404462baab19d57a995c353c7384d1f749a065c51ee5b743214dac1a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wozniak, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:46:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 337 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 14 MiB/s wr, 62 op/s
Jan 30 23:46:45 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d036f571d8c68182a2166f97ea549d27cdaf10bc0148e40c9c25293d808ff3f7-merged.mount: Deactivated successfully.
Jan 30 23:46:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:46.115 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:46:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:46.117 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:46:46 np0005603435 nova_compute[239938]: 2026-01-31 04:46:46.163 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:46 np0005603435 podman[250330]: 2026-01-31 04:46:46.240561329 +0000 UTC m=+1.792790139 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 30 23:46:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:46:46 np0005603435 podman[250316]: 2026-01-31 04:46:46.345170934 +0000 UTC m=+2.261000505 container remove 842eef404462baab19d57a995c353c7384d1f749a065c51ee5b743214dac1a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wozniak, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:46:46 np0005603435 podman[250375]: 2026-01-31 04:46:46.42300087 +0000 UTC m=+0.809977921 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 30 23:46:46 np0005603435 systemd[1]: libpod-conmon-842eef404462baab19d57a995c353c7384d1f749a065c51ee5b743214dac1a3c.scope: Deactivated successfully.
Jan 30 23:46:46 np0005603435 podman[250401]: 2026-01-31 04:46:46.470424319 +0000 UTC m=+0.024728140 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:46:46 np0005603435 podman[250401]: 2026-01-31 04:46:46.868806714 +0000 UTC m=+0.423110525 container create b2a678f8001215bef09c97d2febb12119eda287d279c18add76f45f0ac3c98d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 30 23:46:47 np0005603435 systemd[1]: Started libpod-conmon-b2a678f8001215bef09c97d2febb12119eda287d279c18add76f45f0ac3c98d6.scope.
Jan 30 23:46:47 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:46:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa2b8ac1db898787990f8b33a7e8a026c6f0401adad7a218412f7e179f9e342b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:46:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa2b8ac1db898787990f8b33a7e8a026c6f0401adad7a218412f7e179f9e342b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:46:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa2b8ac1db898787990f8b33a7e8a026c6f0401adad7a218412f7e179f9e342b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:46:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa2b8ac1db898787990f8b33a7e8a026c6f0401adad7a218412f7e179f9e342b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:46:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 353 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 13 MiB/s wr, 62 op/s
Jan 30 23:46:47 np0005603435 podman[250401]: 2026-01-31 04:46:47.3968616 +0000 UTC m=+0.951165391 container init b2a678f8001215bef09c97d2febb12119eda287d279c18add76f45f0ac3c98d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:46:47 np0005603435 podman[250401]: 2026-01-31 04:46:47.403874359 +0000 UTC m=+0.958178130 container start b2a678f8001215bef09c97d2febb12119eda287d279c18add76f45f0ac3c98d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jepsen, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 30 23:46:47 np0005603435 podman[250401]: 2026-01-31 04:46:47.5301862 +0000 UTC m=+1.084489971 container attach b2a678f8001215bef09c97d2febb12119eda287d279c18add76f45f0ac3c98d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jepsen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:46:47 np0005603435 lvm[250497]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:46:47 np0005603435 lvm[250497]: VG ceph_vg1 finished
Jan 30 23:46:47 np0005603435 lvm[250496]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:46:47 np0005603435 lvm[250496]: VG ceph_vg0 finished
Jan 30 23:46:47 np0005603435 lvm[250499]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:46:47 np0005603435 lvm[250499]: VG ceph_vg2 finished
Jan 30 23:46:48 np0005603435 stoic_jepsen[250418]: {}
Jan 30 23:46:48 np0005603435 systemd[1]: libpod-b2a678f8001215bef09c97d2febb12119eda287d279c18add76f45f0ac3c98d6.scope: Deactivated successfully.
Jan 30 23:46:48 np0005603435 podman[250401]: 2026-01-31 04:46:48.174806932 +0000 UTC m=+1.729110743 container died b2a678f8001215bef09c97d2febb12119eda287d279c18add76f45f0ac3c98d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jepsen, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 30 23:46:48 np0005603435 systemd[1]: libpod-b2a678f8001215bef09c97d2febb12119eda287d279c18add76f45f0ac3c98d6.scope: Consumed 1.058s CPU time.
Jan 30 23:46:48 np0005603435 systemd[1]: var-lib-containers-storage-overlay-fa2b8ac1db898787990f8b33a7e8a026c6f0401adad7a218412f7e179f9e342b-merged.mount: Deactivated successfully.
Jan 30 23:46:49 np0005603435 podman[250401]: 2026-01-31 04:46:49.083834752 +0000 UTC m=+2.638138543 container remove b2a678f8001215bef09c97d2febb12119eda287d279c18add76f45f0ac3c98d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jepsen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 30 23:46:49 np0005603435 nova_compute[239938]: 2026-01-31 04:46:49.101 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:46:49 np0005603435 systemd[1]: libpod-conmon-b2a678f8001215bef09c97d2febb12119eda287d279c18add76f45f0ac3c98d6.scope: Deactivated successfully.
Jan 30 23:46:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 385 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 11 MiB/s wr, 65 op/s
Jan 30 23:46:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:46:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:46:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:46:50 np0005603435 nova_compute[239938]: 2026-01-31 04:46:50.007 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:50 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:46:50 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:46:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 433 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 14 MiB/s wr, 55 op/s
Jan 30 23:46:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:46:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Jan 30 23:46:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Jan 30 23:46:51 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Jan 30 23:46:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 489 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 17 MiB/s wr, 53 op/s
Jan 30 23:46:54 np0005603435 nova_compute[239938]: 2026-01-31 04:46:54.103 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:55 np0005603435 nova_compute[239938]: 2026-01-31 04:46:55.010 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 513 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 18 MiB/s wr, 48 op/s
Jan 30 23:46:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:55.910 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:46:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:55.911 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:46:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:55.911 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:46:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:46:56.120 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:46:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:46:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 521 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 17 MiB/s wr, 53 op/s
Jan 30 23:46:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:46:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2451880292' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:46:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:46:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2451880292' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:46:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:46:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2976378457' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:46:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:46:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2976378457' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:46:59 np0005603435 nova_compute[239938]: 2026-01-31 04:46:59.107 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:46:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 553 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 17 MiB/s wr, 54 op/s
Jan 30 23:47:00 np0005603435 nova_compute[239938]: 2026-01-31 04:47:00.012 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:00 np0005603435 ovn_controller[145670]: 2026-01-31T04:47:00Z|00051|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 30 23:47:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 577 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 14 MiB/s wr, 53 op/s
Jan 30 23:47:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:47:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:47:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3268233626' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:47:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:47:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3268233626' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:47:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 649 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 16 MiB/s wr, 68 op/s
Jan 30 23:47:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Jan 30 23:47:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Jan 30 23:47:03 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Jan 30 23:47:04 np0005603435 nova_compute[239938]: 2026-01-31 04:47:04.110 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Jan 30 23:47:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Jan 30 23:47:05 np0005603435 nova_compute[239938]: 2026-01-31 04:47:05.073 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:05 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Jan 30 23:47:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 737 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 27 MiB/s wr, 51 op/s
Jan 30 23:47:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:47:06
Jan 30 23:47:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:47:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:47:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['default.rgw.log', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'images']
Jan 30 23:47:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:47:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:47:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:47:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:47:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:47:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:47:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:47:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:47:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 817 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 33 MiB/s wr, 46 op/s
Jan 30 23:47:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:47:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:47:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:47:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:47:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:47:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:47:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:47:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:47:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:47:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:47:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Jan 30 23:47:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Jan 30 23:47:08 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Jan 30 23:47:08 np0005603435 nova_compute[239938]: 2026-01-31 04:47:08.962 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:47:09 np0005603435 nova_compute[239938]: 2026-01-31 04:47:09.112 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 865 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 36 MiB/s wr, 77 op/s
Jan 30 23:47:10 np0005603435 nova_compute[239938]: 2026-01-31 04:47:10.076 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:10 np0005603435 nova_compute[239938]: 2026-01-31 04:47:10.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:47:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 945 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 41 MiB/s wr, 66 op/s
Jan 30 23:47:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Jan 30 23:47:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Jan 30 23:47:11 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Jan 30 23:47:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:47:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:47:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1439878787' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:47:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:47:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1439878787' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:47:11 np0005603435 nova_compute[239938]: 2026-01-31 04:47:11.882 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:47:11 np0005603435 nova_compute[239938]: 2026-01-31 04:47:11.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:47:11 np0005603435 nova_compute[239938]: 2026-01-31 04:47:11.886 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:47:11 np0005603435 nova_compute[239938]: 2026-01-31 04:47:11.886 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:47:11 np0005603435 nova_compute[239938]: 2026-01-31 04:47:11.920 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 30 23:47:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Jan 30 23:47:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Jan 30 23:47:12 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Jan 30 23:47:12 np0005603435 nova_compute[239938]: 2026-01-31 04:47:12.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:47:12 np0005603435 nova_compute[239938]: 2026-01-31 04:47:12.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:47:12 np0005603435 nova_compute[239938]: 2026-01-31 04:47:12.929 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:47:12 np0005603435 nova_compute[239938]: 2026-01-31 04:47:12.929 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:47:12 np0005603435 nova_compute[239938]: 2026-01-31 04:47:12.930 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:47:12 np0005603435 nova_compute[239938]: 2026-01-31 04:47:12.930 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:47:12 np0005603435 nova_compute[239938]: 2026-01-31 04:47:12.930 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:47:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 1009 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 123 KiB/s rd, 32 MiB/s wr, 176 op/s
Jan 30 23:47:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:47:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/500574053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:47:13 np0005603435 nova_compute[239938]: 2026-01-31 04:47:13.547 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:47:13 np0005603435 nova_compute[239938]: 2026-01-31 04:47:13.755 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:47:13 np0005603435 nova_compute[239938]: 2026-01-31 04:47:13.756 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4761MB free_disk=59.98823764361441GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:47:13 np0005603435 nova_compute[239938]: 2026-01-31 04:47:13.757 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:47:13 np0005603435 nova_compute[239938]: 2026-01-31 04:47:13.757 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:47:14 np0005603435 nova_compute[239938]: 2026-01-31 04:47:14.022 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:47:14 np0005603435 nova_compute[239938]: 2026-01-31 04:47:14.023 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:47:14 np0005603435 nova_compute[239938]: 2026-01-31 04:47:14.085 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:47:14 np0005603435 nova_compute[239938]: 2026-01-31 04:47:14.115 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:47:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2123600050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:47:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Jan 30 23:47:14 np0005603435 nova_compute[239938]: 2026-01-31 04:47:14.748 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.663s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:47:14 np0005603435 nova_compute[239938]: 2026-01-31 04:47:14.755 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:47:14 np0005603435 nova_compute[239938]: 2026-01-31 04:47:14.793 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:47:14 np0005603435 nova_compute[239938]: 2026-01-31 04:47:14.821 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:47:14 np0005603435 nova_compute[239938]: 2026-01-31 04:47:14.822 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:47:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Jan 30 23:47:14 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Jan 30 23:47:15 np0005603435 nova_compute[239938]: 2026-01-31 04:47:15.115 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 2 active+clean+snaptrim, 15 active+clean+snaptrim_wait, 288 active+clean; 1.0 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 32 MiB/s wr, 123 op/s
Jan 30 23:47:15 np0005603435 nova_compute[239938]: 2026-01-31 04:47:15.823 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:47:15 np0005603435 nova_compute[239938]: 2026-01-31 04:47:15.824 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:47:15 np0005603435 nova_compute[239938]: 2026-01-31 04:47:15.825 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:47:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:47:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2465468690' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:47:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:47:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2465468690' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:47:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:47:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Jan 30 23:47:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Jan 30 23:47:16 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Jan 30 23:47:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:47:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2442344311' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:47:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:47:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2442344311' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:47:16 np0005603435 nova_compute[239938]: 2026-01-31 04:47:16.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:47:17 np0005603435 podman[250584]: 2026-01-31 04:47:17.095509331 +0000 UTC m=+0.057584407 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.209223897349844e-07 of space, bias 1.0, pg target 0.00021627671692049533 quantized to 32 (current 32)
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.643347011177911e-06 of space, bias 1.0, pg target 0.0010930041033533732 quantized to 32 (current 32)
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.682267233924396e-07 of space, bias 1.0, pg target 8.046801701773189e-05 quantized to 32 (current 32)
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.017206171011174883 of space, bias 1.0, pg target 5.161851303352465 quantized to 32 (current 32)
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.29070063240323e-07 of space, bias 4.0, pg target 0.000978302674623581 quantized to 16 (current 16)
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011255555284235201 quantized to 32 (current 32)
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012381110812658724 quantized to 32 (current 32)
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015007407045646937 quantized to 32 (current 32)
Jan 30 23:47:17 np0005603435 podman[250585]: 2026-01-31 04:47:17.125019196 +0000 UTC m=+0.087857300 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:47:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 2 active+clean+snaptrim, 15 active+clean+snaptrim_wait, 288 active+clean; 793 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 125 KiB/s rd, 21 MiB/s wr, 181 op/s
Jan 30 23:47:19 np0005603435 nova_compute[239938]: 2026-01-31 04:47:19.116 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 2 active+clean+snaptrim, 15 active+clean+snaptrim_wait, 288 active+clean; 497 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 8.3 MiB/s wr, 113 op/s
Jan 30 23:47:20 np0005603435 nova_compute[239938]: 2026-01-31 04:47:20.118 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 41 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 7.0 MiB/s wr, 118 op/s
Jan 30 23:47:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:47:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Jan 30 23:47:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Jan 30 23:47:21 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Jan 30 23:47:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 1.0 MiB/s wr, 114 op/s
Jan 30 23:47:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Jan 30 23:47:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Jan 30 23:47:23 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Jan 30 23:47:24 np0005603435 nova_compute[239938]: 2026-01-31 04:47:24.119 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:25 np0005603435 nova_compute[239938]: 2026-01-31 04:47:25.121 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 2.2 KiB/s wr, 74 op/s
Jan 30 23:47:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Jan 30 23:47:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Jan 30 23:47:25 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Jan 30 23:47:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:47:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Jan 30 23:47:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Jan 30 23:47:26 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Jan 30 23:47:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:47:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2891837417' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:47:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:47:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2891837417' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:47:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.3 KiB/s wr, 30 op/s
Jan 30 23:47:29 np0005603435 nova_compute[239938]: 2026-01-31 04:47:29.121 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 KiB/s wr, 36 op/s
Jan 30 23:47:30 np0005603435 nova_compute[239938]: 2026-01-31 04:47:30.124 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 30 23:47:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:47:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Jan 30 23:47:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Jan 30 23:47:31 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Jan 30 23:47:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:47:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1861533574' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:47:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:47:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1861533574' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:47:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.0 KiB/s wr, 47 op/s
Jan 30 23:47:34 np0005603435 nova_compute[239938]: 2026-01-31 04:47:34.124 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:35 np0005603435 nova_compute[239938]: 2026-01-31 04:47:35.174 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.1 KiB/s wr, 41 op/s
Jan 30 23:47:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:47:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:47:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:47:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:47:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:47:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:47:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:47:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1023 B/s wr, 21 op/s
Jan 30 23:47:37 np0005603435 nova_compute[239938]: 2026-01-31 04:47:37.648 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Acquiring lock "2de06a6e-707c-434b-980d-ab52c01abb9e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:47:37 np0005603435 nova_compute[239938]: 2026-01-31 04:47:37.649 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:47:37 np0005603435 nova_compute[239938]: 2026-01-31 04:47:37.664 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:47:37 np0005603435 nova_compute[239938]: 2026-01-31 04:47:37.664 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:47:37 np0005603435 nova_compute[239938]: 2026-01-31 04:47:37.686 239942 DEBUG nova.compute.manager [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:47:37 np0005603435 nova_compute[239938]: 2026-01-31 04:47:37.690 239942 DEBUG nova.compute.manager [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:47:37 np0005603435 nova_compute[239938]: 2026-01-31 04:47:37.837 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:47:37 np0005603435 nova_compute[239938]: 2026-01-31 04:47:37.838 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:47:37 np0005603435 nova_compute[239938]: 2026-01-31 04:47:37.840 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:47:37 np0005603435 nova_compute[239938]: 2026-01-31 04:47:37.849 239942 DEBUG nova.virt.hardware [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:47:37 np0005603435 nova_compute[239938]: 2026-01-31 04:47:37.849 239942 INFO nova.compute.claims [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:47:38 np0005603435 nova_compute[239938]: 2026-01-31 04:47:38.150 239942 DEBUG oslo_concurrency.processutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:47:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:47:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3120588761' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:47:38 np0005603435 nova_compute[239938]: 2026-01-31 04:47:38.679 239942 DEBUG oslo_concurrency.processutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:47:38 np0005603435 nova_compute[239938]: 2026-01-31 04:47:38.685 239942 DEBUG nova.compute.provider_tree [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:47:38 np0005603435 nova_compute[239938]: 2026-01-31 04:47:38.703 239942 DEBUG nova.scheduler.client.report [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:47:38 np0005603435 nova_compute[239938]: 2026-01-31 04:47:38.764 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:47:38 np0005603435 nova_compute[239938]: 2026-01-31 04:47:38.766 239942 DEBUG nova.compute.manager [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:47:38 np0005603435 nova_compute[239938]: 2026-01-31 04:47:38.770 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:47:38 np0005603435 nova_compute[239938]: 2026-01-31 04:47:38.779 239942 DEBUG nova.virt.hardware [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:47:38 np0005603435 nova_compute[239938]: 2026-01-31 04:47:38.780 239942 INFO nova.compute.claims [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:47:38 np0005603435 nova_compute[239938]: 2026-01-31 04:47:38.840 239942 DEBUG nova.compute.manager [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:47:38 np0005603435 nova_compute[239938]: 2026-01-31 04:47:38.841 239942 DEBUG nova.network.neutron [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:47:38 np0005603435 nova_compute[239938]: 2026-01-31 04:47:38.896 239942 INFO nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:47:38 np0005603435 nova_compute[239938]: 2026-01-31 04:47:38.944 239942 DEBUG nova.compute.manager [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.020 239942 DEBUG oslo_concurrency.processutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.100 239942 DEBUG nova.compute.manager [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.104 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.104 239942 INFO nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Creating image(s)#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.140 239942 DEBUG nova.storage.rbd_utils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] rbd image 2de06a6e-707c-434b-980d-ab52c01abb9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.179 239942 DEBUG nova.storage.rbd_utils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] rbd image 2de06a6e-707c-434b-980d-ab52c01abb9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.214 239942 DEBUG nova.storage.rbd_utils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] rbd image 2de06a6e-707c-434b-980d-ab52c01abb9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.220 239942 DEBUG oslo_concurrency.processutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:47:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.1 KiB/s wr, 15 op/s
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.237 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.274 239942 DEBUG oslo_concurrency.processutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.275 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Acquiring lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.276 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.277 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.314 239942 DEBUG nova.storage.rbd_utils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] rbd image 2de06a6e-707c-434b-980d-ab52c01abb9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.320 239942 DEBUG oslo_concurrency.processutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 2de06a6e-707c-434b-980d-ab52c01abb9e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:47:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:47:39 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2255083611' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.584 239942 DEBUG oslo_concurrency.processutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.590 239942 DEBUG nova.compute.provider_tree [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.605 239942 DEBUG nova.scheduler.client.report [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.628 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.629 239942 DEBUG nova.compute.manager [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.657 239942 DEBUG oslo_concurrency.processutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 2de06a6e-707c-434b-980d-ab52c01abb9e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.696 239942 DEBUG nova.compute.manager [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.698 239942 DEBUG nova.network.neutron [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.749 239942 INFO nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.760 239942 DEBUG nova.storage.rbd_utils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] resizing rbd image 2de06a6e-707c-434b-980d-ab52c01abb9e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.800 239942 DEBUG nova.compute.manager [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.870 239942 DEBUG nova.objects.instance [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lazy-loading 'migration_context' on Instance uuid 2de06a6e-707c-434b-980d-ab52c01abb9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.896 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.896 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Ensure instance console log exists: /var/lib/nova/instances/2de06a6e-707c-434b-980d-ab52c01abb9e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.897 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.897 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.897 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.931 239942 DEBUG nova.compute.manager [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.932 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.933 239942 INFO nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Creating image(s)#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.958 239942 DEBUG nova.storage.rbd_utils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] rbd image 80f921cb-ec48-41f8-88b0-3ba2a51efd0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:47:39 np0005603435 nova_compute[239938]: 2026-01-31 04:47:39.982 239942 DEBUG nova.storage.rbd_utils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] rbd image 80f921cb-ec48-41f8-88b0-3ba2a51efd0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.006 239942 DEBUG nova.storage.rbd_utils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] rbd image 80f921cb-ec48-41f8-88b0-3ba2a51efd0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.009 239942 DEBUG oslo_concurrency.processutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.079 239942 DEBUG oslo_concurrency.processutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.081 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.082 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.082 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.116 239942 DEBUG nova.storage.rbd_utils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] rbd image 80f921cb-ec48-41f8-88b0-3ba2a51efd0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.120 239942 DEBUG oslo_concurrency.processutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 80f921cb-ec48-41f8-88b0-3ba2a51efd0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.171 239942 DEBUG nova.policy [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f51271330a6d46498b473f0d2595c3ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b8b11aff4b494f4eb1376cfe5754bac8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.176 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.182 239942 DEBUG nova.policy [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0b66a987b14d4c37aedbb2fe48fd1547', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2bb69332e8af48ee847370d546eaee1e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.444 239942 DEBUG oslo_concurrency.processutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 80f921cb-ec48-41f8-88b0-3ba2a51efd0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.518 239942 DEBUG nova.storage.rbd_utils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] resizing rbd image 80f921cb-ec48-41f8-88b0-3ba2a51efd0c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.611 239942 DEBUG nova.objects.instance [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lazy-loading 'migration_context' on Instance uuid 80f921cb-ec48-41f8-88b0-3ba2a51efd0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.624 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.625 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Ensure instance console log exists: /var/lib/nova/instances/80f921cb-ec48-41f8-88b0-3ba2a51efd0c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.626 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.626 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.627 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:47:40 np0005603435 nova_compute[239938]: 2026-01-31 04:47:40.781 239942 DEBUG nova.network.neutron [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Successfully created port: 21ab155d-7b14-4fa4-b3a0-113a0e6c6abb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:47:41 np0005603435 nova_compute[239938]: 2026-01-31 04:47:41.052 239942 DEBUG nova.network.neutron [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Successfully created port: f1498a6d-42eb-444b-9b53-825529f5cb1c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:47:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 53 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 445 KiB/s wr, 29 op/s
Jan 30 23:47:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:47:41 np0005603435 nova_compute[239938]: 2026-01-31 04:47:41.674 239942 DEBUG nova.network.neutron [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Successfully updated port: f1498a6d-42eb-444b-9b53-825529f5cb1c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:47:41 np0005603435 nova_compute[239938]: 2026-01-31 04:47:41.692 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Acquiring lock "refresh_cache-2de06a6e-707c-434b-980d-ab52c01abb9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:47:41 np0005603435 nova_compute[239938]: 2026-01-31 04:47:41.694 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Acquired lock "refresh_cache-2de06a6e-707c-434b-980d-ab52c01abb9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:47:41 np0005603435 nova_compute[239938]: 2026-01-31 04:47:41.694 239942 DEBUG nova.network.neutron [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:47:41 np0005603435 nova_compute[239938]: 2026-01-31 04:47:41.832 239942 DEBUG nova.network.neutron [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.003 239942 DEBUG nova.compute.manager [req-7934a404-2a80-430a-a8c4-bcd9e2448b92 req-7c72ce9a-466f-4262-852d-27715352892d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Received event network-changed-f1498a6d-42eb-444b-9b53-825529f5cb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.004 239942 DEBUG nova.compute.manager [req-7934a404-2a80-430a-a8c4-bcd9e2448b92 req-7c72ce9a-466f-4262-852d-27715352892d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Refreshing instance network info cache due to event network-changed-f1498a6d-42eb-444b-9b53-825529f5cb1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.004 239942 DEBUG oslo_concurrency.lockutils [req-7934a404-2a80-430a-a8c4-bcd9e2448b92 req-7c72ce9a-466f-4262-852d-27715352892d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-2de06a6e-707c-434b-980d-ab52c01abb9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.128 239942 DEBUG nova.network.neutron [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Successfully updated port: 21ab155d-7b14-4fa4-b3a0-113a0e6c6abb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.141 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "refresh_cache-80f921cb-ec48-41f8-88b0-3ba2a51efd0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.142 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquired lock "refresh_cache-80f921cb-ec48-41f8-88b0-3ba2a51efd0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.142 239942 DEBUG nova.network.neutron [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.224 239942 DEBUG nova.compute.manager [req-6fff5872-97c6-4941-8bce-cd9b13c67fe0 req-2038f3ea-59ee-4fd2-b5f8-559b727893bf c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Received event network-changed-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.225 239942 DEBUG nova.compute.manager [req-6fff5872-97c6-4941-8bce-cd9b13c67fe0 req-2038f3ea-59ee-4fd2-b5f8-559b727893bf c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Refreshing instance network info cache due to event network-changed-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.225 239942 DEBUG oslo_concurrency.lockutils [req-6fff5872-97c6-4941-8bce-cd9b13c67fe0 req-2038f3ea-59ee-4fd2-b5f8-559b727893bf c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-80f921cb-ec48-41f8-88b0-3ba2a51efd0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.291 239942 DEBUG nova.network.neutron [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.408 239942 DEBUG nova.network.neutron [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Updating instance_info_cache with network_info: [{"id": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "address": "fa:16:3e:74:66:d6", "network": {"id": "5c3579c7-dc9d-4cf7-9e43-1aa98a65254a", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-253417945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2bb69332e8af48ee847370d546eaee1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1498a6d-42", "ovs_interfaceid": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.429 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Releasing lock "refresh_cache-2de06a6e-707c-434b-980d-ab52c01abb9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.430 239942 DEBUG nova.compute.manager [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Instance network_info: |[{"id": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "address": "fa:16:3e:74:66:d6", "network": {"id": "5c3579c7-dc9d-4cf7-9e43-1aa98a65254a", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-253417945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2bb69332e8af48ee847370d546eaee1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1498a6d-42", "ovs_interfaceid": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.430 239942 DEBUG oslo_concurrency.lockutils [req-7934a404-2a80-430a-a8c4-bcd9e2448b92 req-7c72ce9a-466f-4262-852d-27715352892d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-2de06a6e-707c-434b-980d-ab52c01abb9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.431 239942 DEBUG nova.network.neutron [req-7934a404-2a80-430a-a8c4-bcd9e2448b92 req-7c72ce9a-466f-4262-852d-27715352892d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Refreshing network info cache for port f1498a6d-42eb-444b-9b53-825529f5cb1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.436 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Start _get_guest_xml network_info=[{"id": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "address": "fa:16:3e:74:66:d6", "network": {"id": "5c3579c7-dc9d-4cf7-9e43-1aa98a65254a", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-253417945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2bb69332e8af48ee847370d546eaee1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1498a6d-42", "ovs_interfaceid": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'guest_format': None, 'image_id': 'bf004ad8-fb70-4caa-9170-9f02e22d687d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.442 239942 WARNING nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.447 239942 DEBUG nova.virt.libvirt.host [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.448 239942 DEBUG nova.virt.libvirt.host [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.452 239942 DEBUG nova.virt.libvirt.host [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.453 239942 DEBUG nova.virt.libvirt.host [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.453 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.454 239942 DEBUG nova.virt.hardware [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.455 239942 DEBUG nova.virt.hardware [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.455 239942 DEBUG nova.virt.hardware [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.456 239942 DEBUG nova.virt.hardware [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.456 239942 DEBUG nova.virt.hardware [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.456 239942 DEBUG nova.virt.hardware [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.457 239942 DEBUG nova.virt.hardware [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.457 239942 DEBUG nova.virt.hardware [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.458 239942 DEBUG nova.virt.hardware [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.458 239942 DEBUG nova.virt.hardware [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.458 239942 DEBUG nova.virt.hardware [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.463 239942 DEBUG oslo_concurrency.processutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.916 239942 DEBUG nova.network.neutron [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Updating instance_info_cache with network_info: [{"id": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "address": "fa:16:3e:59:81:a2", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ab155d-7b", "ovs_interfaceid": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.933 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Releasing lock "refresh_cache-80f921cb-ec48-41f8-88b0-3ba2a51efd0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.934 239942 DEBUG nova.compute.manager [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Instance network_info: |[{"id": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "address": "fa:16:3e:59:81:a2", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ab155d-7b", "ovs_interfaceid": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.935 239942 DEBUG oslo_concurrency.lockutils [req-6fff5872-97c6-4941-8bce-cd9b13c67fe0 req-2038f3ea-59ee-4fd2-b5f8-559b727893bf c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-80f921cb-ec48-41f8-88b0-3ba2a51efd0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.936 239942 DEBUG nova.network.neutron [req-6fff5872-97c6-4941-8bce-cd9b13c67fe0 req-2038f3ea-59ee-4fd2-b5f8-559b727893bf c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Refreshing network info cache for port 21ab155d-7b14-4fa4-b3a0-113a0e6c6abb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.942 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Start _get_guest_xml network_info=[{"id": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "address": "fa:16:3e:59:81:a2", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ab155d-7b", "ovs_interfaceid": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'guest_format': None, 'image_id': 'bf004ad8-fb70-4caa-9170-9f02e22d687d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.948 239942 WARNING nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.954 239942 DEBUG nova.virt.libvirt.host [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.955 239942 DEBUG nova.virt.libvirt.host [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.963 239942 DEBUG nova.virt.libvirt.host [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.964 239942 DEBUG nova.virt.libvirt.host [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.964 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.965 239942 DEBUG nova.virt.hardware [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.966 239942 DEBUG nova.virt.hardware [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.966 239942 DEBUG nova.virt.hardware [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.966 239942 DEBUG nova.virt.hardware [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.967 239942 DEBUG nova.virt.hardware [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.967 239942 DEBUG nova.virt.hardware [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.968 239942 DEBUG nova.virt.hardware [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.968 239942 DEBUG nova.virt.hardware [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.969 239942 DEBUG nova.virt.hardware [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.969 239942 DEBUG nova.virt.hardware [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.969 239942 DEBUG nova.virt.hardware [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:47:42 np0005603435 nova_compute[239938]: 2026-01-31 04:47:42.974 239942 DEBUG oslo_concurrency.processutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:47:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:47:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3467123633' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.028 239942 DEBUG oslo_concurrency.processutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.058 239942 DEBUG nova.storage.rbd_utils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] rbd image 2de06a6e-707c-434b-980d-ab52c01abb9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.063 239942 DEBUG oslo_concurrency.processutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:47:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 118 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.7 MiB/s wr, 56 op/s
Jan 30 23:47:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:47:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2347756295' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.535 239942 DEBUG oslo_concurrency.processutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.564 239942 DEBUG nova.storage.rbd_utils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] rbd image 80f921cb-ec48-41f8-88b0-3ba2a51efd0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.568 239942 DEBUG oslo_concurrency.processutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:47:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:47:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2129499708' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.617 239942 DEBUG oslo_concurrency.processutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.619 239942 DEBUG nova.virt.libvirt.vif [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:47:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1430074972',display_name='tempest-VolumesExtendAttachedTest-instance-1430074972',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1430074972',id=5,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzPN2P3X8OOSzjbiS4D0CkZSzKSGgVBUZMk1xvOhsc7ycfoOzirzhWNOLqmqsMOlSnX/agcppGzCjsfDa+iMVhnTYHmcD/fg7WgCyqoyG/ORaEQfSpvUjcfbgpTfiszng==',key_name='tempest-keypair-1954170071',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2bb69332e8af48ee847370d546eaee1e',ramdisk_id='',reservation_id='r-g3u4u6f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-212133215',owner_user_name='tempest-VolumesExtendAttachedTest-212133215-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:47:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0b66a987b14d4c37aedbb2fe48fd1547',uuid=2de06a6e-707c-434b-980d-ab52c01abb9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "address": "fa:16:3e:74:66:d6", "network": {"id": "5c3579c7-dc9d-4cf7-9e43-1aa98a65254a", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-253417945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2bb69332e8af48ee847370d546eaee1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1498a6d-42", "ovs_interfaceid": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.620 239942 DEBUG nova.network.os_vif_util [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Converting VIF {"id": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "address": "fa:16:3e:74:66:d6", "network": {"id": "5c3579c7-dc9d-4cf7-9e43-1aa98a65254a", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-253417945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2bb69332e8af48ee847370d546eaee1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1498a6d-42", "ovs_interfaceid": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.621 239942 DEBUG nova.network.os_vif_util [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:66:d6,bridge_name='br-int',has_traffic_filtering=True,id=f1498a6d-42eb-444b-9b53-825529f5cb1c,network=Network(5c3579c7-dc9d-4cf7-9e43-1aa98a65254a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1498a6d-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.623 239942 DEBUG nova.objects.instance [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lazy-loading 'pci_devices' on Instance uuid 2de06a6e-707c-434b-980d-ab52c01abb9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.640 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  <uuid>2de06a6e-707c-434b-980d-ab52c01abb9e</uuid>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  <name>instance-00000005</name>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <nova:name>tempest-VolumesExtendAttachedTest-instance-1430074972</nova:name>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:47:42</nova:creationTime>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:        <nova:user uuid="0b66a987b14d4c37aedbb2fe48fd1547">tempest-VolumesExtendAttachedTest-212133215-project-member</nova:user>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:        <nova:project uuid="2bb69332e8af48ee847370d546eaee1e">tempest-VolumesExtendAttachedTest-212133215</nova:project>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <nova:root type="image" uuid="bf004ad8-fb70-4caa-9170-9f02e22d687d"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:        <nova:port uuid="f1498a6d-42eb-444b-9b53-825529f5cb1c">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <entry name="serial">2de06a6e-707c-434b-980d-ab52c01abb9e</entry>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <entry name="uuid">2de06a6e-707c-434b-980d-ab52c01abb9e</entry>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/2de06a6e-707c-434b-980d-ab52c01abb9e_disk">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/2de06a6e-707c-434b-980d-ab52c01abb9e_disk.config">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:74:66:d6"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <target dev="tapf1498a6d-42"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/2de06a6e-707c-434b-980d-ab52c01abb9e/console.log" append="off"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:47:43 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:47:43 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:47:43 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:47:43 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.641 239942 DEBUG nova.compute.manager [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Preparing to wait for external event network-vif-plugged-f1498a6d-42eb-444b-9b53-825529f5cb1c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.641 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Acquiring lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.642 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.642 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.643 239942 DEBUG nova.virt.libvirt.vif [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:47:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1430074972',display_name='tempest-VolumesExtendAttachedTest-instance-1430074972',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1430074972',id=5,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzPN2P3X8OOSzjbiS4D0CkZSzKSGgVBUZMk1xvOhsc7ycfoOzirzhWNOLqmqsMOlSnX/agcppGzCjsfDa+iMVhnTYHmcD/fg7WgCyqoyG/ORaEQfSpvUjcfbgpTfiszng==',key_name='tempest-keypair-1954170071',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2bb69332e8af48ee847370d546eaee1e',ramdisk_id='',reservation_id='r-g3u4u6f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-212133215',owner_user_name='tempest-VolumesExtendAttachedTest-212133215-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:47:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0b66a987b14d4c37aedbb2fe48fd1547',uuid=2de06a6e-707c-434b-980d-ab52c01abb9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "address": "fa:16:3e:74:66:d6", "network": {"id": "5c3579c7-dc9d-4cf7-9e43-1aa98a65254a", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-253417945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2bb69332e8af48ee847370d546eaee1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1498a6d-42", "ovs_interfaceid": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.644 239942 DEBUG nova.network.os_vif_util [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Converting VIF {"id": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "address": "fa:16:3e:74:66:d6", "network": {"id": "5c3579c7-dc9d-4cf7-9e43-1aa98a65254a", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-253417945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2bb69332e8af48ee847370d546eaee1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1498a6d-42", "ovs_interfaceid": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.645 239942 DEBUG nova.network.os_vif_util [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:66:d6,bridge_name='br-int',has_traffic_filtering=True,id=f1498a6d-42eb-444b-9b53-825529f5cb1c,network=Network(5c3579c7-dc9d-4cf7-9e43-1aa98a65254a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1498a6d-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.645 239942 DEBUG os_vif [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:66:d6,bridge_name='br-int',has_traffic_filtering=True,id=f1498a6d-42eb-444b-9b53-825529f5cb1c,network=Network(5c3579c7-dc9d-4cf7-9e43-1aa98a65254a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1498a6d-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.646 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.647 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.648 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.650 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.651 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1498a6d-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.652 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf1498a6d-42, col_values=(('external_ids', {'iface-id': 'f1498a6d-42eb-444b-9b53-825529f5cb1c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:74:66:d6', 'vm-uuid': '2de06a6e-707c-434b-980d-ab52c01abb9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.654 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:43 np0005603435 NetworkManager[49097]: <info>  [1769834863.6551] manager: (tapf1498a6d-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.657 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.660 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.661 239942 INFO os_vif [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:66:d6,bridge_name='br-int',has_traffic_filtering=True,id=f1498a6d-42eb-444b-9b53-825529f5cb1c,network=Network(5c3579c7-dc9d-4cf7-9e43-1aa98a65254a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1498a6d-42')#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.706 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.707 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.707 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] No VIF found with MAC fa:16:3e:74:66:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.707 239942 INFO nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Using config drive#033[00m
Jan 30 23:47:43 np0005603435 nova_compute[239938]: 2026-01-31 04:47:43.726 239942 DEBUG nova.storage.rbd_utils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] rbd image 2de06a6e-707c-434b-980d-ab52c01abb9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:47:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:47:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2040967728' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.102 239942 DEBUG oslo_concurrency.processutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.104 239942 DEBUG nova.virt.libvirt.vif [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:47:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-763377099',display_name='tempest-VolumesBackupsTest-instance-763377099',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-763377099',id=6,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBONDHwSZ9MkJyo9D2CF/S4KX9O4IxyXttW+K6l+2Zxa4Xv3Vjls90siP2Qj8A8dOzO8uS8EJ2U1JAWq2ETYB11Ins8/2bJogCYXemZjCXUombJMigKOSeOms1DNvDhJevg==',key_name='tempest-keypair-1798745008',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b8b11aff4b494f4eb1376cfe5754bac8',ramdisk_id='',reservation_id='r-o23rwagz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1503004541',owner_user_name='tempest-VolumesBackupsTest-1503004541-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:47:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f51271330a6d46498b473f0d2595c3ac',uuid=80f921cb-ec48-41f8-88b0-3ba2a51efd0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "address": "fa:16:3e:59:81:a2", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ab155d-7b", "ovs_interfaceid": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.105 239942 DEBUG nova.network.os_vif_util [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Converting VIF {"id": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "address": "fa:16:3e:59:81:a2", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ab155d-7b", "ovs_interfaceid": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.106 239942 DEBUG nova.network.os_vif_util [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:81:a2,bridge_name='br-int',has_traffic_filtering=True,id=21ab155d-7b14-4fa4-b3a0-113a0e6c6abb,network=Network(28e37664-8d81-4a45-8e12-f0b45b43b4cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ab155d-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.107 239942 DEBUG nova.objects.instance [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 80f921cb-ec48-41f8-88b0-3ba2a51efd0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.125 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  <uuid>80f921cb-ec48-41f8-88b0-3ba2a51efd0c</uuid>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  <name>instance-00000006</name>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <nova:name>tempest-VolumesBackupsTest-instance-763377099</nova:name>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:47:42</nova:creationTime>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:        <nova:user uuid="f51271330a6d46498b473f0d2595c3ac">tempest-VolumesBackupsTest-1503004541-project-member</nova:user>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:        <nova:project uuid="b8b11aff4b494f4eb1376cfe5754bac8">tempest-VolumesBackupsTest-1503004541</nova:project>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <nova:root type="image" uuid="bf004ad8-fb70-4caa-9170-9f02e22d687d"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:        <nova:port uuid="21ab155d-7b14-4fa4-b3a0-113a0e6c6abb">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <entry name="serial">80f921cb-ec48-41f8-88b0-3ba2a51efd0c</entry>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <entry name="uuid">80f921cb-ec48-41f8-88b0-3ba2a51efd0c</entry>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/80f921cb-ec48-41f8-88b0-3ba2a51efd0c_disk">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/80f921cb-ec48-41f8-88b0-3ba2a51efd0c_disk.config">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:59:81:a2"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <target dev="tap21ab155d-7b"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/80f921cb-ec48-41f8-88b0-3ba2a51efd0c/console.log" append="off"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:47:44 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:47:44 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:47:44 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:47:44 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.127 239942 DEBUG nova.compute.manager [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Preparing to wait for external event network-vif-plugged-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.128 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.129 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.130 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.131 239942 DEBUG nova.virt.libvirt.vif [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:47:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-763377099',display_name='tempest-VolumesBackupsTest-instance-763377099',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-763377099',id=6,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBONDHwSZ9MkJyo9D2CF/S4KX9O4IxyXttW+K6l+2Zxa4Xv3Vjls90siP2Qj8A8dOzO8uS8EJ2U1JAWq2ETYB11Ins8/2bJogCYXemZjCXUombJMigKOSeOms1DNvDhJevg==',key_name='tempest-keypair-1798745008',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b8b11aff4b494f4eb1376cfe5754bac8',ramdisk_id='',reservation_id='r-o23rwagz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1503004541',owner_user_name='tempest-VolumesBackupsTest-1503004541-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:47:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f51271330a6d46498b473f0d2595c3ac',uuid=80f921cb-ec48-41f8-88b0-3ba2a51efd0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "address": "fa:16:3e:59:81:a2", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ab155d-7b", "ovs_interfaceid": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.132 239942 DEBUG nova.network.os_vif_util [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Converting VIF {"id": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "address": "fa:16:3e:59:81:a2", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ab155d-7b", "ovs_interfaceid": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.134 239942 DEBUG nova.network.os_vif_util [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:81:a2,bridge_name='br-int',has_traffic_filtering=True,id=21ab155d-7b14-4fa4-b3a0-113a0e6c6abb,network=Network(28e37664-8d81-4a45-8e12-f0b45b43b4cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ab155d-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.135 239942 DEBUG os_vif [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:81:a2,bridge_name='br-int',has_traffic_filtering=True,id=21ab155d-7b14-4fa4-b3a0-113a0e6c6abb,network=Network(28e37664-8d81-4a45-8e12-f0b45b43b4cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ab155d-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.135 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.137 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.138 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.138 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.158 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.159 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21ab155d-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.160 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21ab155d-7b, col_values=(('external_ids', {'iface-id': '21ab155d-7b14-4fa4-b3a0-113a0e6c6abb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:59:81:a2', 'vm-uuid': '80f921cb-ec48-41f8-88b0-3ba2a51efd0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.162 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:47:44 np0005603435 NetworkManager[49097]: <info>  [1769834864.1630] manager: (tap21ab155d-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.165 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.167 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.169 239942 INFO os_vif [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:81:a2,bridge_name='br-int',has_traffic_filtering=True,id=21ab155d-7b14-4fa4-b3a0-113a0e6c6abb,network=Network(28e37664-8d81-4a45-8e12-f0b45b43b4cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ab155d-7b')
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.210 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.211 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.211 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] No VIF found with MAC fa:16:3e:59:81:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.212 239942 INFO nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Using config drive
Jan 30 23:47:44 np0005603435 nova_compute[239938]: 2026-01-31 04:47:44.241 239942 DEBUG nova.storage.rbd_utils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] rbd image 80f921cb-ec48-41f8-88b0-3ba2a51efd0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.213 239942 INFO nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Creating config drive at /var/lib/nova/instances/2de06a6e-707c-434b-980d-ab52c01abb9e/disk.config
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.221 239942 DEBUG oslo_concurrency.processutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2de06a6e-707c-434b-980d-ab52c01abb9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0diu89d6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:47:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 55 op/s
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.346 239942 DEBUG oslo_concurrency.processutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2de06a6e-707c-434b-980d-ab52c01abb9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0diu89d6" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.380 239942 DEBUG nova.storage.rbd_utils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] rbd image 2de06a6e-707c-434b-980d-ab52c01abb9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.384 239942 DEBUG oslo_concurrency.processutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2de06a6e-707c-434b-980d-ab52c01abb9e/disk.config 2de06a6e-707c-434b-980d-ab52c01abb9e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.523 239942 DEBUG oslo_concurrency.processutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2de06a6e-707c-434b-980d-ab52c01abb9e/disk.config 2de06a6e-707c-434b-980d-ab52c01abb9e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.524 239942 INFO nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Deleting local config drive /var/lib/nova/instances/2de06a6e-707c-434b-980d-ab52c01abb9e/disk.config because it was imported into RBD.
Jan 30 23:47:45 np0005603435 kernel: tapf1498a6d-42: entered promiscuous mode
Jan 30 23:47:45 np0005603435 NetworkManager[49097]: <info>  [1769834865.5751] manager: (tapf1498a6d-42): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Jan 30 23:47:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:47:45Z|00052|binding|INFO|Claiming lport f1498a6d-42eb-444b-9b53-825529f5cb1c for this chassis.
Jan 30 23:47:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:47:45Z|00053|binding|INFO|f1498a6d-42eb-444b-9b53-825529f5cb1c: Claiming fa:16:3e:74:66:d6 10.100.0.5
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.578 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.595 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:47:45 np0005603435 systemd-machined[208030]: New machine qemu-5-instance-00000005.
Jan 30 23:47:45 np0005603435 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Jan 30 23:47:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:47:45Z|00054|binding|INFO|Setting lport f1498a6d-42eb-444b-9b53-825529f5cb1c ovn-installed in OVS
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.628 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:47:45 np0005603435 systemd-udevd[251225]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:47:45 np0005603435 NetworkManager[49097]: <info>  [1769834865.6518] device (tapf1498a6d-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:47:45 np0005603435 NetworkManager[49097]: <info>  [1769834865.6536] device (tapf1498a6d-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:47:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:47:45Z|00055|binding|INFO|Setting lport f1498a6d-42eb-444b-9b53-825529f5cb1c up in Southbound
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.687 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:66:d6 10.100.0.5'], port_security=['fa:16:3e:74:66:d6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2de06a6e-707c-434b-980d-ab52c01abb9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2bb69332e8af48ee847370d546eaee1e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd1f874e-55a9-4680-a797-e091d433d6bf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9ae6db6-c1c3-4fcb-b05f-8f86ed2cfe9a, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=f1498a6d-42eb-444b-9b53-825529f5cb1c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.689 156017 INFO neutron.agent.ovn.metadata.agent [-] Port f1498a6d-42eb-444b-9b53-825529f5cb1c in datapath 5c3579c7-dc9d-4cf7-9e43-1aa98a65254a bound to our chassis#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.692 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c3579c7-dc9d-4cf7-9e43-1aa98a65254a#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.703 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[38e4045b-9864-49ab-9d0d-c702606bb627]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.704 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c3579c7-d1 in ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.706 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c3579c7-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.706 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[5361f43b-65be-45a1-acdf-b7db528132b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.707 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[21f51aac-a956-4af5-b531-5c61fada5de1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.721 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3a9bb0-5c01-4740-b8bb-8846a163fe25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.734 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[dc9824a5-8bd2-4963-8e75-fdf34fac116e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.756 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2e17ee-946e-4544-b67c-75905a4b84b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:45 np0005603435 NetworkManager[49097]: <info>  [1769834865.7631] manager: (tap5c3579c7-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.764 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[700e7d7d-3d37-4300-8adb-afd8803a06c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.771 239942 INFO nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Creating config drive at /var/lib/nova/instances/80f921cb-ec48-41f8-88b0-3ba2a51efd0c/disk.config#033[00m
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.793 239942 DEBUG oslo_concurrency.processutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/80f921cb-ec48-41f8-88b0-3ba2a51efd0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0xbn_gx8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.805 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8e97a1-2f11-4076-857c-6d03d6df40db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.815 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[3979764d-d265-48c2-a039-306c7bc3c443]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:45 np0005603435 NetworkManager[49097]: <info>  [1769834865.8368] device (tap5c3579c7-d0): carrier: link connected
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.842 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[89275d2a-aed0-49f5-8d95-a0c3300ea5da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.860 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[020b7011-1eb6-4f43-ae1b-5ae7164736d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c3579c7-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:7e:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 392023, 'reachable_time': 15717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251264, 'error': None, 'target': 'ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.875 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f8d449f4-6317-4ed4-9bff-95a9f928dc87]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:7e4e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 392023, 'tstamp': 392023}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251265, 'error': None, 'target': 'ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.895 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2f7d9ef5-b34c-40da-920d-8c293f711179]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c3579c7-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:7e:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 392023, 'reachable_time': 15717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251266, 'error': None, 'target': 'ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.922 239942 DEBUG oslo_concurrency.processutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/80f921cb-ec48-41f8-88b0-3ba2a51efd0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0xbn_gx8" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.928 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[cf27652e-3037-453a-b73e-5f58f2fec627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.959 239942 DEBUG nova.storage.rbd_utils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] rbd image 80f921cb-ec48-41f8-88b0-3ba2a51efd0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.964 239942 DEBUG oslo_concurrency.processutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/80f921cb-ec48-41f8-88b0-3ba2a51efd0c/disk.config 80f921cb-ec48-41f8-88b0-3ba2a51efd0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.981 239942 DEBUG nova.network.neutron [req-7934a404-2a80-430a-a8c4-bcd9e2448b92 req-7c72ce9a-466f-4262-852d-27715352892d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Updated VIF entry in instance network info cache for port f1498a6d-42eb-444b-9b53-825529f5cb1c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.983 239942 DEBUG nova.network.neutron [req-7934a404-2a80-430a-a8c4-bcd9e2448b92 req-7c72ce9a-466f-4262-852d-27715352892d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Updating instance_info_cache with network_info: [{"id": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "address": "fa:16:3e:74:66:d6", "network": {"id": "5c3579c7-dc9d-4cf7-9e43-1aa98a65254a", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-253417945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2bb69332e8af48ee847370d546eaee1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1498a6d-42", "ovs_interfaceid": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.987 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[462b5928-ea0a-4ae8-b0b8-06f7a223dc38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.989 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c3579c7-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.990 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.991 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c3579c7-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:47:45 np0005603435 kernel: tap5c3579c7-d0: entered promiscuous mode
Jan 30 23:47:45 np0005603435 NetworkManager[49097]: <info>  [1769834865.9945] manager: (tap5c3579c7-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 30 23:47:45 np0005603435 nova_compute[239938]: 2026-01-31 04:47:45.993 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:45.999 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c3579c7-d0, col_values=(('external_ids', {'iface-id': '41e5b095-6a71-4bf4-9ca9-06fa02387e1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:47:46 np0005603435 ovn_controller[145670]: 2026-01-31T04:47:46Z|00056|binding|INFO|Releasing lport 41e5b095-6a71-4bf4-9ca9-06fa02387e1d from this chassis (sb_readonly=0)
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.001 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.013 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.014 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c3579c7-dc9d-4cf7-9e43-1aa98a65254a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c3579c7-dc9d-4cf7-9e43-1aa98a65254a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.015 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[843d0b87-d5d7-43b3-8bac-60fec68c9680]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.016 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/5c3579c7-dc9d-4cf7-9e43-1aa98a65254a.pid.haproxy
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 5c3579c7-dc9d-4cf7-9e43-1aa98a65254a
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.018 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a', 'env', 'PROCESS_TAG=haproxy-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c3579c7-dc9d-4cf7-9e43-1aa98a65254a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.062 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834866.0619755, 2de06a6e-707c-434b-980d-ab52c01abb9e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.063 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] VM Started (Lifecycle Event)#033[00m
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.090 239942 DEBUG oslo_concurrency.lockutils [req-7934a404-2a80-430a-a8c4-bcd9e2448b92 req-7c72ce9a-466f-4262-852d-27715352892d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-2de06a6e-707c-434b-980d-ab52c01abb9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.092 239942 DEBUG nova.network.neutron [req-6fff5872-97c6-4941-8bce-cd9b13c67fe0 req-2038f3ea-59ee-4fd2-b5f8-559b727893bf c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Updated VIF entry in instance network info cache for port 21ab155d-7b14-4fa4-b3a0-113a0e6c6abb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.093 239942 DEBUG nova.network.neutron [req-6fff5872-97c6-4941-8bce-cd9b13c67fe0 req-2038f3ea-59ee-4fd2-b5f8-559b727893bf c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Updating instance_info_cache with network_info: [{"id": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "address": "fa:16:3e:59:81:a2", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ab155d-7b", "ovs_interfaceid": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.101 239942 DEBUG oslo_concurrency.processutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/80f921cb-ec48-41f8-88b0-3ba2a51efd0c/disk.config 80f921cb-ec48-41f8-88b0-3ba2a51efd0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.101 239942 INFO nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Deleting local config drive /var/lib/nova/instances/80f921cb-ec48-41f8-88b0-3ba2a51efd0c/disk.config because it was imported into RBD.
Jan 30 23:47:46 np0005603435 kernel: tap21ab155d-7b: entered promiscuous mode
Jan 30 23:47:46 np0005603435 NetworkManager[49097]: <info>  [1769834866.1418] manager: (tap21ab155d-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Jan 30 23:47:46 np0005603435 systemd-udevd[251252]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.144 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:47:46 np0005603435 ovn_controller[145670]: 2026-01-31T04:47:46Z|00057|binding|INFO|Claiming lport 21ab155d-7b14-4fa4-b3a0-113a0e6c6abb for this chassis.
Jan 30 23:47:46 np0005603435 ovn_controller[145670]: 2026-01-31T04:47:46Z|00058|binding|INFO|21ab155d-7b14-4fa4-b3a0-113a0e6c6abb: Claiming fa:16:3e:59:81:a2 10.100.0.8
Jan 30 23:47:46 np0005603435 NetworkManager[49097]: <info>  [1769834866.1580] device (tap21ab155d-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:47:46 np0005603435 NetworkManager[49097]: <info>  [1769834866.1588] device (tap21ab155d-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.164 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:81:a2 10.100.0.8'], port_security=['fa:16:3e:59:81:a2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '80f921cb-ec48-41f8-88b0-3ba2a51efd0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28e37664-8d81-4a45-8e12-f0b45b43b4cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b8b11aff4b494f4eb1376cfe5754bac8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2702eba-8d5c-40d1-af57-1b1c2fa6cbe3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c4453b0-f040-4fe4-88f1-8a0ec8ff54c7, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=21ab155d-7b14-4fa4-b3a0-113a0e6c6abb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:47:46 np0005603435 systemd-machined[208030]: New machine qemu-6-instance-00000006.
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.173 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.176 239942 DEBUG oslo_concurrency.lockutils [req-6fff5872-97c6-4941-8bce-cd9b13c67fe0 req-2038f3ea-59ee-4fd2-b5f8-559b727893bf c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-80f921cb-ec48-41f8-88b0-3ba2a51efd0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.182 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834866.0621662, 2de06a6e-707c-434b-980d-ab52c01abb9e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.182 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] VM Paused (Lifecycle Event)
Jan 30 23:47:46 np0005603435 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.195 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:47:46 np0005603435 ovn_controller[145670]: 2026-01-31T04:47:46Z|00059|binding|INFO|Setting lport 21ab155d-7b14-4fa4-b3a0-113a0e6c6abb ovn-installed in OVS
Jan 30 23:47:46 np0005603435 ovn_controller[145670]: 2026-01-31T04:47:46Z|00060|binding|INFO|Setting lport 21ab155d-7b14-4fa4-b3a0-113a0e6c6abb up in Southbound
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.200 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.219 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.224 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.243 239942 DEBUG nova.compute.manager [req-21f679ff-a65b-472d-a640-0b1b64b934fe req-fbf35a8b-d1a9-4de9-8c48-c94683ad8a4c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Received event network-vif-plugged-f1498a6d-42eb-444b-9b53-825529f5cb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.244 239942 DEBUG oslo_concurrency.lockutils [req-21f679ff-a65b-472d-a640-0b1b64b934fe req-fbf35a8b-d1a9-4de9-8c48-c94683ad8a4c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.245 239942 DEBUG oslo_concurrency.lockutils [req-21f679ff-a65b-472d-a640-0b1b64b934fe req-fbf35a8b-d1a9-4de9-8c48-c94683ad8a4c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.245 239942 DEBUG oslo_concurrency.lockutils [req-21f679ff-a65b-472d-a640-0b1b64b934fe req-fbf35a8b-d1a9-4de9-8c48-c94683ad8a4c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.246 239942 DEBUG nova.compute.manager [req-21f679ff-a65b-472d-a640-0b1b64b934fe req-fbf35a8b-d1a9-4de9-8c48-c94683ad8a4c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Processing event network-vif-plugged-f1498a6d-42eb-444b-9b53-825529f5cb1c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.247 239942 DEBUG nova.compute.manager [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.251 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.257 239942 INFO nova.virt.libvirt.driver [-] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Instance spawned successfully.
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.258 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.272 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.273 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834866.2504601, 2de06a6e-707c-434b-980d-ab52c01abb9e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.273 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] VM Resumed (Lifecycle Event)
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.329 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.337 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.341 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.341 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.342 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.342 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.343 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.343 239942 DEBUG nova.virt.libvirt.driver [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.400 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.400 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.416 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 30 23:47:46 np0005603435 podman[251399]: 2026-01-31 04:47:46.431050926 +0000 UTC m=+0.074107590 container create ba3025b8f2548fa7319ac8a0963a7f8fbf42318bf2c81c8d431fe3a6c1fe2106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 30 23:47:46 np0005603435 systemd[1]: Started libpod-conmon-ba3025b8f2548fa7319ac8a0963a7f8fbf42318bf2c81c8d431fe3a6c1fe2106.scope.
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.486 239942 INFO nova.compute.manager [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Took 7.38 seconds to spawn the instance on the hypervisor.
Jan 30 23:47:46 np0005603435 podman[251399]: 2026-01-31 04:47:46.394452317 +0000 UTC m=+0.037509031 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.487 239942 DEBUG nova.compute.manager [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:47:46 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:47:46 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fb8a6fae0118f9ef751265f619bb545d72febd9c22cbc9f8fd0e244ebce819/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:47:46 np0005603435 podman[251399]: 2026-01-31 04:47:46.514106886 +0000 UTC m=+0.157163510 container init ba3025b8f2548fa7319ac8a0963a7f8fbf42318bf2c81c8d431fe3a6c1fe2106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 30 23:47:46 np0005603435 podman[251399]: 2026-01-31 04:47:46.520405496 +0000 UTC m=+0.163462160 container start ba3025b8f2548fa7319ac8a0963a7f8fbf42318bf2c81c8d431fe3a6c1fe2106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.551 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834866.5508628, 80f921cb-ec48-41f8-88b0-3ba2a51efd0c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.551 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] VM Started (Lifecycle Event)
Jan 30 23:47:46 np0005603435 neutron-haproxy-ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a[251455]: [NOTICE]   (251460) : New worker (251462) forked
Jan 30 23:47:46 np0005603435 neutron-haproxy-ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a[251455]: [NOTICE]   (251460) : Loading success.
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.567 239942 INFO nova.compute.manager [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Took 8.77 seconds to build instance.
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.576 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.579 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834866.5511687, 80f921cb-ec48-41f8-88b0-3ba2a51efd0c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.579 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] VM Paused (Lifecycle Event)
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.583 239942 DEBUG oslo_concurrency.lockutils [None req-d8b460b6-43eb-4f48-ac36-325adcc2966c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.588 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 21ab155d-7b14-4fa4-b3a0-113a0e6c6abb in datapath 28e37664-8d81-4a45-8e12-f0b45b43b4cf unbound from our chassis
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.590 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28e37664-8d81-4a45-8e12-f0b45b43b4cf
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.593 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.597 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.603 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[e12c6afd-fab6-4c60-a092-7199c77d24e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.604 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap28e37664-81 in ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.606 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap28e37664-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.606 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[563e6d1f-04e3-4465-872a-bc5f1133c382]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.608 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[15d4c1ad-b2f8-4538-a6d7-8bdc584bef5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.619 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.619 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[dcbda389-35fe-4510-98b7-cec8bdd37f08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:47:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.642 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[befbade3-fdd6-4209-94fc-d8a145d391f2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.672 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[35e3ac5e-2d2e-46ef-b602-5db01177150d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.676 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[637e332f-e518-4665-bc87-3f5cd91cc5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:47:46 np0005603435 NetworkManager[49097]: <info>  [1769834866.6790] manager: (tap28e37664-80): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.710 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[24bb373e-b07c-4064-9170-5a3af10d4582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.715 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[a0fd478d-e172-4a44-a98b-fa4e1e462b28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:47:46 np0005603435 NetworkManager[49097]: <info>  [1769834866.7341] device (tap28e37664-80): carrier: link connected
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.737 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[99826379-ef61-481f-9cd5-bd0127d3a470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.755 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[d856e635-f5c9-477c-a813-8871d79273c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28e37664-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:46:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 392113, 'reachable_time': 17055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251482, 'error': None, 'target': 'ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.768 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[308f75d4-86b0-419b-8e0a-5e4cc5d7e7e2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:46c8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 392113, 'tstamp': 392113}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251483, 'error': None, 'target': 'ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.786 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4754efa7-e47a-41db-96f6-f2f939a9aafe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28e37664-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:46:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 392113, 'reachable_time': 17055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251484, 'error': None, 'target': 'ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.813 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[cb37f905-ca3e-4401-826d-73e3e4e80c1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.851 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9be827c5-5bbc-4d0c-ba36-38cdc6fdef4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.855 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28e37664-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.855 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.856 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28e37664-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:47:46 np0005603435 NetworkManager[49097]: <info>  [1769834866.8587] manager: (tap28e37664-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 30 23:47:46 np0005603435 kernel: tap28e37664-80: entered promiscuous mode
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.860 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.860 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28e37664-80, col_values=(('external_ids', {'iface-id': '17a6f891-9bce-4b37-a6eb-eb44f21f3bd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:47:46 np0005603435 ovn_controller[145670]: 2026-01-31T04:47:46Z|00061|binding|INFO|Releasing lport 17a6f891-9bce-4b37-a6eb-eb44f21f3bd7 from this chassis (sb_readonly=0)
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.864 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/28e37664-8d81-4a45-8e12-f0b45b43b4cf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/28e37664-8d81-4a45-8e12-f0b45b43b4cf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.865 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[042e2531-bb7e-4249-bba1-fa9d23d82d08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.866 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-28e37664-8d81-4a45-8e12-f0b45b43b4cf
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/28e37664-8d81-4a45-8e12-f0b45b43b4cf.pid.haproxy
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 28e37664-8d81-4a45-8e12-f0b45b43b4cf
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:47:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:46.866 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf', 'env', 'PROCESS_TAG=haproxy-28e37664-8d81-4a45-8e12-f0b45b43b4cf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/28e37664-8d81-4a45-8e12-f0b45b43b4cf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:47:46 np0005603435 nova_compute[239938]: 2026-01-31 04:47:46.870 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:47 np0005603435 podman[251516]: 2026-01-31 04:47:47.227694629 +0000 UTC m=+0.047408426 container create 5e40916e1e4243d348079b90562739e79937d02c28e022511036df7ac120ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:47:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 63 op/s
Jan 30 23:47:47 np0005603435 systemd[1]: Started libpod-conmon-5e40916e1e4243d348079b90562739e79937d02c28e022511036df7ac120ad12.scope.
Jan 30 23:47:47 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:47:47 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03338233df5ece9ce7025b51c69f5f38fa0cb788aa135d4bc252ee2890b81fe3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:47:47 np0005603435 podman[251516]: 2026-01-31 04:47:47.294201047 +0000 UTC m=+0.113914874 container init 5e40916e1e4243d348079b90562739e79937d02c28e022511036df7ac120ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:47:47 np0005603435 podman[251516]: 2026-01-31 04:47:47.299215496 +0000 UTC m=+0.118929293 container start 5e40916e1e4243d348079b90562739e79937d02c28e022511036df7ac120ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 30 23:47:47 np0005603435 podman[251516]: 2026-01-31 04:47:47.201971519 +0000 UTC m=+0.021685336 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:47:47 np0005603435 neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf[251539]: [NOTICE]   (251570) : New worker (251577) forked
Jan 30 23:47:47 np0005603435 neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf[251539]: [NOTICE]   (251570) : Loading success.
Jan 30 23:47:47 np0005603435 podman[251530]: 2026-01-31 04:47:47.329437923 +0000 UTC m=+0.071990689 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 30 23:47:47 np0005603435 podman[251533]: 2026-01-31 04:47:47.335156329 +0000 UTC m=+0.074486439 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 30 23:47:47 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:47.354 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.422 239942 DEBUG nova.compute.manager [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Received event network-vif-plugged-f1498a6d-42eb-444b-9b53-825529f5cb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.422 239942 DEBUG oslo_concurrency.lockutils [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.423 239942 DEBUG oslo_concurrency.lockutils [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.423 239942 DEBUG oslo_concurrency.lockutils [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.423 239942 DEBUG nova.compute.manager [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] No waiting events found dispatching network-vif-plugged-f1498a6d-42eb-444b-9b53-825529f5cb1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.423 239942 WARNING nova.compute.manager [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Received unexpected event network-vif-plugged-f1498a6d-42eb-444b-9b53-825529f5cb1c for instance with vm_state active and task_state None.#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.424 239942 DEBUG nova.compute.manager [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Received event network-vif-plugged-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.424 239942 DEBUG oslo_concurrency.lockutils [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.424 239942 DEBUG oslo_concurrency.lockutils [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.424 239942 DEBUG oslo_concurrency.lockutils [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.424 239942 DEBUG nova.compute.manager [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Processing event network-vif-plugged-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.425 239942 DEBUG nova.compute.manager [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Received event network-vif-plugged-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.425 239942 DEBUG oslo_concurrency.lockutils [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.425 239942 DEBUG oslo_concurrency.lockutils [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.425 239942 DEBUG oslo_concurrency.lockutils [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.425 239942 DEBUG nova.compute.manager [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] No waiting events found dispatching network-vif-plugged-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.426 239942 WARNING nova.compute.manager [req-155cb381-2bb3-4b28-ba21-9700cd95cc85 req-eb8264b0-471f-4269-a789-1ae48f78206b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Received unexpected event network-vif-plugged-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb for instance with vm_state building and task_state spawning.#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.426 239942 DEBUG nova.compute.manager [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.442 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834868.4301045, 80f921cb-ec48-41f8-88b0-3ba2a51efd0c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.444 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.449 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.453 239942 INFO nova.virt.libvirt.driver [-] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Instance spawned successfully.#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.453 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.467 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.476 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.478 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.479 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.479 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.479 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.480 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.480 239942 DEBUG nova.virt.libvirt.driver [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:47:48 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:47:48 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 5026 writes, 22K keys, 5026 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
Cumulative WAL: 5026 writes, 5026 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1622 writes, 7289 keys, 1622 commit groups, 1.0 writes per commit group, ingest: 10.07 MB, 0.02 MB/s
Interval WAL: 1622 writes, 1622 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     49.9      0.49              0.08        12    0.041       0      0       0.0       0.0
  L6      1/0    7.25 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    103.8     85.5      0.94              0.24        11    0.085     49K   5814       0.0       0.0
 Sum      1/0    7.25 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     68.2     73.3      1.43              0.32        23    0.062     49K   5814       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.6     48.0     48.0      0.96              0.14        10    0.096     24K   2613       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    103.8     85.5      0.94              0.24        11    0.085     49K   5814       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     52.7      0.46              0.08        11    0.042       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.1      0.03              0.00         1    0.027       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.1 total, 600.0 interval
Flush(GB): cumulative 0.024, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.10 GB read, 0.05 MB/s read, 1.4 seconds
Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5573585118d0#2 capacity: 304.00 MB usage: 9.37 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(563,8.96 MB,2.94721%) FilterBlock(24,144.80 KB,0.0465142%) IndexBlock(24,275.95 KB,0.0886465%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.507 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.536 239942 INFO nova.compute.manager [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Took 8.60 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.539 239942 DEBUG nova.compute.manager [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.614 239942 INFO nova.compute.manager [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Took 10.81 seconds to build instance.#033[00m
Jan 30 23:47:48 np0005603435 nova_compute[239938]: 2026-01-31 04:47:48.630 239942 DEBUG oslo_concurrency.lockutils [None req-2b183ff6-7e6d-4a99-b00b-3f894636385c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:47:49 np0005603435 nova_compute[239938]: 2026-01-31 04:47:49.174 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 648 KiB/s rd, 3.6 MiB/s wr, 91 op/s
Jan 30 23:47:49 np0005603435 NetworkManager[49097]: <info>  [1769834869.3617] manager: (patch-provnet-60fd0649-1231-4daa-859b-756d523d6d78-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/41)
Jan 30 23:47:49 np0005603435 NetworkManager[49097]: <info>  [1769834869.3622] device (patch-provnet-60fd0649-1231-4daa-859b-756d523d6d78-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 23:47:49 np0005603435 NetworkManager[49097]: <warn>  [1769834869.3623] device (patch-provnet-60fd0649-1231-4daa-859b-756d523d6d78-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 30 23:47:49 np0005603435 NetworkManager[49097]: <info>  [1769834869.3629] manager: (patch-br-int-to-provnet-60fd0649-1231-4daa-859b-756d523d6d78): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/42)
Jan 30 23:47:49 np0005603435 NetworkManager[49097]: <info>  [1769834869.3632] device (patch-br-int-to-provnet-60fd0649-1231-4daa-859b-756d523d6d78)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 30 23:47:49 np0005603435 NetworkManager[49097]: <warn>  [1769834869.3633] device (patch-br-int-to-provnet-60fd0649-1231-4daa-859b-756d523d6d78)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 30 23:47:49 np0005603435 NetworkManager[49097]: <info>  [1769834869.3639] manager: (patch-provnet-60fd0649-1231-4daa-859b-756d523d6d78-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Jan 30 23:47:49 np0005603435 NetworkManager[49097]: <info>  [1769834869.3644] manager: (patch-br-int-to-provnet-60fd0649-1231-4daa-859b-756d523d6d78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Jan 30 23:47:49 np0005603435 NetworkManager[49097]: <info>  [1769834869.3648] device (patch-provnet-60fd0649-1231-4daa-859b-756d523d6d78-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 30 23:47:49 np0005603435 NetworkManager[49097]: <info>  [1769834869.3651] device (patch-br-int-to-provnet-60fd0649-1231-4daa-859b-756d523d6d78)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 30 23:47:49 np0005603435 nova_compute[239938]: 2026-01-31 04:47:49.365 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:49 np0005603435 nova_compute[239938]: 2026-01-31 04:47:49.440 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:49 np0005603435 ovn_controller[145670]: 2026-01-31T04:47:49Z|00062|binding|INFO|Releasing lport 41e5b095-6a71-4bf4-9ca9-06fa02387e1d from this chassis (sb_readonly=0)
Jan 30 23:47:49 np0005603435 ovn_controller[145670]: 2026-01-31T04:47:49Z|00063|binding|INFO|Releasing lport 17a6f891-9bce-4b37-a6eb-eb44f21f3bd7 from this chassis (sb_readonly=0)
Jan 30 23:47:49 np0005603435 nova_compute[239938]: 2026-01-31 04:47:49.465 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:47:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:47:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:47:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:47:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:47:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:47:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:47:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:47:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:47:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:47:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:47:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:47:50 np0005603435 podman[251734]: 2026-01-31 04:47:50.531161914 +0000 UTC m=+0.026660684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:47:50 np0005603435 podman[251734]: 2026-01-31 04:47:50.631556426 +0000 UTC m=+0.127055216 container create 8b9837af1285a4f673d1b67238d770279b7fbe898a0ba9e5294326b37eabf594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 30 23:47:50 np0005603435 systemd[1]: Started libpod-conmon-8b9837af1285a4f673d1b67238d770279b7fbe898a0ba9e5294326b37eabf594.scope.
Jan 30 23:47:50 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:47:50 np0005603435 nova_compute[239938]: 2026-01-31 04:47:50.810 239942 DEBUG nova.compute.manager [req-d9a6c324-177a-4303-b732-893f63452ca5 req-3873ff96-fc0a-4e34-85f3-0d4325c30365 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Received event network-changed-f1498a6d-42eb-444b-9b53-825529f5cb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:47:50 np0005603435 podman[251734]: 2026-01-31 04:47:50.811864434 +0000 UTC m=+0.307363214 container init 8b9837af1285a4f673d1b67238d770279b7fbe898a0ba9e5294326b37eabf594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 30 23:47:50 np0005603435 nova_compute[239938]: 2026-01-31 04:47:50.811 239942 DEBUG nova.compute.manager [req-d9a6c324-177a-4303-b732-893f63452ca5 req-3873ff96-fc0a-4e34-85f3-0d4325c30365 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Refreshing instance network info cache due to event network-changed-f1498a6d-42eb-444b-9b53-825529f5cb1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:47:50 np0005603435 nova_compute[239938]: 2026-01-31 04:47:50.813 239942 DEBUG oslo_concurrency.lockutils [req-d9a6c324-177a-4303-b732-893f63452ca5 req-3873ff96-fc0a-4e34-85f3-0d4325c30365 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-2de06a6e-707c-434b-980d-ab52c01abb9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:47:50 np0005603435 nova_compute[239938]: 2026-01-31 04:47:50.813 239942 DEBUG oslo_concurrency.lockutils [req-d9a6c324-177a-4303-b732-893f63452ca5 req-3873ff96-fc0a-4e34-85f3-0d4325c30365 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-2de06a6e-707c-434b-980d-ab52c01abb9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:47:50 np0005603435 nova_compute[239938]: 2026-01-31 04:47:50.813 239942 DEBUG nova.network.neutron [req-d9a6c324-177a-4303-b732-893f63452ca5 req-3873ff96-fc0a-4e34-85f3-0d4325c30365 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Refreshing network info cache for port f1498a6d-42eb-444b-9b53-825529f5cb1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:47:50 np0005603435 podman[251734]: 2026-01-31 04:47:50.820927369 +0000 UTC m=+0.316426119 container start 8b9837af1285a4f673d1b67238d770279b7fbe898a0ba9e5294326b37eabf594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:47:50 np0005603435 modest_burnell[251751]: 167 167
Jan 30 23:47:50 np0005603435 systemd[1]: libpod-8b9837af1285a4f673d1b67238d770279b7fbe898a0ba9e5294326b37eabf594.scope: Deactivated successfully.
Jan 30 23:47:50 np0005603435 conmon[251751]: conmon 8b9837af1285a4f673d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b9837af1285a4f673d1b67238d770279b7fbe898a0ba9e5294326b37eabf594.scope/container/memory.events
Jan 30 23:47:50 np0005603435 podman[251734]: 2026-01-31 04:47:50.971704837 +0000 UTC m=+0.467203627 container attach 8b9837af1285a4f673d1b67238d770279b7fbe898a0ba9e5294326b37eabf594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_burnell, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:47:50 np0005603435 podman[251734]: 2026-01-31 04:47:50.972895775 +0000 UTC m=+0.468394555 container died 8b9837af1285a4f673d1b67238d770279b7fbe898a0ba9e5294326b37eabf594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:47:51 np0005603435 nova_compute[239938]: 2026-01-31 04:47:51.002 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:51 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:47:51 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:47:51 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:47:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Jan 30 23:47:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:47:51 np0005603435 systemd[1]: var-lib-containers-storage-overlay-ceed8f1db57e01e2b98a40542999c33930b4aa39e6172a8b3bb0589e5640a7fa-merged.mount: Deactivated successfully.
Jan 30 23:47:51 np0005603435 podman[251734]: 2026-01-31 04:47:51.871564088 +0000 UTC m=+1.367062848 container remove 8b9837af1285a4f673d1b67238d770279b7fbe898a0ba9e5294326b37eabf594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Jan 30 23:47:51 np0005603435 systemd[1]: libpod-conmon-8b9837af1285a4f673d1b67238d770279b7fbe898a0ba9e5294326b37eabf594.scope: Deactivated successfully.
Jan 30 23:47:52 np0005603435 podman[251775]: 2026-01-31 04:47:52.027265163 +0000 UTC m=+0.018152492 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:47:52 np0005603435 podman[251775]: 2026-01-31 04:47:52.190348482 +0000 UTC m=+0.181235791 container create 4ad01ae46585ef4f9abd41a1d32966c513c2503c471915c8d8ecd0a9e4072a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 30 23:47:52 np0005603435 systemd[1]: Started libpod-conmon-4ad01ae46585ef4f9abd41a1d32966c513c2503c471915c8d8ecd0a9e4072a18.scope.
Jan 30 23:47:52 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:47:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9810db94eec8de3eb567b6ea8eceedd715938bd9225e2f3cef4f0b1aadc5bea8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:47:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9810db94eec8de3eb567b6ea8eceedd715938bd9225e2f3cef4f0b1aadc5bea8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:47:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9810db94eec8de3eb567b6ea8eceedd715938bd9225e2f3cef4f0b1aadc5bea8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:47:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9810db94eec8de3eb567b6ea8eceedd715938bd9225e2f3cef4f0b1aadc5bea8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:47:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9810db94eec8de3eb567b6ea8eceedd715938bd9225e2f3cef4f0b1aadc5bea8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:47:52 np0005603435 podman[251775]: 2026-01-31 04:47:52.416656202 +0000 UTC m=+0.407543531 container init 4ad01ae46585ef4f9abd41a1d32966c513c2503c471915c8d8ecd0a9e4072a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 30 23:47:52 np0005603435 podman[251775]: 2026-01-31 04:47:52.422355098 +0000 UTC m=+0.413242397 container start 4ad01ae46585ef4f9abd41a1d32966c513c2503c471915c8d8ecd0a9e4072a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 30 23:47:52 np0005603435 podman[251775]: 2026-01-31 04:47:52.453400344 +0000 UTC m=+0.444287683 container attach 4ad01ae46585ef4f9abd41a1d32966c513c2503c471915c8d8ecd0a9e4072a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 30 23:47:52 np0005603435 romantic_booth[251792]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:47:52 np0005603435 romantic_booth[251792]: --> All data devices are unavailable
Jan 30 23:47:52 np0005603435 systemd[1]: libpod-4ad01ae46585ef4f9abd41a1d32966c513c2503c471915c8d8ecd0a9e4072a18.scope: Deactivated successfully.
Jan 30 23:47:52 np0005603435 podman[251775]: 2026-01-31 04:47:52.856783626 +0000 UTC m=+0.847670925 container died 4ad01ae46585ef4f9abd41a1d32966c513c2503c471915c8d8ecd0a9e4072a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Jan 30 23:47:52 np0005603435 systemd[1]: var-lib-containers-storage-overlay-9810db94eec8de3eb567b6ea8eceedd715938bd9225e2f3cef4f0b1aadc5bea8-merged.mount: Deactivated successfully.
Jan 30 23:47:53 np0005603435 nova_compute[239938]: 2026-01-31 04:47:53.003 239942 DEBUG nova.compute.manager [req-652f006c-9647-4cf4-84ba-57e37999f0d9 req-2af00af6-c89b-4d9a-9da6-35eeda71818a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Received event network-changed-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:47:53 np0005603435 nova_compute[239938]: 2026-01-31 04:47:53.008 239942 DEBUG nova.compute.manager [req-652f006c-9647-4cf4-84ba-57e37999f0d9 req-2af00af6-c89b-4d9a-9da6-35eeda71818a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Refreshing instance network info cache due to event network-changed-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:47:53 np0005603435 nova_compute[239938]: 2026-01-31 04:47:53.009 239942 DEBUG oslo_concurrency.lockutils [req-652f006c-9647-4cf4-84ba-57e37999f0d9 req-2af00af6-c89b-4d9a-9da6-35eeda71818a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-80f921cb-ec48-41f8-88b0-3ba2a51efd0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:47:53 np0005603435 nova_compute[239938]: 2026-01-31 04:47:53.010 239942 DEBUG oslo_concurrency.lockutils [req-652f006c-9647-4cf4-84ba-57e37999f0d9 req-2af00af6-c89b-4d9a-9da6-35eeda71818a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-80f921cb-ec48-41f8-88b0-3ba2a51efd0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:47:53 np0005603435 nova_compute[239938]: 2026-01-31 04:47:53.010 239942 DEBUG nova.network.neutron [req-652f006c-9647-4cf4-84ba-57e37999f0d9 req-2af00af6-c89b-4d9a-9da6-35eeda71818a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Refreshing network info cache for port 21ab155d-7b14-4fa4-b3a0-113a0e6c6abb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:47:53 np0005603435 podman[251775]: 2026-01-31 04:47:53.154544141 +0000 UTC m=+1.145431450 container remove 4ad01ae46585ef4f9abd41a1d32966c513c2503c471915c8d8ecd0a9e4072a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:47:53 np0005603435 systemd[1]: libpod-conmon-4ad01ae46585ef4f9abd41a1d32966c513c2503c471915c8d8ecd0a9e4072a18.scope: Deactivated successfully.
Jan 30 23:47:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.2 MiB/s wr, 189 op/s
Jan 30 23:47:53 np0005603435 nova_compute[239938]: 2026-01-31 04:47:53.383 239942 DEBUG nova.network.neutron [req-d9a6c324-177a-4303-b732-893f63452ca5 req-3873ff96-fc0a-4e34-85f3-0d4325c30365 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Updated VIF entry in instance network info cache for port f1498a6d-42eb-444b-9b53-825529f5cb1c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:47:53 np0005603435 nova_compute[239938]: 2026-01-31 04:47:53.384 239942 DEBUG nova.network.neutron [req-d9a6c324-177a-4303-b732-893f63452ca5 req-3873ff96-fc0a-4e34-85f3-0d4325c30365 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Updating instance_info_cache with network_info: [{"id": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "address": "fa:16:3e:74:66:d6", "network": {"id": "5c3579c7-dc9d-4cf7-9e43-1aa98a65254a", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-253417945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2bb69332e8af48ee847370d546eaee1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1498a6d-42", "ovs_interfaceid": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:47:53 np0005603435 nova_compute[239938]: 2026-01-31 04:47:53.401 239942 DEBUG oslo_concurrency.lockutils [req-d9a6c324-177a-4303-b732-893f63452ca5 req-3873ff96-fc0a-4e34-85f3-0d4325c30365 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-2de06a6e-707c-434b-980d-ab52c01abb9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:47:53 np0005603435 podman[251886]: 2026-01-31 04:47:53.693666084 +0000 UTC m=+0.034229793 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:47:54 np0005603435 podman[251886]: 2026-01-31 04:47:54.111305954 +0000 UTC m=+0.451869633 container create 786805af328ab2e819d8f33f83a34a05dfa4efd44aaf80d34f39fbcb45f25683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Jan 30 23:47:54 np0005603435 nova_compute[239938]: 2026-01-31 04:47:54.177 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:54 np0005603435 nova_compute[239938]: 2026-01-31 04:47:54.180 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:54 np0005603435 systemd[1]: Started libpod-conmon-786805af328ab2e819d8f33f83a34a05dfa4efd44aaf80d34f39fbcb45f25683.scope.
Jan 30 23:47:54 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:47:54 np0005603435 podman[251886]: 2026-01-31 04:47:54.403591499 +0000 UTC m=+0.744155168 container init 786805af328ab2e819d8f33f83a34a05dfa4efd44aaf80d34f39fbcb45f25683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 30 23:47:54 np0005603435 podman[251886]: 2026-01-31 04:47:54.414416616 +0000 UTC m=+0.754980275 container start 786805af328ab2e819d8f33f83a34a05dfa4efd44aaf80d34f39fbcb45f25683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_grothendieck, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:47:54 np0005603435 quizzical_grothendieck[251900]: 167 167
Jan 30 23:47:54 np0005603435 systemd[1]: libpod-786805af328ab2e819d8f33f83a34a05dfa4efd44aaf80d34f39fbcb45f25683.scope: Deactivated successfully.
Jan 30 23:47:54 np0005603435 podman[251886]: 2026-01-31 04:47:54.439216455 +0000 UTC m=+0.779780114 container attach 786805af328ab2e819d8f33f83a34a05dfa4efd44aaf80d34f39fbcb45f25683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:47:54 np0005603435 podman[251886]: 2026-01-31 04:47:54.440492355 +0000 UTC m=+0.781055984 container died 786805af328ab2e819d8f33f83a34a05dfa4efd44aaf80d34f39fbcb45f25683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:47:54 np0005603435 nova_compute[239938]: 2026-01-31 04:47:54.541 239942 DEBUG nova.network.neutron [req-652f006c-9647-4cf4-84ba-57e37999f0d9 req-2af00af6-c89b-4d9a-9da6-35eeda71818a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Updated VIF entry in instance network info cache for port 21ab155d-7b14-4fa4-b3a0-113a0e6c6abb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:47:54 np0005603435 nova_compute[239938]: 2026-01-31 04:47:54.541 239942 DEBUG nova.network.neutron [req-652f006c-9647-4cf4-84ba-57e37999f0d9 req-2af00af6-c89b-4d9a-9da6-35eeda71818a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Updating instance_info_cache with network_info: [{"id": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "address": "fa:16:3e:59:81:a2", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ab155d-7b", "ovs_interfaceid": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:47:54 np0005603435 systemd[1]: var-lib-containers-storage-overlay-82a126c230641c5096b26f79d336acef908cef299edf763b6088cb289875b3cb-merged.mount: Deactivated successfully.
Jan 30 23:47:54 np0005603435 nova_compute[239938]: 2026-01-31 04:47:54.607 239942 DEBUG oslo_concurrency.lockutils [req-652f006c-9647-4cf4-84ba-57e37999f0d9 req-2af00af6-c89b-4d9a-9da6-35eeda71818a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-80f921cb-ec48-41f8-88b0-3ba2a51efd0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:47:54 np0005603435 podman[251886]: 2026-01-31 04:47:54.65228573 +0000 UTC m=+0.992849369 container remove 786805af328ab2e819d8f33f83a34a05dfa4efd44aaf80d34f39fbcb45f25683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Jan 30 23:47:54 np0005603435 systemd[1]: libpod-conmon-786805af328ab2e819d8f33f83a34a05dfa4efd44aaf80d34f39fbcb45f25683.scope: Deactivated successfully.
Jan 30 23:47:54 np0005603435 podman[251926]: 2026-01-31 04:47:54.82294317 +0000 UTC m=+0.035453592 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:47:54 np0005603435 podman[251926]: 2026-01-31 04:47:54.931762472 +0000 UTC m=+0.144272834 container create 4bf634ada539d76ab0aaeeb8c9d0e525f410e2d6b04bfb8f85ac334dd2b321b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:47:54 np0005603435 systemd[1]: Started libpod-conmon-4bf634ada539d76ab0aaeeb8c9d0e525f410e2d6b04bfb8f85ac334dd2b321b8.scope.
Jan 30 23:47:55 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:47:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79e4a011dee55144eab9af393f9b302794fafbc13905b6dd8821ff410ed7855/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:47:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79e4a011dee55144eab9af393f9b302794fafbc13905b6dd8821ff410ed7855/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:47:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79e4a011dee55144eab9af393f9b302794fafbc13905b6dd8821ff410ed7855/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:47:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79e4a011dee55144eab9af393f9b302794fafbc13905b6dd8821ff410ed7855/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:47:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 959 KiB/s wr, 149 op/s
Jan 30 23:47:55 np0005603435 podman[251926]: 2026-01-31 04:47:55.390208259 +0000 UTC m=+0.602718621 container init 4bf634ada539d76ab0aaeeb8c9d0e525f410e2d6b04bfb8f85ac334dd2b321b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldstine, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Jan 30 23:47:55 np0005603435 podman[251926]: 2026-01-31 04:47:55.400261508 +0000 UTC m=+0.612771850 container start 4bf634ada539d76ab0aaeeb8c9d0e525f410e2d6b04bfb8f85ac334dd2b321b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldstine, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]: {
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:    "0": [
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:        {
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "devices": [
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "/dev/loop3"
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            ],
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "lv_name": "ceph_lv0",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "lv_size": "21470642176",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "name": "ceph_lv0",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "tags": {
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.cluster_name": "ceph",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.crush_device_class": "",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.encrypted": "0",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.objectstore": "bluestore",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.osd_id": "0",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.type": "block",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.vdo": "0",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.with_tpm": "0"
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            },
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "type": "block",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "vg_name": "ceph_vg0"
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:        }
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:    ],
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:    "1": [
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:        {
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "devices": [
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "/dev/loop4"
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            ],
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "lv_name": "ceph_lv1",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "lv_size": "21470642176",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "name": "ceph_lv1",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "tags": {
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.cluster_name": "ceph",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.crush_device_class": "",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.encrypted": "0",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.objectstore": "bluestore",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.osd_id": "1",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.type": "block",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.vdo": "0",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.with_tpm": "0"
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            },
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "type": "block",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "vg_name": "ceph_vg1"
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:        }
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:    ],
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:    "2": [
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:        {
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "devices": [
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "/dev/loop5"
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            ],
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "lv_name": "ceph_lv2",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "lv_size": "21470642176",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "name": "ceph_lv2",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "tags": {
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.cluster_name": "ceph",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.crush_device_class": "",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.encrypted": "0",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.objectstore": "bluestore",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.osd_id": "2",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.type": "block",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.vdo": "0",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:                "ceph.with_tpm": "0"
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            },
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "type": "block",
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:            "vg_name": "ceph_vg2"
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:        }
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]:    ]
Jan 30 23:47:55 np0005603435 suspicious_goldstine[251942]: }
Jan 30 23:47:55 np0005603435 podman[251926]: 2026-01-31 04:47:55.678305235 +0000 UTC m=+0.890815607 container attach 4bf634ada539d76ab0aaeeb8c9d0e525f410e2d6b04bfb8f85ac334dd2b321b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 30 23:47:55 np0005603435 systemd[1]: libpod-4bf634ada539d76ab0aaeeb8c9d0e525f410e2d6b04bfb8f85ac334dd2b321b8.scope: Deactivated successfully.
Jan 30 23:47:55 np0005603435 podman[251926]: 2026-01-31 04:47:55.694659803 +0000 UTC m=+0.907170175 container died 4bf634ada539d76ab0aaeeb8c9d0e525f410e2d6b04bfb8f85ac334dd2b321b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:47:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:55.912 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:47:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:55.914 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:47:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:55.915 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:47:56 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b79e4a011dee55144eab9af393f9b302794fafbc13905b6dd8821ff410ed7855-merged.mount: Deactivated successfully.
Jan 30 23:47:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:47:56.356 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:47:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:47:56 np0005603435 podman[251926]: 2026-01-31 04:47:56.702784864 +0000 UTC m=+1.915295226 container remove 4bf634ada539d76ab0aaeeb8c9d0e525f410e2d6b04bfb8f85ac334dd2b321b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_goldstine, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:47:56 np0005603435 systemd[1]: libpod-conmon-4bf634ada539d76ab0aaeeb8c9d0e525f410e2d6b04bfb8f85ac334dd2b321b8.scope: Deactivated successfully.
Jan 30 23:47:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 29 KiB/s wr, 151 op/s
Jan 30 23:47:57 np0005603435 podman[252027]: 2026-01-31 04:47:57.188991821 +0000 UTC m=+0.037954271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:47:57 np0005603435 podman[252027]: 2026-01-31 04:47:57.490980987 +0000 UTC m=+0.339943407 container create 9d83c0c3af89b5cf922ceb379309c95d477b3aa7a4d5d2d1017024ba4b2ac5d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 30 23:47:57 np0005603435 systemd[1]: Started libpod-conmon-9d83c0c3af89b5cf922ceb379309c95d477b3aa7a4d5d2d1017024ba4b2ac5d0.scope.
Jan 30 23:47:57 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:47:58 np0005603435 podman[252027]: 2026-01-31 04:47:58.649469836 +0000 UTC m=+1.498432286 container init 9d83c0c3af89b5cf922ceb379309c95d477b3aa7a4d5d2d1017024ba4b2ac5d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 30 23:47:58 np0005603435 podman[252027]: 2026-01-31 04:47:58.654978287 +0000 UTC m=+1.503940737 container start 9d83c0c3af89b5cf922ceb379309c95d477b3aa7a4d5d2d1017024ba4b2ac5d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 30 23:47:58 np0005603435 hardcore_noyce[252044]: 167 167
Jan 30 23:47:58 np0005603435 systemd[1]: libpod-9d83c0c3af89b5cf922ceb379309c95d477b3aa7a4d5d2d1017024ba4b2ac5d0.scope: Deactivated successfully.
Jan 30 23:47:58 np0005603435 podman[252027]: 2026-01-31 04:47:58.937621723 +0000 UTC m=+1.786584163 container attach 9d83c0c3af89b5cf922ceb379309c95d477b3aa7a4d5d2d1017024ba4b2ac5d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_noyce, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:47:58 np0005603435 podman[252027]: 2026-01-31 04:47:58.938389681 +0000 UTC m=+1.787352121 container died 9d83c0c3af89b5cf922ceb379309c95d477b3aa7a4d5d2d1017024ba4b2ac5d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_noyce, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 30 23:47:59 np0005603435 nova_compute[239938]: 2026-01-31 04:47:59.177 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:59 np0005603435 nova_compute[239938]: 2026-01-31 04:47:59.180 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:47:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 135 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 192 KiB/s wr, 157 op/s
Jan 30 23:48:00 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b54ed8640181b997c8d86b92e976634e4c8e9ba79a0dcf93bc509f5beab99bf5-merged.mount: Deactivated successfully.
Jan 30 23:48:00 np0005603435 podman[252027]: 2026-01-31 04:48:00.826170904 +0000 UTC m=+3.675133354 container remove 9d83c0c3af89b5cf922ceb379309c95d477b3aa7a4d5d2d1017024ba4b2ac5d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_noyce, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:48:00 np0005603435 systemd[1]: libpod-conmon-9d83c0c3af89b5cf922ceb379309c95d477b3aa7a4d5d2d1017024ba4b2ac5d0.scope: Deactivated successfully.
Jan 30 23:48:01 np0005603435 podman[252070]: 2026-01-31 04:48:01.009487414 +0000 UTC m=+0.028495177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:48:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 139 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 855 KiB/s wr, 136 op/s
Jan 30 23:48:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:48:01 np0005603435 podman[252070]: 2026-01-31 04:48:01.742909408 +0000 UTC m=+0.761917121 container create adc81860c5391f5bf1985ae2b0076e6cc43a865744dab497f6673331219d97a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_brahmagupta, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 30 23:48:01 np0005603435 systemd[1]: Started libpod-conmon-adc81860c5391f5bf1985ae2b0076e6cc43a865744dab497f6673331219d97a6.scope.
Jan 30 23:48:01 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:48:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e306addbf7026195c6d6f9df3254679eb33144ec50af1970fda31d29d5c5666d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:48:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e306addbf7026195c6d6f9df3254679eb33144ec50af1970fda31d29d5c5666d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:48:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e306addbf7026195c6d6f9df3254679eb33144ec50af1970fda31d29d5c5666d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:48:01 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e306addbf7026195c6d6f9df3254679eb33144ec50af1970fda31d29d5c5666d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:48:02 np0005603435 podman[252070]: 2026-01-31 04:48:02.254536967 +0000 UTC m=+1.273544710 container init adc81860c5391f5bf1985ae2b0076e6cc43a865744dab497f6673331219d97a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 30 23:48:02 np0005603435 podman[252070]: 2026-01-31 04:48:02.265489557 +0000 UTC m=+1.284497220 container start adc81860c5391f5bf1985ae2b0076e6cc43a865744dab497f6673331219d97a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_brahmagupta, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:48:02 np0005603435 podman[252070]: 2026-01-31 04:48:02.41315583 +0000 UTC m=+1.432163553 container attach adc81860c5391f5bf1985ae2b0076e6cc43a865744dab497f6673331219d97a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:48:03 np0005603435 lvm[252165]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:48:03 np0005603435 lvm[252165]: VG ceph_vg0 finished
Jan 30 23:48:03 np0005603435 lvm[252166]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:48:03 np0005603435 lvm[252166]: VG ceph_vg1 finished
Jan 30 23:48:03 np0005603435 lvm[252168]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:48:03 np0005603435 lvm[252168]: VG ceph_vg2 finished
Jan 30 23:48:03 np0005603435 elastic_brahmagupta[252086]: {}
Jan 30 23:48:03 np0005603435 systemd[1]: libpod-adc81860c5391f5bf1985ae2b0076e6cc43a865744dab497f6673331219d97a6.scope: Deactivated successfully.
Jan 30 23:48:03 np0005603435 systemd[1]: libpod-adc81860c5391f5bf1985ae2b0076e6cc43a865744dab497f6673331219d97a6.scope: Consumed 1.273s CPU time.
Jan 30 23:48:03 np0005603435 podman[252070]: 2026-01-31 04:48:03.206875304 +0000 UTC m=+2.225882987 container died adc81860c5391f5bf1985ae2b0076e6cc43a865744dab497f6673331219d97a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_brahmagupta, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:48:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 171 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 122 op/s
Jan 30 23:48:03 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:03Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:59:81:a2 10.100.0.8
Jan 30 23:48:03 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:03Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:59:81:a2 10.100.0.8
Jan 30 23:48:03 np0005603435 systemd[1]: var-lib-containers-storage-overlay-e306addbf7026195c6d6f9df3254679eb33144ec50af1970fda31d29d5c5666d-merged.mount: Deactivated successfully.
Jan 30 23:48:03 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:03Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:74:66:d6 10.100.0.5
Jan 30 23:48:03 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:03Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:74:66:d6 10.100.0.5
Jan 30 23:48:03 np0005603435 podman[252070]: 2026-01-31 04:48:03.813721743 +0000 UTC m=+2.832729466 container remove adc81860c5391f5bf1985ae2b0076e6cc43a865744dab497f6673331219d97a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 30 23:48:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:48:03 np0005603435 systemd[1]: libpod-conmon-adc81860c5391f5bf1985ae2b0076e6cc43a865744dab497f6673331219d97a6.scope: Deactivated successfully.
Jan 30 23:48:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:48:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:48:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:48:04 np0005603435 nova_compute[239938]: 2026-01-31 04:48:04.178 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:04 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:48:04 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:48:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 184 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 82 op/s
Jan 30 23:48:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:48:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2715583151' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:48:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:48:06
Jan 30 23:48:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:48:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:48:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['images', 'volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.log']
Jan 30 23:48:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:48:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Jan 30 23:48:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Jan 30 23:48:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Jan 30 23:48:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:48:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:48:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:48:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:48:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:48:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:48:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:48:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:48:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:48:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:48:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:48:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:48:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:48:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:48:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:48:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:48:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:48:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 212 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 6.0 MiB/s wr, 147 op/s
Jan 30 23:48:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Jan 30 23:48:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Jan 30 23:48:07 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Jan 30 23:48:08 np0005603435 nova_compute[239938]: 2026-01-31 04:48:08.840 239942 DEBUG oslo_concurrency.lockutils [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:08 np0005603435 nova_compute[239938]: 2026-01-31 04:48:08.840 239942 DEBUG oslo_concurrency.lockutils [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:08 np0005603435 nova_compute[239938]: 2026-01-31 04:48:08.854 239942 DEBUG nova.objects.instance [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lazy-loading 'flavor' on Instance uuid 80f921cb-ec48-41f8-88b0-3ba2a51efd0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:48:08 np0005603435 nova_compute[239938]: 2026-01-31 04:48:08.893 239942 INFO nova.virt.libvirt.driver [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Ignoring supplied device name: /dev/vdb#033[00m
Jan 30 23:48:08 np0005603435 nova_compute[239938]: 2026-01-31 04:48:08.910 239942 DEBUG oslo_concurrency.lockutils [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:09 np0005603435 nova_compute[239938]: 2026-01-31 04:48:09.113 239942 DEBUG oslo_concurrency.lockutils [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:09 np0005603435 nova_compute[239938]: 2026-01-31 04:48:09.114 239942 DEBUG oslo_concurrency.lockutils [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:09 np0005603435 nova_compute[239938]: 2026-01-31 04:48:09.114 239942 INFO nova.compute.manager [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Attaching volume bf0dd5a1-82e1-4475-b307-d15eb141c304 to /dev/vdb#033[00m
Jan 30 23:48:09 np0005603435 nova_compute[239938]: 2026-01-31 04:48:09.211 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 266 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 8.7 MiB/s wr, 227 op/s
Jan 30 23:48:09 np0005603435 nova_compute[239938]: 2026-01-31 04:48:09.340 239942 DEBUG os_brick.utils [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:48:09 np0005603435 nova_compute[239938]: 2026-01-31 04:48:09.342 239942 INFO oslo.privsep.daemon [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmph40ywjwj/privsep.sock']#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.064 239942 INFO oslo.privsep.daemon [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:09.941 252212 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:09.944 252212 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:09.947 252212 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:09.947 252212 INFO oslo.privsep.daemon [-] privsep daemon running as pid 252212#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.068 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[390f3994-0578-4c25-a69e-7011be447624]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.200 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.293 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.293 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[ff69fc6b-ca0b-4b98-b9e4-d88ea9258b36]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.295 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.304 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.304 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[a76d463e-7136-47ae-9bec-65759c4619bf]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.306 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.381 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.381 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[7295c6b0-4529-46c6-8722-c341e6fa2f2b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.385 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[243b8825-e745-4417-94ef-fcb0952c1c91]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.386 239942 DEBUG oslo_concurrency.processutils [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.410 239942 DEBUG oslo_concurrency.processutils [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.415 239942 DEBUG os_brick.initiator.connectors.lightos [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.416 239942 DEBUG os_brick.initiator.connectors.lightos [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.417 239942 DEBUG os_brick.initiator.connectors.lightos [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.417 239942 DEBUG os_brick.utils [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] <== get_connector_properties: return (1076ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.418 239942 DEBUG nova.virt.block_device [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Updating existing volume attachment record: 19d93395-50fa-431a-b806-b643c7ff703e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:48:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:48:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1547735900' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:48:10 np0005603435 nova_compute[239938]: 2026-01-31 04:48:10.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:48:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:48:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1448346253' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:48:11 np0005603435 nova_compute[239938]: 2026-01-31 04:48:11.213 239942 DEBUG oslo_concurrency.lockutils [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:11 np0005603435 nova_compute[239938]: 2026-01-31 04:48:11.214 239942 DEBUG oslo_concurrency.lockutils [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:11 np0005603435 nova_compute[239938]: 2026-01-31 04:48:11.216 239942 DEBUG oslo_concurrency.lockutils [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:11 np0005603435 nova_compute[239938]: 2026-01-31 04:48:11.223 239942 DEBUG nova.objects.instance [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lazy-loading 'flavor' on Instance uuid 80f921cb-ec48-41f8-88b0-3ba2a51efd0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:48:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 272 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.7 MiB/s wr, 196 op/s
Jan 30 23:48:11 np0005603435 nova_compute[239938]: 2026-01-31 04:48:11.245 239942 DEBUG nova.virt.libvirt.driver [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Attempting to attach volume bf0dd5a1-82e1-4475-b307-d15eb141c304 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 30 23:48:11 np0005603435 nova_compute[239938]: 2026-01-31 04:48:11.249 239942 DEBUG nova.virt.libvirt.guest [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] attach device xml: <disk type="network" device="disk">
Jan 30 23:48:11 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:48:11 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-bf0dd5a1-82e1-4475-b307-d15eb141c304">
Jan 30 23:48:11 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:48:11 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:48:11 np0005603435 nova_compute[239938]:  <auth username="openstack">
Jan 30 23:48:11 np0005603435 nova_compute[239938]:    <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:48:11 np0005603435 nova_compute[239938]:  </auth>
Jan 30 23:48:11 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:48:11 np0005603435 nova_compute[239938]:  <serial>bf0dd5a1-82e1-4475-b307-d15eb141c304</serial>
Jan 30 23:48:11 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:48:11 np0005603435 nova_compute[239938]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 30 23:48:11 np0005603435 nova_compute[239938]: 2026-01-31 04:48:11.366 239942 DEBUG nova.virt.libvirt.driver [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:48:11 np0005603435 nova_compute[239938]: 2026-01-31 04:48:11.366 239942 DEBUG nova.virt.libvirt.driver [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:48:11 np0005603435 nova_compute[239938]: 2026-01-31 04:48:11.366 239942 DEBUG nova.virt.libvirt.driver [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:48:11 np0005603435 nova_compute[239938]: 2026-01-31 04:48:11.366 239942 DEBUG nova.virt.libvirt.driver [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] No VIF found with MAC fa:16:3e:59:81:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:48:11 np0005603435 nova_compute[239938]: 2026-01-31 04:48:11.579 239942 DEBUG oslo_concurrency.lockutils [None req-60bcc033-c398-4da6-b91c-81261045ccb3 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:48:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Jan 30 23:48:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Jan 30 23:48:11 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Jan 30 23:48:11 np0005603435 nova_compute[239938]: 2026-01-31 04:48:11.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:48:12 np0005603435 nova_compute[239938]: 2026-01-31 04:48:12.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:48:12 np0005603435 nova_compute[239938]: 2026-01-31 04:48:12.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:48:12 np0005603435 nova_compute[239938]: 2026-01-31 04:48:12.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:48:13 np0005603435 nova_compute[239938]: 2026-01-31 04:48:13.145 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "refresh_cache-2de06a6e-707c-434b-980d-ab52c01abb9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:48:13 np0005603435 nova_compute[239938]: 2026-01-31 04:48:13.145 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquired lock "refresh_cache-2de06a6e-707c-434b-980d-ab52c01abb9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:48:13 np0005603435 nova_compute[239938]: 2026-01-31 04:48:13.146 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 30 23:48:13 np0005603435 nova_compute[239938]: 2026-01-31 04:48:13.146 239942 DEBUG nova.objects.instance [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2de06a6e-707c-434b-980d-ab52c01abb9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:48:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 331 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 6.8 MiB/s rd, 7.7 MiB/s wr, 180 op/s
Jan 30 23:48:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:48:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3866363097' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:48:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Jan 30 23:48:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Jan 30 23:48:13 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Jan 30 23:48:14 np0005603435 nova_compute[239938]: 2026-01-31 04:48:14.185 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:14 np0005603435 nova_compute[239938]: 2026-01-31 04:48:14.215 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:14 np0005603435 nova_compute[239938]: 2026-01-31 04:48:14.280 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Updating instance_info_cache with network_info: [{"id": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "address": "fa:16:3e:74:66:d6", "network": {"id": "5c3579c7-dc9d-4cf7-9e43-1aa98a65254a", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-253417945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2bb69332e8af48ee847370d546eaee1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1498a6d-42", "ovs_interfaceid": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:48:14 np0005603435 nova_compute[239938]: 2026-01-31 04:48:14.296 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Releasing lock "refresh_cache-2de06a6e-707c-434b-980d-ab52c01abb9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:48:14 np0005603435 nova_compute[239938]: 2026-01-31 04:48:14.297 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 30 23:48:14 np0005603435 nova_compute[239938]: 2026-01-31 04:48:14.298 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:48:14 np0005603435 nova_compute[239938]: 2026-01-31 04:48:14.298 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:48:14 np0005603435 nova_compute[239938]: 2026-01-31 04:48:14.298 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:48:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Jan 30 23:48:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Jan 30 23:48:14 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Jan 30 23:48:14 np0005603435 nova_compute[239938]: 2026-01-31 04:48:14.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:48:14 np0005603435 nova_compute[239938]: 2026-01-31 04:48:14.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:48:14 np0005603435 nova_compute[239938]: 2026-01-31 04:48:14.914 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:48:14 np0005603435 nova_compute[239938]: 2026-01-31 04:48:14.938 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:14 np0005603435 nova_compute[239938]: 2026-01-31 04:48:14.938 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:14 np0005603435 nova_compute[239938]: 2026-01-31 04:48:14.938 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:14 np0005603435 nova_compute[239938]: 2026-01-31 04:48:14.939 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:48:14 np0005603435 nova_compute[239938]: 2026-01-31 04:48:14.939 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 339 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 6.0 MiB/s wr, 120 op/s
Jan 30 23:48:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:48:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3024255829' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:48:15 np0005603435 nova_compute[239938]: 2026-01-31 04:48:15.575 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:15 np0005603435 nova_compute[239938]: 2026-01-31 04:48:15.654 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:48:15 np0005603435 nova_compute[239938]: 2026-01-31 04:48:15.655 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:48:15 np0005603435 nova_compute[239938]: 2026-01-31 04:48:15.655 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:48:15 np0005603435 nova_compute[239938]: 2026-01-31 04:48:15.661 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:48:15 np0005603435 nova_compute[239938]: 2026-01-31 04:48:15.661 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:48:15 np0005603435 nova_compute[239938]: 2026-01-31 04:48:15.864 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:48:15 np0005603435 nova_compute[239938]: 2026-01-31 04:48:15.865 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4245MB free_disk=59.89710176829249GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:48:15 np0005603435 nova_compute[239938]: 2026-01-31 04:48:15.865 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:15 np0005603435 nova_compute[239938]: 2026-01-31 04:48:15.866 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:15 np0005603435 nova_compute[239938]: 2026-01-31 04:48:15.982 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance 2de06a6e-707c-434b-980d-ab52c01abb9e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:48:15 np0005603435 nova_compute[239938]: 2026-01-31 04:48:15.983 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance 80f921cb-ec48-41f8-88b0-3ba2a51efd0c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:48:15 np0005603435 nova_compute[239938]: 2026-01-31 04:48:15.983 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:48:15 np0005603435 nova_compute[239938]: 2026-01-31 04:48:15.983 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:48:16 np0005603435 nova_compute[239938]: 2026-01-31 04:48:16.048 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:48:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2023431635' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:48:16 np0005603435 nova_compute[239938]: 2026-01-31 04:48:16.589 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:16 np0005603435 nova_compute[239938]: 2026-01-31 04:48:16.600 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:48:16 np0005603435 nova_compute[239938]: 2026-01-31 04:48:16.617 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:48:16 np0005603435 nova_compute[239938]: 2026-01-31 04:48:16.639 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:48:16 np0005603435 nova_compute[239938]: 2026-01-31 04:48:16.639 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:48:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Jan 30 23:48:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Jan 30 23:48:16 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015199568508083858 of space, bias 1.0, pg target 0.45598705524251576 quantized to 32 (current 32)
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006956011163634917 of space, bias 1.0, pg target 0.2086803349090475 quantized to 32 (current 32)
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.000346570242551215 of space, bias 1.0, pg target 0.10397107276536449 quantized to 32 (current 32)
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664695237975665 of space, bias 1.0, pg target 0.19994085713926993 quantized to 32 (current 32)
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.465542888101085e-07 of space, bias 4.0, pg target 0.0008958651465721302 quantized to 16 (current 16)
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:48:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 339 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 57 op/s
Jan 30 23:48:17 np0005603435 nova_compute[239938]: 2026-01-31 04:48:17.613 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:48:18 np0005603435 podman[252286]: 2026-01-31 04:48:18.11598709 +0000 UTC m=+0.074024038 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 30 23:48:18 np0005603435 podman[252287]: 2026-01-31 04:48:18.142287524 +0000 UTC m=+0.100331332 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 30 23:48:18 np0005603435 nova_compute[239938]: 2026-01-31 04:48:18.631 239942 DEBUG oslo_concurrency.lockutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Acquiring lock "62dcf699-1417-4b1e-b107-3527e61c68a8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:18 np0005603435 nova_compute[239938]: 2026-01-31 04:48:18.631 239942 DEBUG oslo_concurrency.lockutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Lock "62dcf699-1417-4b1e-b107-3527e61c68a8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:18 np0005603435 nova_compute[239938]: 2026-01-31 04:48:18.648 239942 DEBUG nova.compute.manager [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:48:18 np0005603435 nova_compute[239938]: 2026-01-31 04:48:18.728 239942 DEBUG oslo_concurrency.lockutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:18 np0005603435 nova_compute[239938]: 2026-01-31 04:48:18.728 239942 DEBUG oslo_concurrency.lockutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:18 np0005603435 nova_compute[239938]: 2026-01-31 04:48:18.740 239942 DEBUG nova.virt.hardware [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:48:18 np0005603435 nova_compute[239938]: 2026-01-31 04:48:18.740 239942 INFO nova.compute.claims [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:48:18 np0005603435 nova_compute[239938]: 2026-01-31 04:48:18.862 239942 DEBUG oslo_concurrency.lockutils [None req-2eb2b807-33ee-412d-85e2-e0b50e16612f f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:18 np0005603435 nova_compute[239938]: 2026-01-31 04:48:18.863 239942 DEBUG oslo_concurrency.lockutils [None req-2eb2b807-33ee-412d-85e2-e0b50e16612f f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:18 np0005603435 nova_compute[239938]: 2026-01-31 04:48:18.880 239942 INFO nova.compute.manager [None req-2eb2b807-33ee-412d-85e2-e0b50e16612f f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Detaching volume bf0dd5a1-82e1-4475-b307-d15eb141c304#033[00m
Jan 30 23:48:18 np0005603435 nova_compute[239938]: 2026-01-31 04:48:18.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:48:18 np0005603435 nova_compute[239938]: 2026-01-31 04:48:18.890 239942 DEBUG oslo_concurrency.processutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.025 239942 INFO nova.virt.block_device [None req-2eb2b807-33ee-412d-85e2-e0b50e16612f f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Attempting to driver detach volume bf0dd5a1-82e1-4475-b307-d15eb141c304 from mountpoint /dev/vdb#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.035 239942 DEBUG nova.virt.libvirt.driver [None req-2eb2b807-33ee-412d-85e2-e0b50e16612f f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Attempting to detach device vdb from instance 80f921cb-ec48-41f8-88b0-3ba2a51efd0c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.036 239942 DEBUG nova.virt.libvirt.guest [None req-2eb2b807-33ee-412d-85e2-e0b50e16612f f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:48:19 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:48:19 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-bf0dd5a1-82e1-4475-b307-d15eb141c304">
Jan 30 23:48:19 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:48:19 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:48:19 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:48:19 np0005603435 nova_compute[239938]:  <serial>bf0dd5a1-82e1-4475-b307-d15eb141c304</serial>
Jan 30 23:48:19 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:48:19 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:48:19 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.187 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.217 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.240 239942 INFO nova.virt.libvirt.driver [None req-2eb2b807-33ee-412d-85e2-e0b50e16612f f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Successfully detached device vdb from instance 80f921cb-ec48-41f8-88b0-3ba2a51efd0c from the persistent domain config.#033[00m
Jan 30 23:48:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 339 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 336 KiB/s rd, 792 KiB/s wr, 89 op/s
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.241 239942 DEBUG nova.virt.libvirt.driver [None req-2eb2b807-33ee-412d-85e2-e0b50e16612f f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 80f921cb-ec48-41f8-88b0-3ba2a51efd0c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.241 239942 DEBUG nova.virt.libvirt.guest [None req-2eb2b807-33ee-412d-85e2-e0b50e16612f f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:48:19 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:48:19 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-bf0dd5a1-82e1-4475-b307-d15eb141c304">
Jan 30 23:48:19 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:48:19 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:48:19 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:48:19 np0005603435 nova_compute[239938]:  <serial>bf0dd5a1-82e1-4475-b307-d15eb141c304</serial>
Jan 30 23:48:19 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:48:19 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:48:19 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.365 239942 DEBUG nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Received event <DeviceRemovedEvent: 1769834899.3651495, 80f921cb-ec48-41f8-88b0-3ba2a51efd0c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.368 239942 DEBUG nova.virt.libvirt.driver [None req-2eb2b807-33ee-412d-85e2-e0b50e16612f f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 80f921cb-ec48-41f8-88b0-3ba2a51efd0c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.370 239942 INFO nova.virt.libvirt.driver [None req-2eb2b807-33ee-412d-85e2-e0b50e16612f f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Successfully detached device vdb from instance 80f921cb-ec48-41f8-88b0-3ba2a51efd0c from the live domain config.#033[00m
Jan 30 23:48:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:48:19 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/93445438' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.427 239942 DEBUG oslo_concurrency.processutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.433 239942 DEBUG nova.compute.provider_tree [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.454 239942 DEBUG nova.scheduler.client.report [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.483 239942 DEBUG oslo_concurrency.lockutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.484 239942 DEBUG nova.compute.manager [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.530 239942 DEBUG nova.compute.manager [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.531 239942 DEBUG nova.network.neutron [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.584 239942 INFO nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.622 239942 DEBUG nova.compute.manager [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.736 239942 INFO nova.virt.block_device [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Booting with volume 61ed1114-a50e-49db-a2f2-c2864d17cae4 at /dev/vda#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.741 239942 DEBUG nova.objects.instance [None req-2eb2b807-33ee-412d-85e2-e0b50e16612f f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lazy-loading 'flavor' on Instance uuid 80f921cb-ec48-41f8-88b0-3ba2a51efd0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.801 239942 DEBUG oslo_concurrency.lockutils [None req-2eb2b807-33ee-412d-85e2-e0b50e16612f f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:19 np0005603435 nova_compute[239938]: 2026-01-31 04:48:19.906 239942 DEBUG nova.policy [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '51ff78d1385146c598709f382eb4bc29', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9c98e89d4ac44c38b41aa3d603a9b0a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.117 239942 DEBUG os_brick.utils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.119 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.131 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.131 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[915e861c-4521-4cbe-bc48-1e95db03417e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.134 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.142 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.142 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[ec0e40db-44b7-42cf-b09e-62288d6ba1eb]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.144 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.153 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.153 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[a7bb1d4b-04b8-4513-8a59-b8a5734eb06a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.154 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[cf513ca8-edf7-4508-85f3-4d98086c221a]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.155 239942 DEBUG oslo_concurrency.processutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.182 239942 DEBUG oslo_concurrency.processutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.185 239942 DEBUG os_brick.initiator.connectors.lightos [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.186 239942 DEBUG os_brick.initiator.connectors.lightos [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.186 239942 DEBUG os_brick.initiator.connectors.lightos [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.187 239942 DEBUG os_brick.utils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.187 239942 DEBUG nova.virt.block_device [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Updating existing volume attachment record: 8670a197-c766-4381-b93a-a6fa99dffd48 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.408 239942 DEBUG oslo_concurrency.lockutils [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.409 239942 DEBUG oslo_concurrency.lockutils [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.409 239942 DEBUG oslo_concurrency.lockutils [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.409 239942 DEBUG oslo_concurrency.lockutils [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.410 239942 DEBUG oslo_concurrency.lockutils [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.412 239942 INFO nova.compute.manager [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Terminating instance#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.413 239942 DEBUG nova.compute.manager [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:48:20 np0005603435 kernel: tap21ab155d-7b (unregistering): left promiscuous mode
Jan 30 23:48:20 np0005603435 NetworkManager[49097]: <info>  [1769834900.4601] device (tap21ab155d-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:48:20 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:20Z|00064|binding|INFO|Releasing lport 21ab155d-7b14-4fa4-b3a0-113a0e6c6abb from this chassis (sb_readonly=0)
Jan 30 23:48:20 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:20Z|00065|binding|INFO|Setting lport 21ab155d-7b14-4fa4-b3a0-113a0e6c6abb down in Southbound
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.470 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:20 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:20Z|00066|binding|INFO|Removing iface tap21ab155d-7b ovn-installed in OVS
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.481 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:20 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:20.484 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:81:a2 10.100.0.8'], port_security=['fa:16:3e:59:81:a2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '80f921cb-ec48-41f8-88b0-3ba2a51efd0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28e37664-8d81-4a45-8e12-f0b45b43b4cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b8b11aff4b494f4eb1376cfe5754bac8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2702eba-8d5c-40d1-af57-1b1c2fa6cbe3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c4453b0-f040-4fe4-88f1-8a0ec8ff54c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=21ab155d-7b14-4fa4-b3a0-113a0e6c6abb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:48:20 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:20.487 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 21ab155d-7b14-4fa4-b3a0-113a0e6c6abb in datapath 28e37664-8d81-4a45-8e12-f0b45b43b4cf unbound from our chassis#033[00m
Jan 30 23:48:20 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:20.491 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28e37664-8d81-4a45-8e12-f0b45b43b4cf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:48:20 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:20.492 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[d6db9e9c-c8af-46a0-804d-d1e22fcbb1fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:20 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:20.493 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf namespace which is not needed anymore#033[00m
Jan 30 23:48:20 np0005603435 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 30 23:48:20 np0005603435 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 12.313s CPU time.
Jan 30 23:48:20 np0005603435 systemd-machined[208030]: Machine qemu-6-instance-00000006 terminated.
Jan 30 23:48:20 np0005603435 neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf[251539]: [NOTICE]   (251570) : haproxy version is 2.8.14-c23fe91
Jan 30 23:48:20 np0005603435 neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf[251539]: [NOTICE]   (251570) : path to executable is /usr/sbin/haproxy
Jan 30 23:48:20 np0005603435 neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf[251539]: [WARNING]  (251570) : Exiting Master process...
Jan 30 23:48:20 np0005603435 neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf[251539]: [ALERT]    (251570) : Current worker (251577) exited with code 143 (Terminated)
Jan 30 23:48:20 np0005603435 neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf[251539]: [WARNING]  (251570) : All workers exited. Exiting... (0)
Jan 30 23:48:20 np0005603435 systemd[1]: libpod-5e40916e1e4243d348079b90562739e79937d02c28e022511036df7ac120ad12.scope: Deactivated successfully.
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.655 239942 INFO nova.virt.libvirt.driver [-] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Instance destroyed successfully.#033[00m
Jan 30 23:48:20 np0005603435 podman[252387]: 2026-01-31 04:48:20.655469777 +0000 UTC m=+0.061545572 container died 5e40916e1e4243d348079b90562739e79937d02c28e022511036df7ac120ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.655 239942 DEBUG nova.objects.instance [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lazy-loading 'resources' on Instance uuid 80f921cb-ec48-41f8-88b0-3ba2a51efd0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.674 239942 DEBUG nova.virt.libvirt.vif [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:47:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-763377099',display_name='tempest-VolumesBackupsTest-instance-763377099',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-763377099',id=6,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBONDHwSZ9MkJyo9D2CF/S4KX9O4IxyXttW+K6l+2Zxa4Xv3Vjls90siP2Qj8A8dOzO8uS8EJ2U1JAWq2ETYB11Ins8/2bJogCYXemZjCXUombJMigKOSeOms1DNvDhJevg==',key_name='tempest-keypair-1798745008',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:47:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b8b11aff4b494f4eb1376cfe5754bac8',ramdisk_id='',reservation_id='r-o23rwagz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-1503004541',owner_user_name='tempest-VolumesBackupsTest-1503004541-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:47:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f51271330a6d46498b473f0d2595c3ac',uuid=80f921cb-ec48-41f8-88b0-3ba2a51efd0c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "address": "fa:16:3e:59:81:a2", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ab155d-7b", "ovs_interfaceid": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.675 239942 DEBUG nova.network.os_vif_util [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Converting VIF {"id": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "address": "fa:16:3e:59:81:a2", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ab155d-7b", "ovs_interfaceid": "21ab155d-7b14-4fa4-b3a0-113a0e6c6abb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.677 239942 DEBUG nova.network.os_vif_util [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:59:81:a2,bridge_name='br-int',has_traffic_filtering=True,id=21ab155d-7b14-4fa4-b3a0-113a0e6c6abb,network=Network(28e37664-8d81-4a45-8e12-f0b45b43b4cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ab155d-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.678 239942 DEBUG os_vif [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:81:a2,bridge_name='br-int',has_traffic_filtering=True,id=21ab155d-7b14-4fa4-b3a0-113a0e6c6abb,network=Network(28e37664-8d81-4a45-8e12-f0b45b43b4cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ab155d-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.680 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.680 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21ab155d-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.683 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.686 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.689 239942 INFO os_vif [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:81:a2,bridge_name='br-int',has_traffic_filtering=True,id=21ab155d-7b14-4fa4-b3a0-113a0e6c6abb,network=Network(28e37664-8d81-4a45-8e12-f0b45b43b4cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ab155d-7b')#033[00m
Jan 30 23:48:20 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5e40916e1e4243d348079b90562739e79937d02c28e022511036df7ac120ad12-userdata-shm.mount: Deactivated successfully.
Jan 30 23:48:20 np0005603435 systemd[1]: var-lib-containers-storage-overlay-03338233df5ece9ce7025b51c69f5f38fa0cb788aa135d4bc252ee2890b81fe3-merged.mount: Deactivated successfully.
Jan 30 23:48:20 np0005603435 podman[252387]: 2026-01-31 04:48:20.713731219 +0000 UTC m=+0.119807014 container cleanup 5e40916e1e4243d348079b90562739e79937d02c28e022511036df7ac120ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 30 23:48:20 np0005603435 systemd[1]: libpod-conmon-5e40916e1e4243d348079b90562739e79937d02c28e022511036df7ac120ad12.scope: Deactivated successfully.
Jan 30 23:48:20 np0005603435 podman[252443]: 2026-01-31 04:48:20.797696671 +0000 UTC m=+0.056918211 container remove 5e40916e1e4243d348079b90562739e79937d02c28e022511036df7ac120ad12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 30 23:48:20 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:20.801 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6e81230a-406e-47f1-8860-dc6ac987c6bb]: (4, ('Sat Jan 31 04:48:20 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf (5e40916e1e4243d348079b90562739e79937d02c28e022511036df7ac120ad12)\n5e40916e1e4243d348079b90562739e79937d02c28e022511036df7ac120ad12\nSat Jan 31 04:48:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf (5e40916e1e4243d348079b90562739e79937d02c28e022511036df7ac120ad12)\n5e40916e1e4243d348079b90562739e79937d02c28e022511036df7ac120ad12\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:20 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:20.802 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[701cdfa3-6d06-4556-901f-f00026419939]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:20 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:20.803 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28e37664-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:48:20 np0005603435 kernel: tap28e37664-80: left promiscuous mode
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.805 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:20 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:20.811 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[40d95eb0-084b-4610-9bfe-26599c5346c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.812 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:20 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:20.827 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[72d4b8a6-91db-4b48-a9b9-e2257c23e434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:20 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:20.830 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[d58657a4-2040-4d57-9ca6-f2b5d9f970f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:20 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:20.843 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4ccd3656-a56d-4a06-9b5a-603629e4ebc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 392107, 'reachable_time': 28850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252461, 'error': None, 'target': 'ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:20 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:20.845 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:48:20 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:20.845 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[d586a4d7-77bb-4613-acac-6015ae2297ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:20 np0005603435 systemd[1]: run-netns-ovnmeta\x2d28e37664\x2d8d81\x2d4a45\x2d8e12\x2df0b45b43b4cf.mount: Deactivated successfully.
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.898 239942 DEBUG nova.compute.manager [req-53a2690e-9425-4d0b-9fdd-4473d709d46e req-0a6ea273-cc93-4129-8293-a9a46127c00a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Received event network-vif-unplugged-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.899 239942 DEBUG oslo_concurrency.lockutils [req-53a2690e-9425-4d0b-9fdd-4473d709d46e req-0a6ea273-cc93-4129-8293-a9a46127c00a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.899 239942 DEBUG oslo_concurrency.lockutils [req-53a2690e-9425-4d0b-9fdd-4473d709d46e req-0a6ea273-cc93-4129-8293-a9a46127c00a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.900 239942 DEBUG oslo_concurrency.lockutils [req-53a2690e-9425-4d0b-9fdd-4473d709d46e req-0a6ea273-cc93-4129-8293-a9a46127c00a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.900 239942 DEBUG nova.compute.manager [req-53a2690e-9425-4d0b-9fdd-4473d709d46e req-0a6ea273-cc93-4129-8293-a9a46127c00a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] No waiting events found dispatching network-vif-unplugged-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.901 239942 DEBUG nova.compute.manager [req-53a2690e-9425-4d0b-9fdd-4473d709d46e req-0a6ea273-cc93-4129-8293-a9a46127c00a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Received event network-vif-unplugged-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:48:20 np0005603435 nova_compute[239938]: 2026-01-31 04:48:20.923 239942 DEBUG nova.network.neutron [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Successfully created port: 1d62e775-3c70-46e5-a96d-3caf6e7cfc53 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:48:21 np0005603435 nova_compute[239938]: 2026-01-31 04:48:21.006 239942 INFO nova.virt.libvirt.driver [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Deleting instance files /var/lib/nova/instances/80f921cb-ec48-41f8-88b0-3ba2a51efd0c_del#033[00m
Jan 30 23:48:21 np0005603435 nova_compute[239938]: 2026-01-31 04:48:21.007 239942 INFO nova.virt.libvirt.driver [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Deletion of /var/lib/nova/instances/80f921cb-ec48-41f8-88b0-3ba2a51efd0c_del complete#033[00m
Jan 30 23:48:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:48:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2523291830' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:48:21 np0005603435 nova_compute[239938]: 2026-01-31 04:48:21.093 239942 INFO nova.compute.manager [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:48:21 np0005603435 nova_compute[239938]: 2026-01-31 04:48:21.094 239942 DEBUG oslo.service.loopingcall [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:48:21 np0005603435 nova_compute[239938]: 2026-01-31 04:48:21.094 239942 DEBUG nova.compute.manager [-] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:48:21 np0005603435 nova_compute[239938]: 2026-01-31 04:48:21.095 239942 DEBUG nova.network.neutron [-] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:48:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 339 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 275 KiB/s rd, 642 KiB/s wr, 75 op/s
Jan 30 23:48:21 np0005603435 nova_compute[239938]: 2026-01-31 04:48:21.384 239942 DEBUG nova.compute.manager [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:48:21 np0005603435 nova_compute[239938]: 2026-01-31 04:48:21.386 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:48:21 np0005603435 nova_compute[239938]: 2026-01-31 04:48:21.387 239942 INFO nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Creating image(s)#033[00m
Jan 30 23:48:21 np0005603435 nova_compute[239938]: 2026-01-31 04:48:21.387 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 30 23:48:21 np0005603435 nova_compute[239938]: 2026-01-31 04:48:21.388 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Ensure instance console log exists: /var/lib/nova/instances/62dcf699-1417-4b1e-b107-3527e61c68a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:48:21 np0005603435 nova_compute[239938]: 2026-01-31 04:48:21.388 239942 DEBUG oslo_concurrency.lockutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:21 np0005603435 nova_compute[239938]: 2026-01-31 04:48:21.389 239942 DEBUG oslo_concurrency.lockutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:21 np0005603435 nova_compute[239938]: 2026-01-31 04:48:21.389 239942 DEBUG oslo_concurrency.lockutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.071 239942 DEBUG nova.network.neutron [-] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.113 239942 INFO nova.compute.manager [-] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Took 1.02 seconds to deallocate network for instance.#033[00m
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.196 239942 DEBUG oslo_concurrency.lockutils [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.197 239942 DEBUG oslo_concurrency.lockutils [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.292 239942 DEBUG oslo_concurrency.processutils [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.327 239942 DEBUG nova.compute.manager [req-4b60b607-2701-482e-939b-43f63afa003a req-11d9e2eb-ec90-48b7-9d02-890183467c0f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Received event network-vif-deleted-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.503 239942 DEBUG nova.network.neutron [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Successfully updated port: 1d62e775-3c70-46e5-a96d-3caf6e7cfc53 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.525 239942 DEBUG oslo_concurrency.lockutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Acquiring lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.526 239942 DEBUG oslo_concurrency.lockutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Acquired lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.526 239942 DEBUG nova.network.neutron [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:48:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:48:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3307706829' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.859 239942 DEBUG oslo_concurrency.processutils [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.867 239942 DEBUG nova.compute.provider_tree [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.883 239942 DEBUG nova.scheduler.client.report [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.908 239942 DEBUG oslo_concurrency.lockutils [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.958 239942 INFO nova.scheduler.client.report [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Deleted allocations for instance 80f921cb-ec48-41f8-88b0-3ba2a51efd0c#033[00m
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.976 239942 DEBUG nova.network.neutron [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.992 239942 DEBUG oslo_concurrency.lockutils [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Acquiring lock "2de06a6e-707c-434b-980d-ab52c01abb9e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:22 np0005603435 nova_compute[239938]: 2026-01-31 04:48:22.993 239942 DEBUG oslo_concurrency.lockutils [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.028 239942 DEBUG nova.objects.instance [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lazy-loading 'flavor' on Instance uuid 2de06a6e-707c-434b-980d-ab52c01abb9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.034 239942 DEBUG oslo_concurrency.lockutils [None req-ee097a87-8afa-4be9-9074-7a186067337c f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.057 239942 INFO nova.virt.libvirt.driver [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Ignoring supplied device name: /dev/vdb#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.074 239942 DEBUG oslo_concurrency.lockutils [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.233 239942 DEBUG nova.compute.manager [req-7e5a5e65-1c50-4316-9e40-4315544d17ac req-190754e6-77c8-49e9-999d-67ec6aea809d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Received event network-vif-plugged-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.233 239942 DEBUG oslo_concurrency.lockutils [req-7e5a5e65-1c50-4316-9e40-4315544d17ac req-190754e6-77c8-49e9-999d-67ec6aea809d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.234 239942 DEBUG oslo_concurrency.lockutils [req-7e5a5e65-1c50-4316-9e40-4315544d17ac req-190754e6-77c8-49e9-999d-67ec6aea809d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.235 239942 DEBUG oslo_concurrency.lockutils [req-7e5a5e65-1c50-4316-9e40-4315544d17ac req-190754e6-77c8-49e9-999d-67ec6aea809d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "80f921cb-ec48-41f8-88b0-3ba2a51efd0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.235 239942 DEBUG nova.compute.manager [req-7e5a5e65-1c50-4316-9e40-4315544d17ac req-190754e6-77c8-49e9-999d-67ec6aea809d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] No waiting events found dispatching network-vif-plugged-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.236 239942 WARNING nova.compute.manager [req-7e5a5e65-1c50-4316-9e40-4315544d17ac req-190754e6-77c8-49e9-999d-67ec6aea809d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Received unexpected event network-vif-plugged-21ab155d-7b14-4fa4-b3a0-113a0e6c6abb for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:48:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 278 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 571 KiB/s wr, 109 op/s
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.253 239942 DEBUG oslo_concurrency.lockutils [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Acquiring lock "2de06a6e-707c-434b-980d-ab52c01abb9e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.254 239942 DEBUG oslo_concurrency.lockutils [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.254 239942 INFO nova.compute.manager [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Attaching volume 786c90f8-33a2-4d7e-a564-220dd06f70ae to /dev/vdb#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.363 239942 DEBUG os_brick.utils [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.365 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.376 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.377 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a7010b-5d91-437a-91b0-0b8a04e5932e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.378 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.385 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.386 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[0b0e7dbe-58b6-47dc-9c3d-1eab78e72bb0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.388 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.397 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.397 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[6aae5e0c-2c24-4a81-a88a-52332133630c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.399 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0a908a-3fd8-443a-97a2-96b8880633f9]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.400 239942 DEBUG oslo_concurrency.processutils [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.419 239942 DEBUG oslo_concurrency.processutils [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.422 239942 DEBUG os_brick.initiator.connectors.lightos [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.422 239942 DEBUG os_brick.initiator.connectors.lightos [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.423 239942 DEBUG os_brick.initiator.connectors.lightos [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.424 239942 DEBUG os_brick.utils [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] <== get_connector_properties: return (59ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.424 239942 DEBUG nova.virt.block_device [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Updating existing volume attachment record: 3dba58b4-505c-4f78-8bb9-28d7e8b39aa3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.632 239942 DEBUG nova.network.neutron [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Updating instance_info_cache with network_info: [{"id": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "address": "fa:16:3e:2b:08:f8", "network": {"id": "25a68b42-b744-40ad-b5c6-c5e70764e097", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-639296986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9c98e89d4ac44c38b41aa3d603a9b0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d62e775-3c", "ovs_interfaceid": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.711 239942 DEBUG oslo_concurrency.lockutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Releasing lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.712 239942 DEBUG nova.compute.manager [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Instance network_info: |[{"id": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "address": "fa:16:3e:2b:08:f8", "network": {"id": "25a68b42-b744-40ad-b5c6-c5e70764e097", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-639296986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9c98e89d4ac44c38b41aa3d603a9b0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d62e775-3c", "ovs_interfaceid": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.719 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Start _get_guest_xml network_info=[{"id": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "address": "fa:16:3e:2b:08:f8", "network": {"id": "25a68b42-b744-40ad-b5c6-c5e70764e097", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-639296986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9c98e89d4ac44c38b41aa3d603a9b0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d62e775-3c", "ovs_interfaceid": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'attachment_id': '8670a197-c766-4381-b93a-a6fa99dffd48', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-61ed1114-a50e-49db-a2f2-c2864d17cae4', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '61ed1114-a50e-49db-a2f2-c2864d17cae4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '62dcf699-1417-4b1e-b107-3527e61c68a8', 'attached_at': '', 'detached_at': '', 'volume_id': '61ed1114-a50e-49db-a2f2-c2864d17cae4', 'serial': '61ed1114-a50e-49db-a2f2-c2864d17cae4'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.726 239942 WARNING nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.731 239942 DEBUG nova.virt.libvirt.host [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.732 239942 DEBUG nova.virt.libvirt.host [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.735 239942 DEBUG nova.virt.libvirt.host [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.736 239942 DEBUG nova.virt.libvirt.host [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.736 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.737 239942 DEBUG nova.virt.hardware [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.738 239942 DEBUG nova.virt.hardware [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.738 239942 DEBUG nova.virt.hardware [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.738 239942 DEBUG nova.virt.hardware [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.739 239942 DEBUG nova.virt.hardware [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.739 239942 DEBUG nova.virt.hardware [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.739 239942 DEBUG nova.virt.hardware [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.740 239942 DEBUG nova.virt.hardware [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.740 239942 DEBUG nova.virt.hardware [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.740 239942 DEBUG nova.virt.hardware [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.741 239942 DEBUG nova.virt.hardware [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.776 239942 DEBUG nova.storage.rbd_utils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] rbd image 62dcf699-1417-4b1e-b107-3527e61c68a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:48:23 np0005603435 nova_compute[239938]: 2026-01-31 04:48:23.780 239942 DEBUG oslo_concurrency.processutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:48:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2864106359' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.189 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.248 239942 DEBUG nova.objects.instance [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lazy-loading 'flavor' on Instance uuid 2de06a6e-707c-434b-980d-ab52c01abb9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.281 239942 DEBUG nova.virt.libvirt.driver [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Attempting to attach volume 786c90f8-33a2-4d7e-a564-220dd06f70ae with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.285 239942 DEBUG nova.virt.libvirt.guest [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] attach device xml: <disk type="network" device="disk">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-786c90f8-33a2-4d7e-a564-220dd06f70ae">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  <auth username="openstack">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  </auth>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  <serial>786c90f8-33a2-4d7e-a564-220dd06f70ae</serial>
Jan 30 23:48:24 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:48:24 np0005603435 nova_compute[239938]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 30 23:48:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:48:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2229102991' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.328 239942 DEBUG oslo_concurrency.processutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.356 239942 DEBUG nova.virt.libvirt.vif [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:48:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-803424544',display_name='tempest-TestVolumeBackupRestore-server-803424544',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-803424544',id=7,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCP1m7eJMGiS8harXOXi6bVep4rPBK/7p7pgc2N2rfY7Yh91jUe7m0NHPNsM5XRn6r1ZxrhSUckERbS/1BFLnjE+Mjher/8KbGtg/8DwssuxOIEaVMVMFX1Pkwd5lI8s6g==',key_name='tempest-TestVolumeBackupRestore-2078203097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9c98e89d4ac44c38b41aa3d603a9b0a',ramdisk_id='',reservation_id='r-ze27pc0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1693640160',owner_user_name='tempest-TestVolumeBackupRestore-1693640160-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:48:19Z,user_data=None,user_id='51ff78d1385146c598709f382eb4bc29',uuid=62dcf699-1417-4b1e-b107-3527e61c68a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "address": "fa:16:3e:2b:08:f8", "network": {"id": "25a68b42-b744-40ad-b5c6-c5e70764e097", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-639296986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"b9c98e89d4ac44c38b41aa3d603a9b0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d62e775-3c", "ovs_interfaceid": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.356 239942 DEBUG nova.network.os_vif_util [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Converting VIF {"id": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "address": "fa:16:3e:2b:08:f8", "network": {"id": "25a68b42-b744-40ad-b5c6-c5e70764e097", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-639296986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9c98e89d4ac44c38b41aa3d603a9b0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d62e775-3c", "ovs_interfaceid": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.358 239942 DEBUG nova.network.os_vif_util [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:08:f8,bridge_name='br-int',has_traffic_filtering=True,id=1d62e775-3c70-46e5-a96d-3caf6e7cfc53,network=Network(25a68b42-b744-40ad-b5c6-c5e70764e097),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d62e775-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.360 239942 DEBUG nova.objects.instance [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Lazy-loading 'pci_devices' on Instance uuid 62dcf699-1417-4b1e-b107-3527e61c68a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.384 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  <uuid>62dcf699-1417-4b1e-b107-3527e61c68a8</uuid>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  <name>instance-00000007</name>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <nova:name>tempest-TestVolumeBackupRestore-server-803424544</nova:name>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:48:23</nova:creationTime>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:        <nova:user uuid="51ff78d1385146c598709f382eb4bc29">tempest-TestVolumeBackupRestore-1693640160-project-member</nova:user>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:        <nova:project uuid="b9c98e89d4ac44c38b41aa3d603a9b0a">tempest-TestVolumeBackupRestore-1693640160</nova:project>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:        <nova:port uuid="1d62e775-3c70-46e5-a96d-3caf6e7cfc53">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <entry name="serial">62dcf699-1417-4b1e-b107-3527e61c68a8</entry>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <entry name="uuid">62dcf699-1417-4b1e-b107-3527e61c68a8</entry>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/62dcf699-1417-4b1e-b107-3527e61c68a8_disk.config">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="volumes/volume-61ed1114-a50e-49db-a2f2-c2864d17cae4">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <serial>61ed1114-a50e-49db-a2f2-c2864d17cae4</serial>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:2b:08:f8"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <target dev="tap1d62e775-3c"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/62dcf699-1417-4b1e-b107-3527e61c68a8/console.log" append="off"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:48:24 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:48:24 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:48:24 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:48:24 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.385 239942 DEBUG nova.compute.manager [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Preparing to wait for external event network-vif-plugged-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.385 239942 DEBUG oslo_concurrency.lockutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Acquiring lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.388 239942 DEBUG oslo_concurrency.lockutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.388 239942 DEBUG oslo_concurrency.lockutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.389 239942 DEBUG nova.virt.libvirt.vif [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:48:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-803424544',display_name='tempest-TestVolumeBackupRestore-server-803424544',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-803424544',id=7,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCP1m7eJMGiS8harXOXi6bVep4rPBK/7p7pgc2N2rfY7Yh91jUe7m0NHPNsM5XRn6r1ZxrhSUckERbS/1BFLnjE+Mjher/8KbGtg/8DwssuxOIEaVMVMFX1Pkwd5lI8s6g==',key_name='tempest-TestVolumeBackupRestore-2078203097',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9c98e89d4ac44c38b41aa3d603a9b0a',ramdisk_id='',reservation_id='r-ze27pc0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1693640160',owner_user_name='tempest-TestVolumeBackupRestore-1693640160-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:48:19Z,user_data=None,user_id='51ff78d1385146c598709f382eb4bc29',uuid=62dcf699-1417-4b1e-b107-3527e61c68a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "address": "fa:16:3e:2b:08:f8", "network": {"id": "25a68b42-b744-40ad-b5c6-c5e70764e097", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-639296986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"b9c98e89d4ac44c38b41aa3d603a9b0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d62e775-3c", "ovs_interfaceid": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.389 239942 DEBUG nova.network.os_vif_util [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Converting VIF {"id": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "address": "fa:16:3e:2b:08:f8", "network": {"id": "25a68b42-b744-40ad-b5c6-c5e70764e097", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-639296986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9c98e89d4ac44c38b41aa3d603a9b0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d62e775-3c", "ovs_interfaceid": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.390 239942 DEBUG nova.network.os_vif_util [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:08:f8,bridge_name='br-int',has_traffic_filtering=True,id=1d62e775-3c70-46e5-a96d-3caf6e7cfc53,network=Network(25a68b42-b744-40ad-b5c6-c5e70764e097),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d62e775-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.390 239942 DEBUG os_vif [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:08:f8,bridge_name='br-int',has_traffic_filtering=True,id=1d62e775-3c70-46e5-a96d-3caf6e7cfc53,network=Network(25a68b42-b744-40ad-b5c6-c5e70764e097),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d62e775-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.392 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.392 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.392 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.396 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.396 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d62e775-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.397 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d62e775-3c, col_values=(('external_ids', {'iface-id': '1d62e775-3c70-46e5-a96d-3caf6e7cfc53', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:08:f8', 'vm-uuid': '62dcf699-1417-4b1e-b107-3527e61c68a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.417 239942 DEBUG nova.virt.libvirt.driver [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.417 239942 DEBUG nova.virt.libvirt.driver [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.418 239942 DEBUG nova.virt.libvirt.driver [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.418 239942 DEBUG nova.virt.libvirt.driver [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] No VIF found with MAC fa:16:3e:74:66:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:48:24 np0005603435 NetworkManager[49097]: <info>  [1769834904.4534] manager: (tap1d62e775-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.452 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.455 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.460 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.462 239942 INFO os_vif [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:08:f8,bridge_name='br-int',has_traffic_filtering=True,id=1d62e775-3c70-46e5-a96d-3caf6e7cfc53,network=Network(25a68b42-b744-40ad-b5c6-c5e70764e097),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d62e775-3c')#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.474 239942 DEBUG nova.compute.manager [req-1a50ee56-8c06-4cfc-ad5c-fd75e1034929 req-42826ce1-34d0-4dba-90ae-9fb76bf057de c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Received event network-changed-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.474 239942 DEBUG nova.compute.manager [req-1a50ee56-8c06-4cfc-ad5c-fd75e1034929 req-42826ce1-34d0-4dba-90ae-9fb76bf057de c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Refreshing instance network info cache due to event network-changed-1d62e775-3c70-46e5-a96d-3caf6e7cfc53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.474 239942 DEBUG oslo_concurrency.lockutils [req-1a50ee56-8c06-4cfc-ad5c-fd75e1034929 req-42826ce1-34d0-4dba-90ae-9fb76bf057de c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.475 239942 DEBUG oslo_concurrency.lockutils [req-1a50ee56-8c06-4cfc-ad5c-fd75e1034929 req-42826ce1-34d0-4dba-90ae-9fb76bf057de c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.475 239942 DEBUG nova.network.neutron [req-1a50ee56-8c06-4cfc-ad5c-fd75e1034929 req-42826ce1-34d0-4dba-90ae-9fb76bf057de c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Refreshing network info cache for port 1d62e775-3c70-46e5-a96d-3caf6e7cfc53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.515 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.516 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.516 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] No VIF found with MAC fa:16:3e:2b:08:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.517 239942 INFO nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Using config drive#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.547 239942 DEBUG nova.storage.rbd_utils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] rbd image 62dcf699-1417-4b1e-b107-3527e61c68a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:48:24 np0005603435 nova_compute[239938]: 2026-01-31 04:48:24.614 239942 DEBUG oslo_concurrency.lockutils [None req-489769ef-c8af-4c22-a153-31ece7c975d1 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:25 np0005603435 nova_compute[239938]: 2026-01-31 04:48:25.171 239942 INFO nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Creating config drive at /var/lib/nova/instances/62dcf699-1417-4b1e-b107-3527e61c68a8/disk.config#033[00m
Jan 30 23:48:25 np0005603435 nova_compute[239938]: 2026-01-31 04:48:25.176 239942 DEBUG oslo_concurrency.processutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/62dcf699-1417-4b1e-b107-3527e61c68a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprqwfh5mz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 260 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 4.4 KiB/s wr, 79 op/s
Jan 30 23:48:25 np0005603435 nova_compute[239938]: 2026-01-31 04:48:25.303 239942 DEBUG oslo_concurrency.processutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/62dcf699-1417-4b1e-b107-3527e61c68a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprqwfh5mz" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:25 np0005603435 nova_compute[239938]: 2026-01-31 04:48:25.336 239942 DEBUG nova.storage.rbd_utils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] rbd image 62dcf699-1417-4b1e-b107-3527e61c68a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:48:25 np0005603435 nova_compute[239938]: 2026-01-31 04:48:25.340 239942 DEBUG oslo_concurrency.processutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/62dcf699-1417-4b1e-b107-3527e61c68a8/disk.config 62dcf699-1417-4b1e-b107-3527e61c68a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:25 np0005603435 nova_compute[239938]: 2026-01-31 04:48:25.483 239942 DEBUG oslo_concurrency.processutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/62dcf699-1417-4b1e-b107-3527e61c68a8/disk.config 62dcf699-1417-4b1e-b107-3527e61c68a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:25 np0005603435 nova_compute[239938]: 2026-01-31 04:48:25.484 239942 INFO nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Deleting local config drive /var/lib/nova/instances/62dcf699-1417-4b1e-b107-3527e61c68a8/disk.config because it was imported into RBD.#033[00m
Jan 30 23:48:25 np0005603435 kernel: tap1d62e775-3c: entered promiscuous mode
Jan 30 23:48:25 np0005603435 NetworkManager[49097]: <info>  [1769834905.5391] manager: (tap1d62e775-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Jan 30 23:48:25 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:25Z|00067|binding|INFO|Claiming lport 1d62e775-3c70-46e5-a96d-3caf6e7cfc53 for this chassis.
Jan 30 23:48:25 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:25Z|00068|binding|INFO|1d62e775-3c70-46e5-a96d-3caf6e7cfc53: Claiming fa:16:3e:2b:08:f8 10.100.0.7
Jan 30 23:48:25 np0005603435 nova_compute[239938]: 2026-01-31 04:48:25.583 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:25 np0005603435 nova_compute[239938]: 2026-01-31 04:48:25.595 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:25 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:25Z|00069|binding|INFO|Setting lport 1d62e775-3c70-46e5-a96d-3caf6e7cfc53 ovn-installed in OVS
Jan 30 23:48:25 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:25Z|00070|binding|INFO|Setting lport 1d62e775-3c70-46e5-a96d-3caf6e7cfc53 up in Southbound
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.595 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:08:f8 10.100.0.7'], port_security=['fa:16:3e:2b:08:f8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '62dcf699-1417-4b1e-b107-3527e61c68a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25a68b42-b744-40ad-b5c6-c5e70764e097', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9c98e89d4ac44c38b41aa3d603a9b0a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c800788-ad95-4357-9798-0a13317252e7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a2b0316-5cca-492a-8f5a-003ff4fc2b30, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=1d62e775-3c70-46e5-a96d-3caf6e7cfc53) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:48:25 np0005603435 nova_compute[239938]: 2026-01-31 04:48:25.597 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.600 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 1d62e775-3c70-46e5-a96d-3caf6e7cfc53 in datapath 25a68b42-b744-40ad-b5c6-c5e70764e097 bound to our chassis#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.606 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25a68b42-b744-40ad-b5c6-c5e70764e097#033[00m
Jan 30 23:48:25 np0005603435 systemd-machined[208030]: New machine qemu-7-instance-00000007.
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.617 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[43a1d99b-027e-44d3-83ca-b7bffee365aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.618 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap25a68b42-b1 in ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.621 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap25a68b42-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.621 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[bc4d893b-88c3-43df-8356-e9aa672db361]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.622 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b05e3d87-65b0-466a-b567-3ee8c359e413]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:25 np0005603435 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.634 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[5a97c286-c0d6-418a-ba46-e856299d541d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:25 np0005603435 systemd-udevd[252629]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.653 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[bae04e59-1896-4f14-a105-19f8b043e169]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:25 np0005603435 NetworkManager[49097]: <info>  [1769834905.6634] device (tap1d62e775-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:48:25 np0005603435 NetworkManager[49097]: <info>  [1769834905.6652] device (tap1d62e775-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:48:25 np0005603435 nova_compute[239938]: 2026-01-31 04:48:25.672 239942 DEBUG nova.network.neutron [req-1a50ee56-8c06-4cfc-ad5c-fd75e1034929 req-42826ce1-34d0-4dba-90ae-9fb76bf057de c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Updated VIF entry in instance network info cache for port 1d62e775-3c70-46e5-a96d-3caf6e7cfc53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:48:25 np0005603435 nova_compute[239938]: 2026-01-31 04:48:25.673 239942 DEBUG nova.network.neutron [req-1a50ee56-8c06-4cfc-ad5c-fd75e1034929 req-42826ce1-34d0-4dba-90ae-9fb76bf057de c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Updating instance_info_cache with network_info: [{"id": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "address": "fa:16:3e:2b:08:f8", "network": {"id": "25a68b42-b744-40ad-b5c6-c5e70764e097", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-639296986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9c98e89d4ac44c38b41aa3d603a9b0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d62e775-3c", "ovs_interfaceid": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.685 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[09128989-7126-4935-b1ab-85fd994907d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:25 np0005603435 NetworkManager[49097]: <info>  [1769834905.6941] manager: (tap25a68b42-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.692 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c8114c-9532-4609-99ba-181ec5415b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:25 np0005603435 nova_compute[239938]: 2026-01-31 04:48:25.694 239942 DEBUG oslo_concurrency.lockutils [req-1a50ee56-8c06-4cfc-ad5c-fd75e1034929 req-42826ce1-34d0-4dba-90ae-9fb76bf057de c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.740 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[64647545-5837-4439-89f3-d734330299d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.743 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a59121-040d-4b06-b560-0ddeb8ca332f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:25 np0005603435 NetworkManager[49097]: <info>  [1769834905.7656] device (tap25a68b42-b0): carrier: link connected
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.769 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[0399dd66-46d4-4ecb-b881-0d5391d79d1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.784 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[47388b4f-4916-4cab-b506-38e7be4edf84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25a68b42-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:83:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396016, 'reachable_time': 18094, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252659, 'error': None, 'target': 'ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.799 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd53a9b-086b-4121-a0e5-501efdb82ae7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef3:830d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396016, 'tstamp': 396016}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252660, 'error': None, 'target': 'ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.815 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e82e3d-969f-4376-ab2a-0522a2a27792]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25a68b42-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:83:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396016, 'reachable_time': 18094, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252661, 'error': None, 'target': 'ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.859 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5702ee-8044-4f02-aceb-615280e035cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.919 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[0c5aaf62-3e31-408f-bd50-73dd9d1feafc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.921 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25a68b42-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.921 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.922 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25a68b42-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:48:25 np0005603435 kernel: tap25a68b42-b0: entered promiscuous mode
Jan 30 23:48:25 np0005603435 NetworkManager[49097]: <info>  [1769834905.9274] manager: (tap25a68b42-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.929 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25a68b42-b0, col_values=(('external_ids', {'iface-id': '6f14bcd8-9cab-43f6-9bdc-0bd7e0c87151'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:48:25 np0005603435 nova_compute[239938]: 2026-01-31 04:48:25.930 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:25 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:25Z|00071|binding|INFO|Releasing lport 6f14bcd8-9cab-43f6-9bdc-0bd7e0c87151 from this chassis (sb_readonly=0)
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.937 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/25a68b42-b744-40ad-b5c6-c5e70764e097.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/25a68b42-b744-40ad-b5c6-c5e70764e097.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.938 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[16043c67-da69-4a02-ade6-ab52cfa00350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.939 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-25a68b42-b744-40ad-b5c6-c5e70764e097
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/25a68b42-b744-40ad-b5c6-c5e70764e097.pid.haproxy
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 25a68b42-b744-40ad-b5c6-c5e70764e097
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:48:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:25.940 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097', 'env', 'PROCESS_TAG=haproxy-25a68b42-b744-40ad-b5c6-c5e70764e097', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/25a68b42-b744-40ad-b5c6-c5e70764e097.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:48:26 np0005603435 podman[252734]: 2026-01-31 04:48:26.308972324 +0000 UTC m=+0.046227018 container create 2efc14d30e5adbb784677084e358d8007614bd920fb770f2a80db670d3793159 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 30 23:48:26 np0005603435 systemd[1]: Started libpod-conmon-2efc14d30e5adbb784677084e358d8007614bd920fb770f2a80db670d3793159.scope.
Jan 30 23:48:26 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.357 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834906.356329, 62dcf699-1417-4b1e-b107-3527e61c68a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.358 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] VM Started (Lifecycle Event)#033[00m
Jan 30 23:48:26 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea43235b57ac61ed3d295827540852601339c85f7e098230dfb52d20e58933d3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.364 239942 DEBUG nova.compute.manager [req-5a58627b-7610-4823-be1f-47a8f1538fc2 req-119a2d44-14e3-4824-b1a6-c88e0585cf9e 64ab0d5582884c689a288d7e963a655d 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Received event volume-extended-786c90f8-33a2-4d7e-a564-220dd06f70ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:48:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.378 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:48:26 np0005603435 podman[252734]: 2026-01-31 04:48:26.3796156 +0000 UTC m=+0.116870364 container init 2efc14d30e5adbb784677084e358d8007614bd920fb770f2a80db670d3793159 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.381 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834906.356456, 62dcf699-1417-4b1e-b107-3527e61c68a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.381 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:48:26 np0005603435 podman[252734]: 2026-01-31 04:48:26.286438089 +0000 UTC m=+0.023692793 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:48:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Jan 30 23:48:26 np0005603435 podman[252734]: 2026-01-31 04:48:26.383858181 +0000 UTC m=+0.121112905 container start 2efc14d30e5adbb784677084e358d8007614bd920fb770f2a80db670d3793159 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.384 239942 DEBUG nova.compute.manager [req-5a58627b-7610-4823-be1f-47a8f1538fc2 req-119a2d44-14e3-4824-b1a6-c88e0585cf9e 64ab0d5582884c689a288d7e963a655d 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Handling volume-extended event for volume 786c90f8-33a2-4d7e-a564-220dd06f70ae extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896#033[00m
Jan 30 23:48:26 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.399 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.400 239942 INFO nova.compute.manager [req-5a58627b-7610-4823-be1f-47a8f1538fc2 req-119a2d44-14e3-4824-b1a6-c88e0585cf9e 64ab0d5582884c689a288d7e963a655d 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Cinder extended volume 786c90f8-33a2-4d7e-a564-220dd06f70ae; extending it to detect new size#033[00m
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.409 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:48:26 np0005603435 neutron-haproxy-ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097[252751]: [NOTICE]   (252755) : New worker (252757) forked
Jan 30 23:48:26 np0005603435 neutron-haproxy-ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097[252751]: [NOTICE]   (252755) : Loading success.
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.428 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.541 239942 DEBUG nova.compute.manager [req-370d4922-db45-4071-b5f8-ab8f52dd4359 req-a5005825-0970-49c5-b10f-ec624dec2e63 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Received event network-vif-plugged-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.542 239942 DEBUG oslo_concurrency.lockutils [req-370d4922-db45-4071-b5f8-ab8f52dd4359 req-a5005825-0970-49c5-b10f-ec624dec2e63 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.542 239942 DEBUG oslo_concurrency.lockutils [req-370d4922-db45-4071-b5f8-ab8f52dd4359 req-a5005825-0970-49c5-b10f-ec624dec2e63 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.542 239942 DEBUG oslo_concurrency.lockutils [req-370d4922-db45-4071-b5f8-ab8f52dd4359 req-a5005825-0970-49c5-b10f-ec624dec2e63 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.543 239942 DEBUG nova.compute.manager [req-370d4922-db45-4071-b5f8-ab8f52dd4359 req-a5005825-0970-49c5-b10f-ec624dec2e63 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Processing event network-vif-plugged-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.543 239942 DEBUG nova.compute.manager [req-370d4922-db45-4071-b5f8-ab8f52dd4359 req-a5005825-0970-49c5-b10f-ec624dec2e63 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Received event network-vif-plugged-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.543 239942 DEBUG oslo_concurrency.lockutils [req-370d4922-db45-4071-b5f8-ab8f52dd4359 req-a5005825-0970-49c5-b10f-ec624dec2e63 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.543 239942 DEBUG oslo_concurrency.lockutils [req-370d4922-db45-4071-b5f8-ab8f52dd4359 req-a5005825-0970-49c5-b10f-ec624dec2e63 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.544 239942 DEBUG oslo_concurrency.lockutils [req-370d4922-db45-4071-b5f8-ab8f52dd4359 req-a5005825-0970-49c5-b10f-ec624dec2e63 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.544 239942 DEBUG nova.compute.manager [req-370d4922-db45-4071-b5f8-ab8f52dd4359 req-a5005825-0970-49c5-b10f-ec624dec2e63 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] No waiting events found dispatching network-vif-plugged-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.544 239942 WARNING nova.compute.manager [req-370d4922-db45-4071-b5f8-ab8f52dd4359 req-a5005825-0970-49c5-b10f-ec624dec2e63 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Received unexpected event network-vif-plugged-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 for instance with vm_state building and task_state spawning.
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.545 239942 DEBUG nova.compute.manager [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.551 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834906.550123, 62dcf699-1417-4b1e-b107-3527e61c68a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.552 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] VM Resumed (Lifecycle Event)
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.555 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.563 239942 DEBUG nova.virt.libvirt.driver [req-5a58627b-7610-4823-be1f-47a8f1538fc2 req-119a2d44-14e3-4824-b1a6-c88e0585cf9e 64ab0d5582884c689a288d7e963a655d 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Resizing target device vdb to 2147483648 _resize_attached_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2756
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.565 239942 INFO nova.virt.libvirt.driver [-] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Instance spawned successfully.
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.566 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.577 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.583 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.598 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.599 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.600 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.601 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.602 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.603 239942 DEBUG nova.virt.libvirt.driver [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.611 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 30 23:48:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:48:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Jan 30 23:48:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.657 239942 INFO nova.compute.manager [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Took 5.27 seconds to spawn the instance on the hypervisor.
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.657 239942 DEBUG nova.compute.manager [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:48:26 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Jan 30 23:48:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:48:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/24602174' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:48:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:48:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/24602174' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.721 239942 INFO nova.compute.manager [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Took 8.01 seconds to build instance.
Jan 30 23:48:26 np0005603435 nova_compute[239938]: 2026-01-31 04:48:26.749 239942 DEBUG oslo_concurrency.lockutils [None req-93071f7c-9134-46ee-a125-8b519862e920 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Lock "62dcf699-1417-4b1e-b107-3527e61c68a8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:48:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 260 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 4.7 KiB/s wr, 68 op/s
Jan 30 23:48:27 np0005603435 nova_compute[239938]: 2026-01-31 04:48:27.498 239942 DEBUG oslo_concurrency.lockutils [None req-514eed58-3d1a-46df-a80a-7f3a81af741c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Acquiring lock "2de06a6e-707c-434b-980d-ab52c01abb9e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:48:27 np0005603435 nova_compute[239938]: 2026-01-31 04:48:27.499 239942 DEBUG oslo_concurrency.lockutils [None req-514eed58-3d1a-46df-a80a-7f3a81af741c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:48:27 np0005603435 nova_compute[239938]: 2026-01-31 04:48:27.518 239942 INFO nova.compute.manager [None req-514eed58-3d1a-46df-a80a-7f3a81af741c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Detaching volume 786c90f8-33a2-4d7e-a564-220dd06f70ae
Jan 30 23:48:27 np0005603435 nova_compute[239938]: 2026-01-31 04:48:27.704 239942 INFO nova.virt.block_device [None req-514eed58-3d1a-46df-a80a-7f3a81af741c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Attempting to driver detach volume 786c90f8-33a2-4d7e-a564-220dd06f70ae from mountpoint /dev/vdb
Jan 30 23:48:27 np0005603435 nova_compute[239938]: 2026-01-31 04:48:27.710 239942 DEBUG nova.virt.libvirt.driver [None req-514eed58-3d1a-46df-a80a-7f3a81af741c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Attempting to detach device vdb from instance 2de06a6e-707c-434b-980d-ab52c01abb9e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 30 23:48:27 np0005603435 nova_compute[239938]: 2026-01-31 04:48:27.710 239942 DEBUG nova.virt.libvirt.guest [None req-514eed58-3d1a-46df-a80a-7f3a81af741c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:48:27 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:48:27 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-786c90f8-33a2-4d7e-a564-220dd06f70ae">
Jan 30 23:48:27 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:48:27 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:48:27 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:48:27 np0005603435 nova_compute[239938]:  <serial>786c90f8-33a2-4d7e-a564-220dd06f70ae</serial>
Jan 30 23:48:27 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:48:27 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:48:27 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 30 23:48:27 np0005603435 nova_compute[239938]: 2026-01-31 04:48:27.719 239942 INFO nova.virt.libvirt.driver [None req-514eed58-3d1a-46df-a80a-7f3a81af741c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Successfully detached device vdb from instance 2de06a6e-707c-434b-980d-ab52c01abb9e from the persistent domain config.
Jan 30 23:48:27 np0005603435 nova_compute[239938]: 2026-01-31 04:48:27.719 239942 DEBUG nova.virt.libvirt.driver [None req-514eed58-3d1a-46df-a80a-7f3a81af741c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 2de06a6e-707c-434b-980d-ab52c01abb9e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 30 23:48:27 np0005603435 nova_compute[239938]: 2026-01-31 04:48:27.720 239942 DEBUG nova.virt.libvirt.guest [None req-514eed58-3d1a-46df-a80a-7f3a81af741c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:48:27 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:48:27 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-786c90f8-33a2-4d7e-a564-220dd06f70ae">
Jan 30 23:48:27 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:48:27 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:48:27 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:48:27 np0005603435 nova_compute[239938]:  <serial>786c90f8-33a2-4d7e-a564-220dd06f70ae</serial>
Jan 30 23:48:27 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:48:27 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:48:27 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 30 23:48:27 np0005603435 nova_compute[239938]: 2026-01-31 04:48:27.827 239942 DEBUG nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Received event <DeviceRemovedEvent: 1769834907.8267858, 2de06a6e-707c-434b-980d-ab52c01abb9e => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 30 23:48:27 np0005603435 nova_compute[239938]: 2026-01-31 04:48:27.828 239942 DEBUG nova.virt.libvirt.driver [None req-514eed58-3d1a-46df-a80a-7f3a81af741c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 2de06a6e-707c-434b-980d-ab52c01abb9e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 30 23:48:27 np0005603435 nova_compute[239938]: 2026-01-31 04:48:27.831 239942 INFO nova.virt.libvirt.driver [None req-514eed58-3d1a-46df-a80a-7f3a81af741c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Successfully detached device vdb from instance 2de06a6e-707c-434b-980d-ab52c01abb9e from the live domain config.
Jan 30 23:48:28 np0005603435 nova_compute[239938]: 2026-01-31 04:48:28.024 239942 DEBUG nova.objects.instance [None req-514eed58-3d1a-46df-a80a-7f3a81af741c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lazy-loading 'flavor' on Instance uuid 2de06a6e-707c-434b-980d-ab52c01abb9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 30 23:48:28 np0005603435 nova_compute[239938]: 2026-01-31 04:48:28.059 239942 DEBUG oslo_concurrency.lockutils [None req-514eed58-3d1a-46df-a80a-7f3a81af741c 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:48:28 np0005603435 nova_compute[239938]: 2026-01-31 04:48:28.791 239942 DEBUG oslo_concurrency.lockutils [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Acquiring lock "2de06a6e-707c-434b-980d-ab52c01abb9e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:48:28 np0005603435 nova_compute[239938]: 2026-01-31 04:48:28.791 239942 DEBUG oslo_concurrency.lockutils [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:48:28 np0005603435 nova_compute[239938]: 2026-01-31 04:48:28.792 239942 DEBUG oslo_concurrency.lockutils [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Acquiring lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:48:28 np0005603435 nova_compute[239938]: 2026-01-31 04:48:28.792 239942 DEBUG oslo_concurrency.lockutils [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:48:28 np0005603435 nova_compute[239938]: 2026-01-31 04:48:28.792 239942 DEBUG oslo_concurrency.lockutils [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:48:28 np0005603435 nova_compute[239938]: 2026-01-31 04:48:28.794 239942 INFO nova.compute.manager [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Terminating instance
Jan 30 23:48:28 np0005603435 nova_compute[239938]: 2026-01-31 04:48:28.795 239942 DEBUG nova.compute.manager [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 30 23:48:28 np0005603435 kernel: tapf1498a6d-42 (unregistering): left promiscuous mode
Jan 30 23:48:28 np0005603435 NetworkManager[49097]: <info>  [1769834908.8464] device (tapf1498a6d-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:48:28 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:28Z|00072|binding|INFO|Releasing lport f1498a6d-42eb-444b-9b53-825529f5cb1c from this chassis (sb_readonly=0)
Jan 30 23:48:28 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:28Z|00073|binding|INFO|Setting lport f1498a6d-42eb-444b-9b53-825529f5cb1c down in Southbound
Jan 30 23:48:28 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:28Z|00074|binding|INFO|Removing iface tapf1498a6d-42 ovn-installed in OVS
Jan 30 23:48:28 np0005603435 nova_compute[239938]: 2026-01-31 04:48:28.867 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:48:28 np0005603435 nova_compute[239938]: 2026-01-31 04:48:28.868 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:48:28 np0005603435 nova_compute[239938]: 2026-01-31 04:48:28.881 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:48:28 np0005603435 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 30 23:48:28 np0005603435 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 13.047s CPU time.
Jan 30 23:48:28 np0005603435 systemd-machined[208030]: Machine qemu-5-instance-00000005 terminated.
Jan 30 23:48:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:28.924 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:66:d6 10.100.0.5'], port_security=['fa:16:3e:74:66:d6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2de06a6e-707c-434b-980d-ab52c01abb9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2bb69332e8af48ee847370d546eaee1e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd1f874e-55a9-4680-a797-e091d433d6bf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.250'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9ae6db6-c1c3-4fcb-b05f-8f86ed2cfe9a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=f1498a6d-42eb-444b-9b53-825529f5cb1c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 30 23:48:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:28.925 156017 INFO neutron.agent.ovn.metadata.agent [-] Port f1498a6d-42eb-444b-9b53-825529f5cb1c in datapath 5c3579c7-dc9d-4cf7-9e43-1aa98a65254a unbound from our chassis
Jan 30 23:48:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:28.926 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c3579c7-dc9d-4cf7-9e43-1aa98a65254a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 30 23:48:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:28.927 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[15ad2d0f-96e9-4b46-8b18-20d011c3b19c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:48:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:28.928 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a namespace which is not needed anymore
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.025 239942 INFO nova.virt.libvirt.driver [-] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Instance destroyed successfully.
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.026 239942 DEBUG nova.objects.instance [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lazy-loading 'resources' on Instance uuid 2de06a6e-707c-434b-980d-ab52c01abb9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.042 239942 DEBUG nova.virt.libvirt.vif [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:47:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1430074972',display_name='tempest-VolumesExtendAttachedTest-instance-1430074972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1430074972',id=5,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzPN2P3X8OOSzjbiS4D0CkZSzKSGgVBUZMk1xvOhsc7ycfoOzirzhWNOLqmqsMOlSnX/agcppGzCjsfDa+iMVhnTYHmcD/fg7WgCyqoyG/ORaEQfSpvUjcfbgpTfiszng==',key_name='tempest-keypair-1954170071',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:47:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2bb69332e8af48ee847370d546eaee1e',ramdisk_id='',reservation_id='r-g3u4u6f0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesExtendAttachedTest-212133215',owner_user_name='tempest-VolumesExtendAttachedTest-212133215-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:47:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0b66a987b14d4c37aedbb2fe48fd1547',uuid=2de06a6e-707c-434b-980d-ab52c01abb9e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "address": "fa:16:3e:74:66:d6", "network": {"id": "5c3579c7-dc9d-4cf7-9e43-1aa98a65254a", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-253417945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2bb69332e8af48ee847370d546eaee1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1498a6d-42", "ovs_interfaceid": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.043 239942 DEBUG nova.network.os_vif_util [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Converting VIF {"id": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "address": "fa:16:3e:74:66:d6", "network": {"id": "5c3579c7-dc9d-4cf7-9e43-1aa98a65254a", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-253417945-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2bb69332e8af48ee847370d546eaee1e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1498a6d-42", "ovs_interfaceid": "f1498a6d-42eb-444b-9b53-825529f5cb1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.045 239942 DEBUG nova.network.os_vif_util [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:74:66:d6,bridge_name='br-int',has_traffic_filtering=True,id=f1498a6d-42eb-444b-9b53-825529f5cb1c,network=Network(5c3579c7-dc9d-4cf7-9e43-1aa98a65254a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1498a6d-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.046 239942 DEBUG os_vif [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:66:d6,bridge_name='br-int',has_traffic_filtering=True,id=f1498a6d-42eb-444b-9b53-825529f5cb1c,network=Network(5c3579c7-dc9d-4cf7-9e43-1aa98a65254a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1498a6d-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.049 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.050 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1498a6d-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.053 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.055 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.058 239942 INFO os_vif [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:66:d6,bridge_name='br-int',has_traffic_filtering=True,id=f1498a6d-42eb-444b-9b53-825529f5cb1c,network=Network(5c3579c7-dc9d-4cf7-9e43-1aa98a65254a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1498a6d-42')
Jan 30 23:48:29 np0005603435 neutron-haproxy-ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a[251455]: [NOTICE]   (251460) : haproxy version is 2.8.14-c23fe91
Jan 30 23:48:29 np0005603435 neutron-haproxy-ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a[251455]: [NOTICE]   (251460) : path to executable is /usr/sbin/haproxy
Jan 30 23:48:29 np0005603435 neutron-haproxy-ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a[251455]: [WARNING]  (251460) : Exiting Master process...
Jan 30 23:48:29 np0005603435 neutron-haproxy-ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a[251455]: [ALERT]    (251460) : Current worker (251462) exited with code 143 (Terminated)
Jan 30 23:48:29 np0005603435 neutron-haproxy-ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a[251455]: [WARNING]  (251460) : All workers exited. Exiting... (0)
Jan 30 23:48:29 np0005603435 systemd[1]: libpod-ba3025b8f2548fa7319ac8a0963a7f8fbf42318bf2c81c8d431fe3a6c1fe2106.scope: Deactivated successfully.
Jan 30 23:48:29 np0005603435 podman[252798]: 2026-01-31 04:48:29.087715058 +0000 UTC m=+0.059063683 container died ba3025b8f2548fa7319ac8a0963a7f8fbf42318bf2c81c8d431fe3a6c1fe2106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 30 23:48:29 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba3025b8f2548fa7319ac8a0963a7f8fbf42318bf2c81c8d431fe3a6c1fe2106-userdata-shm.mount: Deactivated successfully.
Jan 30 23:48:29 np0005603435 systemd[1]: var-lib-containers-storage-overlay-45fb8a6fae0118f9ef751265f619bb545d72febd9c22cbc9f8fd0e244ebce819-merged.mount: Deactivated successfully.
Jan 30 23:48:29 np0005603435 podman[252798]: 2026-01-31 04:48:29.134449057 +0000 UTC m=+0.105797642 container cleanup ba3025b8f2548fa7319ac8a0963a7f8fbf42318bf2c81c8d431fe3a6c1fe2106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 30 23:48:29 np0005603435 systemd[1]: libpod-conmon-ba3025b8f2548fa7319ac8a0963a7f8fbf42318bf2c81c8d431fe3a6c1fe2106.scope: Deactivated successfully.
Jan 30 23:48:29 np0005603435 podman[252850]: 2026-01-31 04:48:29.198815524 +0000 UTC m=+0.045573602 container remove ba3025b8f2548fa7319ac8a0963a7f8fbf42318bf2c81c8d431fe3a6c1fe2106 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.241 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:29.244 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[a4416fb7-67e9-449b-bee9-fe983af988ad]: (4, ('Sat Jan 31 04:48:29 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a (ba3025b8f2548fa7319ac8a0963a7f8fbf42318bf2c81c8d431fe3a6c1fe2106)\nba3025b8f2548fa7319ac8a0963a7f8fbf42318bf2c81c8d431fe3a6c1fe2106\nSat Jan 31 04:48:29 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a (ba3025b8f2548fa7319ac8a0963a7f8fbf42318bf2c81c8d431fe3a6c1fe2106)\nba3025b8f2548fa7319ac8a0963a7f8fbf42318bf2c81c8d431fe3a6c1fe2106\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 260 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 723 KiB/s rd, 23 KiB/s wr, 120 op/s
Jan 30 23:48:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:29.246 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2446523b-8093-44d6-8763-c181d38e1afb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:29.247 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c3579c7-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:48:29 np0005603435 kernel: tap5c3579c7-d0: left promiscuous mode
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.251 239942 DEBUG nova.compute.manager [req-2d87f764-f18d-4238-b7d0-c14ab574327c req-49fb640b-b4db-49fd-a0f4-53d0c9864519 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Received event network-vif-unplugged-f1498a6d-42eb-444b-9b53-825529f5cb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.253 239942 DEBUG oslo_concurrency.lockutils [req-2d87f764-f18d-4238-b7d0-c14ab574327c req-49fb640b-b4db-49fd-a0f4-53d0c9864519 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.253 239942 DEBUG oslo_concurrency.lockutils [req-2d87f764-f18d-4238-b7d0-c14ab574327c req-49fb640b-b4db-49fd-a0f4-53d0c9864519 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.254 239942 DEBUG oslo_concurrency.lockutils [req-2d87f764-f18d-4238-b7d0-c14ab574327c req-49fb640b-b4db-49fd-a0f4-53d0c9864519 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.254 239942 DEBUG nova.compute.manager [req-2d87f764-f18d-4238-b7d0-c14ab574327c req-49fb640b-b4db-49fd-a0f4-53d0c9864519 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] No waiting events found dispatching network-vif-unplugged-f1498a6d-42eb-444b-9b53-825529f5cb1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.255 239942 DEBUG nova.compute.manager [req-2d87f764-f18d-4238-b7d0-c14ab574327c req-49fb640b-b4db-49fd-a0f4-53d0c9864519 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Received event network-vif-unplugged-f1498a6d-42eb-444b-9b53-825529f5cb1c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.255 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:29.256 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[cc75ca43-aa03-4e84-8712-97ef4b4874b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:29.268 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[d152152c-90ef-46c9-9f81-23ebc0c7ce09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:29.269 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[8494192d-8bb7-445f-b736-2213db34ea5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:29.280 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[43d02998-4270-499a-82bf-07bcfeb7011b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 392015, 'reachable_time': 26294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252866, 'error': None, 'target': 'ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:29 np0005603435 systemd[1]: run-netns-ovnmeta\x2d5c3579c7\x2ddc9d\x2d4cf7\x2d9e43\x2d1aa98a65254a.mount: Deactivated successfully.
Jan 30 23:48:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:29.282 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c3579c7-dc9d-4cf7-9e43-1aa98a65254a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:48:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:29.282 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[700f94ce-96dd-4995-adb5-3ab6aadd6b51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.463 239942 INFO nova.virt.libvirt.driver [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Deleting instance files /var/lib/nova/instances/2de06a6e-707c-434b-980d-ab52c01abb9e_del#033[00m
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.463 239942 INFO nova.virt.libvirt.driver [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Deletion of /var/lib/nova/instances/2de06a6e-707c-434b-980d-ab52c01abb9e_del complete#033[00m
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.517 239942 INFO nova.compute.manager [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.517 239942 DEBUG oslo.service.loopingcall [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.518 239942 DEBUG nova.compute.manager [-] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:48:29 np0005603435 nova_compute[239938]: 2026-01-31 04:48:29.518 239942 DEBUG nova.network.neutron [-] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:48:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 242 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 115 op/s
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.260 239942 DEBUG nova.network.neutron [-] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.281 239942 INFO nova.compute.manager [-] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Took 1.76 seconds to deallocate network for instance.#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.326 239942 DEBUG oslo_concurrency.lockutils [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.327 239942 DEBUG oslo_concurrency.lockutils [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.336 239942 DEBUG nova.compute.manager [req-9fe90873-1c4c-4c53-80c8-7ec0f82296a8 req-c0e10e8c-af07-4d9a-8b68-d6afd36d4123 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Received event network-vif-plugged-f1498a6d-42eb-444b-9b53-825529f5cb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.337 239942 DEBUG oslo_concurrency.lockutils [req-9fe90873-1c4c-4c53-80c8-7ec0f82296a8 req-c0e10e8c-af07-4d9a-8b68-d6afd36d4123 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.337 239942 DEBUG oslo_concurrency.lockutils [req-9fe90873-1c4c-4c53-80c8-7ec0f82296a8 req-c0e10e8c-af07-4d9a-8b68-d6afd36d4123 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.338 239942 DEBUG oslo_concurrency.lockutils [req-9fe90873-1c4c-4c53-80c8-7ec0f82296a8 req-c0e10e8c-af07-4d9a-8b68-d6afd36d4123 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.338 239942 DEBUG nova.compute.manager [req-9fe90873-1c4c-4c53-80c8-7ec0f82296a8 req-c0e10e8c-af07-4d9a-8b68-d6afd36d4123 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] No waiting events found dispatching network-vif-plugged-f1498a6d-42eb-444b-9b53-825529f5cb1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.339 239942 WARNING nova.compute.manager [req-9fe90873-1c4c-4c53-80c8-7ec0f82296a8 req-c0e10e8c-af07-4d9a-8b68-d6afd36d4123 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Received unexpected event network-vif-plugged-f1498a6d-42eb-444b-9b53-825529f5cb1c for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.339 239942 DEBUG nova.compute.manager [req-9fe90873-1c4c-4c53-80c8-7ec0f82296a8 req-c0e10e8c-af07-4d9a-8b68-d6afd36d4123 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Received event network-changed-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.340 239942 DEBUG nova.compute.manager [req-9fe90873-1c4c-4c53-80c8-7ec0f82296a8 req-c0e10e8c-af07-4d9a-8b68-d6afd36d4123 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Refreshing instance network info cache due to event network-changed-1d62e775-3c70-46e5-a96d-3caf6e7cfc53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.340 239942 DEBUG oslo_concurrency.lockutils [req-9fe90873-1c4c-4c53-80c8-7ec0f82296a8 req-c0e10e8c-af07-4d9a-8b68-d6afd36d4123 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.341 239942 DEBUG oslo_concurrency.lockutils [req-9fe90873-1c4c-4c53-80c8-7ec0f82296a8 req-c0e10e8c-af07-4d9a-8b68-d6afd36d4123 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.341 239942 DEBUG nova.network.neutron [req-9fe90873-1c4c-4c53-80c8-7ec0f82296a8 req-c0e10e8c-af07-4d9a-8b68-d6afd36d4123 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Refreshing network info cache for port 1d62e775-3c70-46e5-a96d-3caf6e7cfc53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.353 239942 DEBUG nova.compute.manager [req-bde66fa1-bc8b-4fbf-82ae-058451b5bd62 req-05b9d4be-9543-4c96-a05e-811457bd42b2 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Received event network-vif-deleted-f1498a6d-42eb-444b-9b53-825529f5cb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.406 239942 DEBUG oslo_concurrency.processutils [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:48:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:48:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:48:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744813650' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.933 239942 DEBUG oslo_concurrency.processutils [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.938 239942 DEBUG nova.compute.provider_tree [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.954 239942 DEBUG nova.scheduler.client.report [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.974 239942 DEBUG oslo_concurrency.lockutils [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:31 np0005603435 nova_compute[239938]: 2026-01-31 04:48:31.996 239942 INFO nova.scheduler.client.report [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Deleted allocations for instance 2de06a6e-707c-434b-980d-ab52c01abb9e#033[00m
Jan 30 23:48:32 np0005603435 nova_compute[239938]: 2026-01-31 04:48:32.086 239942 DEBUG oslo_concurrency.lockutils [None req-13192428-fad2-4984-9714-97ad27da514a 0b66a987b14d4c37aedbb2fe48fd1547 2bb69332e8af48ee847370d546eaee1e - - default default] Lock "2de06a6e-707c-434b-980d-ab52c01abb9e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:48:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/666791600' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:48:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:48:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/666791600' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:48:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 25 KiB/s wr, 206 op/s
Jan 30 23:48:33 np0005603435 nova_compute[239938]: 2026-01-31 04:48:33.415 239942 DEBUG nova.compute.manager [req-fdd1b045-4213-4f94-bb08-480d1355b65d req-07bb07ba-a1ca-47b8-ace5-12a428041c25 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Received event network-changed-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:48:33 np0005603435 nova_compute[239938]: 2026-01-31 04:48:33.416 239942 DEBUG nova.compute.manager [req-fdd1b045-4213-4f94-bb08-480d1355b65d req-07bb07ba-a1ca-47b8-ace5-12a428041c25 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Refreshing instance network info cache due to event network-changed-1d62e775-3c70-46e5-a96d-3caf6e7cfc53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:48:33 np0005603435 nova_compute[239938]: 2026-01-31 04:48:33.417 239942 DEBUG oslo_concurrency.lockutils [req-fdd1b045-4213-4f94-bb08-480d1355b65d req-07bb07ba-a1ca-47b8-ace5-12a428041c25 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:48:33 np0005603435 nova_compute[239938]: 2026-01-31 04:48:33.469 239942 DEBUG nova.network.neutron [req-9fe90873-1c4c-4c53-80c8-7ec0f82296a8 req-c0e10e8c-af07-4d9a-8b68-d6afd36d4123 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Updated VIF entry in instance network info cache for port 1d62e775-3c70-46e5-a96d-3caf6e7cfc53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:48:33 np0005603435 nova_compute[239938]: 2026-01-31 04:48:33.470 239942 DEBUG nova.network.neutron [req-9fe90873-1c4c-4c53-80c8-7ec0f82296a8 req-c0e10e8c-af07-4d9a-8b68-d6afd36d4123 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Updating instance_info_cache with network_info: [{"id": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "address": "fa:16:3e:2b:08:f8", "network": {"id": "25a68b42-b744-40ad-b5c6-c5e70764e097", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-639296986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9c98e89d4ac44c38b41aa3d603a9b0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d62e775-3c", "ovs_interfaceid": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:48:33 np0005603435 nova_compute[239938]: 2026-01-31 04:48:33.489 239942 DEBUG oslo_concurrency.lockutils [req-9fe90873-1c4c-4c53-80c8-7ec0f82296a8 req-c0e10e8c-af07-4d9a-8b68-d6afd36d4123 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:48:33 np0005603435 nova_compute[239938]: 2026-01-31 04:48:33.490 239942 DEBUG oslo_concurrency.lockutils [req-fdd1b045-4213-4f94-bb08-480d1355b65d req-07bb07ba-a1ca-47b8-ace5-12a428041c25 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:48:33 np0005603435 nova_compute[239938]: 2026-01-31 04:48:33.491 239942 DEBUG nova.network.neutron [req-fdd1b045-4213-4f94-bb08-480d1355b65d req-07bb07ba-a1ca-47b8-ace5-12a428041c25 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Refreshing network info cache for port 1d62e775-3c70-46e5-a96d-3caf6e7cfc53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:48:34 np0005603435 nova_compute[239938]: 2026-01-31 04:48:34.053 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:34 np0005603435 nova_compute[239938]: 2026-01-31 04:48:34.243 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:48:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/907718772' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:48:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:48:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/907718772' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:48:34 np0005603435 nova_compute[239938]: 2026-01-31 04:48:34.452 239942 DEBUG nova.network.neutron [req-fdd1b045-4213-4f94-bb08-480d1355b65d req-07bb07ba-a1ca-47b8-ace5-12a428041c25 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Updated VIF entry in instance network info cache for port 1d62e775-3c70-46e5-a96d-3caf6e7cfc53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:48:34 np0005603435 nova_compute[239938]: 2026-01-31 04:48:34.453 239942 DEBUG nova.network.neutron [req-fdd1b045-4213-4f94-bb08-480d1355b65d req-07bb07ba-a1ca-47b8-ace5-12a428041c25 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Updating instance_info_cache with network_info: [{"id": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "address": "fa:16:3e:2b:08:f8", "network": {"id": "25a68b42-b744-40ad-b5c6-c5e70764e097", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-639296986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9c98e89d4ac44c38b41aa3d603a9b0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d62e775-3c", "ovs_interfaceid": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:48:34 np0005603435 nova_compute[239938]: 2026-01-31 04:48:34.467 239942 DEBUG oslo_concurrency.lockutils [req-fdd1b045-4213-4f94-bb08-480d1355b65d req-07bb07ba-a1ca-47b8-ace5-12a428041c25 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:48:34 np0005603435 nova_compute[239938]: 2026-01-31 04:48:34.467 239942 DEBUG nova.compute.manager [req-fdd1b045-4213-4f94-bb08-480d1355b65d req-07bb07ba-a1ca-47b8-ace5-12a428041c25 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Received event network-changed-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:48:34 np0005603435 nova_compute[239938]: 2026-01-31 04:48:34.468 239942 DEBUG nova.compute.manager [req-fdd1b045-4213-4f94-bb08-480d1355b65d req-07bb07ba-a1ca-47b8-ace5-12a428041c25 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Refreshing instance network info cache due to event network-changed-1d62e775-3c70-46e5-a96d-3caf6e7cfc53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:48:34 np0005603435 nova_compute[239938]: 2026-01-31 04:48:34.468 239942 DEBUG oslo_concurrency.lockutils [req-fdd1b045-4213-4f94-bb08-480d1355b65d req-07bb07ba-a1ca-47b8-ace5-12a428041c25 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:48:34 np0005603435 nova_compute[239938]: 2026-01-31 04:48:34.468 239942 DEBUG oslo_concurrency.lockutils [req-fdd1b045-4213-4f94-bb08-480d1355b65d req-07bb07ba-a1ca-47b8-ace5-12a428041c25 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:48:34 np0005603435 nova_compute[239938]: 2026-01-31 04:48:34.469 239942 DEBUG nova.network.neutron [req-fdd1b045-4213-4f94-bb08-480d1355b65d req-07bb07ba-a1ca-47b8-ace5-12a428041c25 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Refreshing network info cache for port 1d62e775-3c70-46e5-a96d-3caf6e7cfc53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:48:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 191 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 479 KiB/s wr, 196 op/s
Jan 30 23:48:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:48:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4219393206' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:48:35 np0005603435 nova_compute[239938]: 2026-01-31 04:48:35.652 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769834900.6515715, 80f921cb-ec48-41f8-88b0-3ba2a51efd0c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:48:35 np0005603435 nova_compute[239938]: 2026-01-31 04:48:35.653 239942 INFO nova.compute.manager [-] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:48:35 np0005603435 nova_compute[239938]: 2026-01-31 04:48:35.668 239942 DEBUG nova.compute.manager [None req-135c706e-2e7d-415c-bb59-36b8fdb561d2 - - - - - -] [instance: 80f921cb-ec48-41f8-88b0-3ba2a51efd0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:48:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Jan 30 23:48:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Jan 30 23:48:35 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Jan 30 23:48:36 np0005603435 nova_compute[239938]: 2026-01-31 04:48:36.024 239942 DEBUG nova.network.neutron [req-fdd1b045-4213-4f94-bb08-480d1355b65d req-07bb07ba-a1ca-47b8-ace5-12a428041c25 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Updated VIF entry in instance network info cache for port 1d62e775-3c70-46e5-a96d-3caf6e7cfc53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:48:36 np0005603435 nova_compute[239938]: 2026-01-31 04:48:36.025 239942 DEBUG nova.network.neutron [req-fdd1b045-4213-4f94-bb08-480d1355b65d req-07bb07ba-a1ca-47b8-ace5-12a428041c25 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Updating instance_info_cache with network_info: [{"id": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "address": "fa:16:3e:2b:08:f8", "network": {"id": "25a68b42-b744-40ad-b5c6-c5e70764e097", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-639296986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9c98e89d4ac44c38b41aa3d603a9b0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d62e775-3c", "ovs_interfaceid": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:48:36 np0005603435 nova_compute[239938]: 2026-01-31 04:48:36.042 239942 DEBUG oslo_concurrency.lockutils [req-fdd1b045-4213-4f94-bb08-480d1355b65d req-07bb07ba-a1ca-47b8-ace5-12a428041c25 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:48:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:48:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Jan 30 23:48:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Jan 30 23:48:36 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Jan 30 23:48:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:48:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:48:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:48:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:48:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:48:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:48:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 230 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.4 MiB/s wr, 186 op/s
Jan 30 23:48:37 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:37Z|00075|binding|INFO|Releasing lport 6f14bcd8-9cab-43f6-9bdc-0bd7e0c87151 from this chassis (sb_readonly=0)
Jan 30 23:48:37 np0005603435 nova_compute[239938]: 2026-01-31 04:48:37.432 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:38 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:38Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2b:08:f8 10.100.0.7
Jan 30 23:48:38 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:38Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2b:08:f8 10.100.0.7
Jan 30 23:48:39 np0005603435 nova_compute[239938]: 2026-01-31 04:48:39.056 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:39 np0005603435 nova_compute[239938]: 2026-01-31 04:48:39.244 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 280 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 6.4 MiB/s wr, 237 op/s
Jan 30 23:48:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:48:39 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/192006412' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:48:40 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:40Z|00076|binding|INFO|Releasing lport 6f14bcd8-9cab-43f6-9bdc-0bd7e0c87151 from this chassis (sb_readonly=0)
Jan 30 23:48:40 np0005603435 nova_compute[239938]: 2026-01-31 04:48:40.712 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Jan 30 23:48:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Jan 30 23:48:40 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Jan 30 23:48:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 299 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 9.6 MiB/s wr, 224 op/s
Jan 30 23:48:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:48:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Jan 30 23:48:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Jan 30 23:48:42 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Jan 30 23:48:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:48:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2413180779' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:48:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:48:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2413180779' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:48:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 348 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 8.5 MiB/s wr, 261 op/s
Jan 30 23:48:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Jan 30 23:48:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Jan 30 23:48:43 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.022 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769834909.0210717, 2de06a6e-707c-434b-980d-ab52c01abb9e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.023 239942 INFO nova.compute.manager [-] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.109 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.176 239942 DEBUG nova.compute.manager [None req-a412fb28-ac77-4f06-b7eb-ae91e45174ed - - - - - -] [instance: 2de06a6e-707c-434b-980d-ab52c01abb9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.247 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.590 239942 DEBUG nova.compute.manager [req-e17d2582-906c-481d-acb9-e48eb5a5cf31 req-b53726b8-1466-40f3-a4a9-c5bfe52d41d0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Received event network-changed-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.591 239942 DEBUG nova.compute.manager [req-e17d2582-906c-481d-acb9-e48eb5a5cf31 req-b53726b8-1466-40f3-a4a9-c5bfe52d41d0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Refreshing instance network info cache due to event network-changed-1d62e775-3c70-46e5-a96d-3caf6e7cfc53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.592 239942 DEBUG oslo_concurrency.lockutils [req-e17d2582-906c-481d-acb9-e48eb5a5cf31 req-b53726b8-1466-40f3-a4a9-c5bfe52d41d0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.592 239942 DEBUG oslo_concurrency.lockutils [req-e17d2582-906c-481d-acb9-e48eb5a5cf31 req-b53726b8-1466-40f3-a4a9-c5bfe52d41d0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.592 239942 DEBUG nova.network.neutron [req-e17d2582-906c-481d-acb9-e48eb5a5cf31 req-b53726b8-1466-40f3-a4a9-c5bfe52d41d0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Refreshing network info cache for port 1d62e775-3c70-46e5-a96d-3caf6e7cfc53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.675 239942 DEBUG oslo_concurrency.lockutils [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Acquiring lock "62dcf699-1417-4b1e-b107-3527e61c68a8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.675 239942 DEBUG oslo_concurrency.lockutils [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Lock "62dcf699-1417-4b1e-b107-3527e61c68a8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.676 239942 DEBUG oslo_concurrency.lockutils [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Acquiring lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.676 239942 DEBUG oslo_concurrency.lockutils [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.677 239942 DEBUG oslo_concurrency.lockutils [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.679 239942 INFO nova.compute.manager [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Terminating instance#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.681 239942 DEBUG nova.compute.manager [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:48:44 np0005603435 kernel: tap1d62e775-3c (unregistering): left promiscuous mode
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.737 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:44 np0005603435 NetworkManager[49097]: <info>  [1769834924.7381] device (tap1d62e775-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.745 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:44 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:44Z|00077|binding|INFO|Releasing lport 1d62e775-3c70-46e5-a96d-3caf6e7cfc53 from this chassis (sb_readonly=0)
Jan 30 23:48:44 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:44Z|00078|binding|INFO|Setting lport 1d62e775-3c70-46e5-a96d-3caf6e7cfc53 down in Southbound
Jan 30 23:48:44 np0005603435 ovn_controller[145670]: 2026-01-31T04:48:44Z|00079|binding|INFO|Removing iface tap1d62e775-3c ovn-installed in OVS
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.748 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:44.756 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:08:f8 10.100.0.7'], port_security=['fa:16:3e:2b:08:f8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '62dcf699-1417-4b1e-b107-3527e61c68a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25a68b42-b744-40ad-b5c6-c5e70764e097', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9c98e89d4ac44c38b41aa3d603a9b0a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c800788-ad95-4357-9798-0a13317252e7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a2b0316-5cca-492a-8f5a-003ff4fc2b30, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=1d62e775-3c70-46e5-a96d-3caf6e7cfc53) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:48:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:44.758 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 1d62e775-3c70-46e5-a96d-3caf6e7cfc53 in datapath 25a68b42-b744-40ad-b5c6-c5e70764e097 unbound from our chassis#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.759 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:44.760 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 25a68b42-b744-40ad-b5c6-c5e70764e097, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:48:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:44.762 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ff56c8-09f1-4014-9e2c-6d54ce9f3097]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:44.762 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097 namespace which is not needed anymore#033[00m
Jan 30 23:48:44 np0005603435 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 30 23:48:44 np0005603435 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 12.048s CPU time.
Jan 30 23:48:44 np0005603435 systemd-machined[208030]: Machine qemu-7-instance-00000007 terminated.
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.925 239942 INFO nova.virt.libvirt.driver [-] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Instance destroyed successfully.#033[00m
Jan 30 23:48:44 np0005603435 neutron-haproxy-ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097[252751]: [NOTICE]   (252755) : haproxy version is 2.8.14-c23fe91
Jan 30 23:48:44 np0005603435 neutron-haproxy-ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097[252751]: [NOTICE]   (252755) : path to executable is /usr/sbin/haproxy
Jan 30 23:48:44 np0005603435 neutron-haproxy-ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097[252751]: [WARNING]  (252755) : Exiting Master process...
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.926 239942 DEBUG nova.objects.instance [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Lazy-loading 'resources' on Instance uuid 62dcf699-1417-4b1e-b107-3527e61c68a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:48:44 np0005603435 neutron-haproxy-ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097[252751]: [ALERT]    (252755) : Current worker (252757) exited with code 143 (Terminated)
Jan 30 23:48:44 np0005603435 neutron-haproxy-ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097[252751]: [WARNING]  (252755) : All workers exited. Exiting... (0)
Jan 30 23:48:44 np0005603435 systemd[1]: libpod-2efc14d30e5adbb784677084e358d8007614bd920fb770f2a80db670d3793159.scope: Deactivated successfully.
Jan 30 23:48:44 np0005603435 podman[252916]: 2026-01-31 04:48:44.938854666 +0000 UTC m=+0.070404112 container died 2efc14d30e5adbb784677084e358d8007614bd920fb770f2a80db670d3793159 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.941 239942 DEBUG nova.virt.libvirt.vif [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:48:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-803424544',display_name='tempest-TestVolumeBackupRestore-server-803424544',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-803424544',id=7,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCP1m7eJMGiS8harXOXi6bVep4rPBK/7p7pgc2N2rfY7Yh91jUe7m0NHPNsM5XRn6r1ZxrhSUckERbS/1BFLnjE+Mjher/8KbGtg/8DwssuxOIEaVMVMFX1Pkwd5lI8s6g==',key_name='tempest-TestVolumeBackupRestore-2078203097',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:48:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9c98e89d4ac44c38b41aa3d603a9b0a',ramdisk_id='',reservation_id='r-ze27pc0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-1693640160',owner_user_name='tempest-TestVolumeBackupRestore-1693640160-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:48:26Z,user_data=None,user_id='51ff78d1385146c598709f382eb4bc29',uuid=62dcf699-1417-4b1e-b107-3527e61c68a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "address": "fa:16:3e:2b:08:f8", "network": {"id": "25a68b42-b744-40ad-b5c6-c5e70764e097", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-639296986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9c98e89d4ac44c38b41aa3d603a9b0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d62e775-3c", "ovs_interfaceid": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.942 239942 DEBUG nova.network.os_vif_util [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Converting VIF {"id": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "address": "fa:16:3e:2b:08:f8", "network": {"id": "25a68b42-b744-40ad-b5c6-c5e70764e097", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-639296986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9c98e89d4ac44c38b41aa3d603a9b0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d62e775-3c", "ovs_interfaceid": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.943 239942 DEBUG nova.network.os_vif_util [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:08:f8,bridge_name='br-int',has_traffic_filtering=True,id=1d62e775-3c70-46e5-a96d-3caf6e7cfc53,network=Network(25a68b42-b744-40ad-b5c6-c5e70764e097),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d62e775-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.944 239942 DEBUG os_vif [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:08:f8,bridge_name='br-int',has_traffic_filtering=True,id=1d62e775-3c70-46e5-a96d-3caf6e7cfc53,network=Network(25a68b42-b744-40ad-b5c6-c5e70764e097),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d62e775-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.947 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.947 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d62e775-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.953 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:48:44 np0005603435 nova_compute[239938]: 2026-01-31 04:48:44.959 239942 INFO os_vif [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:08:f8,bridge_name='br-int',has_traffic_filtering=True,id=1d62e775-3c70-46e5-a96d-3caf6e7cfc53,network=Network(25a68b42-b744-40ad-b5c6-c5e70764e097),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d62e775-3c')#033[00m
Jan 30 23:48:44 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2efc14d30e5adbb784677084e358d8007614bd920fb770f2a80db670d3793159-userdata-shm.mount: Deactivated successfully.
Jan 30 23:48:44 np0005603435 systemd[1]: var-lib-containers-storage-overlay-ea43235b57ac61ed3d295827540852601339c85f7e098230dfb52d20e58933d3-merged.mount: Deactivated successfully.
Jan 30 23:48:44 np0005603435 podman[252916]: 2026-01-31 04:48:44.98538311 +0000 UTC m=+0.116932536 container cleanup 2efc14d30e5adbb784677084e358d8007614bd920fb770f2a80db670d3793159 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 30 23:48:44 np0005603435 systemd[1]: libpod-conmon-2efc14d30e5adbb784677084e358d8007614bd920fb770f2a80db670d3793159.scope: Deactivated successfully.
Jan 30 23:48:45 np0005603435 podman[252969]: 2026-01-31 04:48:45.061894345 +0000 UTC m=+0.057025854 container remove 2efc14d30e5adbb784677084e358d8007614bd920fb770f2a80db670d3793159 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 30 23:48:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:45.068 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[234d77ad-1a3b-45b3-948c-95844b5b3dc3]: (4, ('Sat Jan 31 04:48:44 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097 (2efc14d30e5adbb784677084e358d8007614bd920fb770f2a80db670d3793159)\n2efc14d30e5adbb784677084e358d8007614bd920fb770f2a80db670d3793159\nSat Jan 31 04:48:44 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097 (2efc14d30e5adbb784677084e358d8007614bd920fb770f2a80db670d3793159)\n2efc14d30e5adbb784677084e358d8007614bd920fb770f2a80db670d3793159\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:45.070 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[3345bce4-ed87-4dd7-aa7e-09a2fb478689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:45.073 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25a68b42-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:48:45 np0005603435 nova_compute[239938]: 2026-01-31 04:48:45.076 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:45 np0005603435 kernel: tap25a68b42-b0: left promiscuous mode
Jan 30 23:48:45 np0005603435 nova_compute[239938]: 2026-01-31 04:48:45.084 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:48:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:45.088 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[146be813-3b6b-4406-8cf7-e8112bca4658]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:45.103 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[d73ec6eb-2b2a-42bd-95fe-083d79d7e15c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:45.105 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[481cb123-15c3-433e-856c-ecd9cc8dc7a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:45.125 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[51962733-b52a-4eca-b7ed-c5920afd26f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396008, 'reachable_time': 33409, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252985, 'error': None, 'target': 'ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:45.129 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-25a68b42-b744-40ad-b5c6-c5e70764e097 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:48:45 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:45.129 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[acda30ba-d94f-43b4-93f0-39cf959244b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:48:45 np0005603435 systemd[1]: run-netns-ovnmeta\x2d25a68b42\x2db744\x2d40ad\x2db5c6\x2dc5e70764e097.mount: Deactivated successfully.
Jan 30 23:48:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 341 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.4 MiB/s wr, 220 op/s
Jan 30 23:48:45 np0005603435 nova_compute[239938]: 2026-01-31 04:48:45.256 239942 INFO nova.virt.libvirt.driver [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Deleting instance files /var/lib/nova/instances/62dcf699-1417-4b1e-b107-3527e61c68a8_del#033[00m
Jan 30 23:48:45 np0005603435 nova_compute[239938]: 2026-01-31 04:48:45.257 239942 INFO nova.virt.libvirt.driver [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Deletion of /var/lib/nova/instances/62dcf699-1417-4b1e-b107-3527e61c68a8_del complete#033[00m
Jan 30 23:48:45 np0005603435 nova_compute[239938]: 2026-01-31 04:48:45.310 239942 INFO nova.compute.manager [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Took 0.63 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:48:45 np0005603435 nova_compute[239938]: 2026-01-31 04:48:45.311 239942 DEBUG oslo.service.loopingcall [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:48:45 np0005603435 nova_compute[239938]: 2026-01-31 04:48:45.312 239942 DEBUG nova.compute.manager [-] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:48:45 np0005603435 nova_compute[239938]: 2026-01-31 04:48:45.312 239942 DEBUG nova.network.neutron [-] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:48:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Jan 30 23:48:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Jan 30 23:48:45 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.483 239942 DEBUG nova.network.neutron [req-e17d2582-906c-481d-acb9-e48eb5a5cf31 req-b53726b8-1466-40f3-a4a9-c5bfe52d41d0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Updated VIF entry in instance network info cache for port 1d62e775-3c70-46e5-a96d-3caf6e7cfc53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.483 239942 DEBUG nova.network.neutron [req-e17d2582-906c-481d-acb9-e48eb5a5cf31 req-b53726b8-1466-40f3-a4a9-c5bfe52d41d0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Updating instance_info_cache with network_info: [{"id": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "address": "fa:16:3e:2b:08:f8", "network": {"id": "25a68b42-b744-40ad-b5c6-c5e70764e097", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-639296986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9c98e89d4ac44c38b41aa3d603a9b0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d62e775-3c", "ovs_interfaceid": "1d62e775-3c70-46e5-a96d-3caf6e7cfc53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.506 239942 DEBUG nova.network.neutron [-] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.509 239942 DEBUG oslo_concurrency.lockutils [req-e17d2582-906c-481d-acb9-e48eb5a5cf31 req-b53726b8-1466-40f3-a4a9-c5bfe52d41d0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-62dcf699-1417-4b1e-b107-3527e61c68a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.543 239942 INFO nova.compute.manager [-] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Took 1.23 seconds to deallocate network for instance.#033[00m
Jan 30 23:48:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:48:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:48:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1667032829' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:48:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:48:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1667032829' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.715 239942 DEBUG nova.compute.manager [req-b8295fd7-82d2-4bdf-98a9-71df73c29928 req-e4deb022-5d74-4859-b4a3-3f767c68a4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Received event network-vif-unplugged-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.716 239942 DEBUG oslo_concurrency.lockutils [req-b8295fd7-82d2-4bdf-98a9-71df73c29928 req-e4deb022-5d74-4859-b4a3-3f767c68a4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.719 239942 DEBUG oslo_concurrency.lockutils [req-b8295fd7-82d2-4bdf-98a9-71df73c29928 req-e4deb022-5d74-4859-b4a3-3f767c68a4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.719 239942 DEBUG oslo_concurrency.lockutils [req-b8295fd7-82d2-4bdf-98a9-71df73c29928 req-e4deb022-5d74-4859-b4a3-3f767c68a4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.720 239942 DEBUG nova.compute.manager [req-b8295fd7-82d2-4bdf-98a9-71df73c29928 req-e4deb022-5d74-4859-b4a3-3f767c68a4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] No waiting events found dispatching network-vif-unplugged-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.720 239942 DEBUG nova.compute.manager [req-b8295fd7-82d2-4bdf-98a9-71df73c29928 req-e4deb022-5d74-4859-b4a3-3f767c68a4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Received event network-vif-unplugged-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.720 239942 DEBUG nova.compute.manager [req-b8295fd7-82d2-4bdf-98a9-71df73c29928 req-e4deb022-5d74-4859-b4a3-3f767c68a4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Received event network-vif-plugged-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.721 239942 DEBUG oslo_concurrency.lockutils [req-b8295fd7-82d2-4bdf-98a9-71df73c29928 req-e4deb022-5d74-4859-b4a3-3f767c68a4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.721 239942 DEBUG oslo_concurrency.lockutils [req-b8295fd7-82d2-4bdf-98a9-71df73c29928 req-e4deb022-5d74-4859-b4a3-3f767c68a4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.722 239942 DEBUG oslo_concurrency.lockutils [req-b8295fd7-82d2-4bdf-98a9-71df73c29928 req-e4deb022-5d74-4859-b4a3-3f767c68a4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "62dcf699-1417-4b1e-b107-3527e61c68a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.722 239942 DEBUG nova.compute.manager [req-b8295fd7-82d2-4bdf-98a9-71df73c29928 req-e4deb022-5d74-4859-b4a3-3f767c68a4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] No waiting events found dispatching network-vif-plugged-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.723 239942 WARNING nova.compute.manager [req-b8295fd7-82d2-4bdf-98a9-71df73c29928 req-e4deb022-5d74-4859-b4a3-3f767c68a4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Received unexpected event network-vif-plugged-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 for instance with vm_state active and task_state deleting.
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.723 239942 DEBUG nova.compute.manager [req-b8295fd7-82d2-4bdf-98a9-71df73c29928 req-e4deb022-5d74-4859-b4a3-3f767c68a4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Received event network-vif-deleted-1d62e775-3c70-46e5-a96d-3caf6e7cfc53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.808 239942 INFO nova.compute.manager [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Took 0.26 seconds to detach 1 volumes for instance.
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.853 239942 DEBUG oslo_concurrency.lockutils [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.854 239942 DEBUG oslo_concurrency.lockutils [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:48:46 np0005603435 nova_compute[239938]: 2026-01-31 04:48:46.904 239942 DEBUG oslo_concurrency.processutils [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:48:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 296 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.7 MiB/s wr, 219 op/s
Jan 30 23:48:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:48:47 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2179965767' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:48:47 np0005603435 nova_compute[239938]: 2026-01-31 04:48:47.481 239942 DEBUG oslo_concurrency.processutils [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:48:47 np0005603435 nova_compute[239938]: 2026-01-31 04:48:47.490 239942 DEBUG nova.compute.provider_tree [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 30 23:48:47 np0005603435 nova_compute[239938]: 2026-01-31 04:48:47.513 239942 DEBUG nova.scheduler.client.report [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 30 23:48:47 np0005603435 nova_compute[239938]: 2026-01-31 04:48:47.542 239942 DEBUG oslo_concurrency.lockutils [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:48:47 np0005603435 nova_compute[239938]: 2026-01-31 04:48:47.594 239942 INFO nova.scheduler.client.report [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Deleted allocations for instance 62dcf699-1417-4b1e-b107-3527e61c68a8
Jan 30 23:48:47 np0005603435 nova_compute[239938]: 2026-01-31 04:48:47.679 239942 DEBUG oslo_concurrency.lockutils [None req-4337bf8b-ae84-4c73-9a75-52b11a0f9ecc 51ff78d1385146c598709f382eb4bc29 b9c98e89d4ac44c38b41aa3d603a9b0a - - default default] Lock "62dcf699-1417-4b1e-b107-3527e61c68a8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:48:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:48.189 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 30 23:48:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:48.191 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 30 23:48:48 np0005603435 nova_compute[239938]: 2026-01-31 04:48:48.190 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:48:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Jan 30 23:48:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Jan 30 23:48:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Jan 30 23:48:49 np0005603435 podman[253009]: 2026-01-31 04:48:49.117738872 +0000 UTC m=+0.078290469 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 30 23:48:49 np0005603435 podman[253010]: 2026-01-31 04:48:49.152509437 +0000 UTC m=+0.108110596 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 30 23:48:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 234 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 535 KiB/s rd, 1.1 MiB/s wr, 142 op/s
Jan 30 23:48:49 np0005603435 nova_compute[239938]: 2026-01-31 04:48:49.295 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:48:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:48:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/939057443' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:48:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:48:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/939057443' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:48:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:48:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/677347461' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:48:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:48:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/784497738' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:48:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:48:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/784497738' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:48:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Jan 30 23:48:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Jan 30 23:48:49 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Jan 30 23:48:49 np0005603435 nova_compute[239938]: 2026-01-31 04:48:49.951 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:48:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Jan 30 23:48:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Jan 30 23:48:50 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Jan 30 23:48:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 190 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 37 KiB/s wr, 189 op/s
Jan 30 23:48:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:48:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Jan 30 23:48:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Jan 30 23:48:51 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Jan 30 23:48:52 np0005603435 nova_compute[239938]: 2026-01-31 04:48:52.878 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:48:52 np0005603435 nova_compute[239938]: 2026-01-31 04:48:52.971 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:48:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 174 KiB/s rd, 13 KiB/s wr, 251 op/s
Jan 30 23:48:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:48:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 1800.1 total, 600.0 interval
    Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
    Cumulative WAL: 12K writes, 3664 syncs, 3.33 writes per sync, written: 0.03 GB, 0.02 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 6307 writes, 20K keys, 6307 commit groups, 1.0 writes per commit group, ingest: 12.00 MB, 0.02 MB/s
    Interval WAL: 6307 writes, 2644 syncs, 2.39 writes per sync, written: 0.01 GB, 0.02 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 30 23:48:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:54.193 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 30 23:48:54 np0005603435 nova_compute[239938]: 2026-01-31 04:48:54.297 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:48:54.327981) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834934328038, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2293, "num_deletes": 258, "total_data_size": 3417978, "memory_usage": 3476904, "flush_reason": "Manual Compaction"}
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834934348880, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3365907, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21401, "largest_seqno": 23693, "table_properties": {"data_size": 3355253, "index_size": 6892, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22539, "raw_average_key_size": 21, "raw_value_size": 3333743, "raw_average_value_size": 3112, "num_data_blocks": 304, "num_entries": 1071, "num_filter_entries": 1071, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769834754, "oldest_key_time": 1769834754, "file_creation_time": 1769834934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 21061 microseconds, and 8070 cpu microseconds.
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:48:54.349040) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3365907 bytes OK
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:48:54.349129) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:48:54.351098) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:48:54.351121) EVENT_LOG_v1 {"time_micros": 1769834934351113, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:48:54.351142) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3408161, prev total WAL file size 3408161, number of live WAL files 2.
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:48:54.352476) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3287KB)], [50(7421KB)]
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834934352525, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10965021, "oldest_snapshot_seqno": -1}
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5102 keys, 9161151 bytes, temperature: kUnknown
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834934406287, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9161151, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9123491, "index_size": 23790, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 125752, "raw_average_key_size": 24, "raw_value_size": 9028039, "raw_average_value_size": 1769, "num_data_blocks": 985, "num_entries": 5102, "num_filter_entries": 5102, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769834934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:48:54.406526) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9161151 bytes
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:48:54.407896) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.7 rd, 170.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.2 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5629, records dropped: 527 output_compression: NoCompression
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:48:54.407924) EVENT_LOG_v1 {"time_micros": 1769834934407911, "job": 26, "event": "compaction_finished", "compaction_time_micros": 53835, "compaction_time_cpu_micros": 25610, "output_level": 6, "num_output_files": 1, "total_output_size": 9161151, "num_input_records": 5629, "num_output_records": 5102, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834934408568, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769834934409778, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:48:54.352377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:48:54.409860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:48:54.409866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:48:54.409869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:48:54.409872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:48:54.409875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/250862569' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:48:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/250862569' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:48:54 np0005603435 nova_compute[239938]: 2026-01-31 04:48:54.969 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:48:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:48:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3350208639' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:48:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:48:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3350208639' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:48:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:48:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3171414385' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:48:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 41 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 10 KiB/s wr, 182 op/s
Jan 30 23:48:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:55.913 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:48:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:55.913 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:48:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:48:55.913 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:48:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Jan 30 23:48:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Jan 30 23:48:56 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Jan 30 23:48:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:48:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3731857220' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:48:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:48:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3731857220' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:48:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:48:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Jan 30 23:48:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Jan 30 23:48:56 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Jan 30 23:48:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 9.5 KiB/s wr, 154 op/s
Jan 30 23:48:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Jan 30 23:48:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Jan 30 23:48:58 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Jan 30 23:48:59 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:48:59 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.7 total, 600.0 interval
Cumulative writes: 14K writes, 55K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
Cumulative WAL: 14K writes, 4138 syncs, 3.47 writes per sync, written: 0.03 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5931 writes, 20K keys, 5931 commit groups, 1.0 writes per commit group, ingest: 13.45 MB, 0.02 MB/s
Interval WAL: 5931 writes, 2427 syncs, 2.44 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 30 23:48:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:48:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3887387615' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:48:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:48:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3887387615' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:48:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 3.0 KiB/s wr, 107 op/s
Jan 30 23:48:59 np0005603435 nova_compute[239938]: 2026-01-31 04:48:59.300 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:48:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Jan 30 23:48:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Jan 30 23:48:59 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Jan 30 23:48:59 np0005603435 nova_compute[239938]: 2026-01-31 04:48:59.924 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769834924.922342, 62dcf699-1417-4b1e-b107-3527e61c68a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:48:59 np0005603435 nova_compute[239938]: 2026-01-31 04:48:59.925 239942 INFO nova.compute.manager [-] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] VM Stopped (Lifecycle Event)
Jan 30 23:48:59 np0005603435 nova_compute[239938]: 2026-01-31 04:48:59.950 239942 DEBUG nova.compute.manager [None req-332e1fe8-0c2a-4698-ad6f-2c8b29b1da70 - - - - - -] [instance: 62dcf699-1417-4b1e-b107-3527e61c68a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:48:59 np0005603435 nova_compute[239938]: 2026-01-31 04:48:59.971 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:49:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 3.1 KiB/s wr, 151 op/s
Jan 30 23:49:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:49:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Jan 30 23:49:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Jan 30 23:49:01 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Jan 30 23:49:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:49:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3335006220' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:49:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:49:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3335006220' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:49:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 3.3 KiB/s wr, 152 op/s
Jan 30 23:49:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Jan 30 23:49:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Jan 30 23:49:03 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Jan 30 23:49:04 np0005603435 nova_compute[239938]: 2026-01-31 04:49:04.301 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:49:04 np0005603435 nova_compute[239938]: 2026-01-31 04:49:04.973 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:49:05 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:49:05 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1801.1 total, 600.0 interval
Cumulative writes: 9928 writes, 41K keys, 9928 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
Cumulative WAL: 9928 writes, 2654 syncs, 3.74 writes per sync, written: 0.03 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4116 writes, 16K keys, 4116 commit groups, 1.0 writes per commit group, ingest: 8.76 MB, 0.01 MB/s
Interval WAL: 4116 writes, 1700 syncs, 2.42 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 30 23:49:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:49:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:49:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:49:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:49:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:49:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:49:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:49:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:49:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:49:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:49:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:49:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:49:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:49:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:49:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:49:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:49:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 3.2 KiB/s wr, 102 op/s
Jan 30 23:49:05 np0005603435 podman[253271]: 2026-01-31 04:49:05.633107642 +0000 UTC m=+0.061212903 container create 0f7fbef3ac726f398c368ef8c1180906221ff1b7e386ca05cff0952e92232c02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 30 23:49:05 np0005603435 systemd[1]: Started libpod-conmon-0f7fbef3ac726f398c368ef8c1180906221ff1b7e386ca05cff0952e92232c02.scope.
Jan 30 23:49:05 np0005603435 podman[253271]: 2026-01-31 04:49:05.607619217 +0000 UTC m=+0.035724528 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:49:05 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:49:05 np0005603435 podman[253271]: 2026-01-31 04:49:05.728601128 +0000 UTC m=+0.156706439 container init 0f7fbef3ac726f398c368ef8c1180906221ff1b7e386ca05cff0952e92232c02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_moore, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 30 23:49:05 np0005603435 podman[253271]: 2026-01-31 04:49:05.737439788 +0000 UTC m=+0.165545059 container start 0f7fbef3ac726f398c368ef8c1180906221ff1b7e386ca05cff0952e92232c02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_moore, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:49:05 np0005603435 podman[253271]: 2026-01-31 04:49:05.741470493 +0000 UTC m=+0.169575764 container attach 0f7fbef3ac726f398c368ef8c1180906221ff1b7e386ca05cff0952e92232c02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 30 23:49:05 np0005603435 serene_moore[253287]: 167 167
Jan 30 23:49:05 np0005603435 systemd[1]: libpod-0f7fbef3ac726f398c368ef8c1180906221ff1b7e386ca05cff0952e92232c02.scope: Deactivated successfully.
Jan 30 23:49:05 np0005603435 podman[253271]: 2026-01-31 04:49:05.745219172 +0000 UTC m=+0.173324443 container died 0f7fbef3ac726f398c368ef8c1180906221ff1b7e386ca05cff0952e92232c02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_moore, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:49:05 np0005603435 systemd[1]: var-lib-containers-storage-overlay-52ca2911783826847823792a69d229430d4f78e67935d5ed50b8181f035cc9da-merged.mount: Deactivated successfully.
Jan 30 23:49:05 np0005603435 podman[253271]: 2026-01-31 04:49:05.791140662 +0000 UTC m=+0.219245923 container remove 0f7fbef3ac726f398c368ef8c1180906221ff1b7e386ca05cff0952e92232c02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:49:05 np0005603435 systemd[1]: libpod-conmon-0f7fbef3ac726f398c368ef8c1180906221ff1b7e386ca05cff0952e92232c02.scope: Deactivated successfully.
Jan 30 23:49:05 np0005603435 podman[253312]: 2026-01-31 04:49:05.958471252 +0000 UTC m=+0.054142375 container create d3778738e4a6028ebf2af83142a40dc6e676768e42e4d9d815e58498a0faca13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_sanderson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 30 23:49:06 np0005603435 systemd[1]: Started libpod-conmon-d3778738e4a6028ebf2af83142a40dc6e676768e42e4d9d815e58498a0faca13.scope.
Jan 30 23:49:06 np0005603435 podman[253312]: 2026-01-31 04:49:05.930852787 +0000 UTC m=+0.026523960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:49:06 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:49:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f49c744c52fa426778161fbaa6f4e312a71a3eff8d7ccaadac0883d027bce59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:49:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f49c744c52fa426778161fbaa6f4e312a71a3eff8d7ccaadac0883d027bce59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:49:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f49c744c52fa426778161fbaa6f4e312a71a3eff8d7ccaadac0883d027bce59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:49:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f49c744c52fa426778161fbaa6f4e312a71a3eff8d7ccaadac0883d027bce59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:49:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f49c744c52fa426778161fbaa6f4e312a71a3eff8d7ccaadac0883d027bce59/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:49:06 np0005603435 podman[253312]: 2026-01-31 04:49:06.061338553 +0000 UTC m=+0.157009726 container init d3778738e4a6028ebf2af83142a40dc6e676768e42e4d9d815e58498a0faca13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_sanderson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:49:06 np0005603435 podman[253312]: 2026-01-31 04:49:06.077127878 +0000 UTC m=+0.172799001 container start d3778738e4a6028ebf2af83142a40dc6e676768e42e4d9d815e58498a0faca13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_sanderson, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 30 23:49:06 np0005603435 podman[253312]: 2026-01-31 04:49:06.080778385 +0000 UTC m=+0.176449568 container attach d3778738e4a6028ebf2af83142a40dc6e676768e42e4d9d815e58498a0faca13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_sanderson, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:49:06 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:49:06 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:49:06 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:49:06 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:49:06 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:49:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:49:06
Jan 30 23:49:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:49:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:49:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['volumes', 'vms', 'backups', 'images', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 30 23:49:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:49:06 np0005603435 beautiful_sanderson[253329]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:49:06 np0005603435 beautiful_sanderson[253329]: --> All data devices are unavailable
Jan 30 23:49:06 np0005603435 systemd[1]: libpod-d3778738e4a6028ebf2af83142a40dc6e676768e42e4d9d815e58498a0faca13.scope: Deactivated successfully.
Jan 30 23:49:06 np0005603435 podman[253312]: 2026-01-31 04:49:06.566721185 +0000 UTC m=+0.662392298 container died d3778738e4a6028ebf2af83142a40dc6e676768e42e4d9d815e58498a0faca13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:49:06 np0005603435 systemd[1]: var-lib-containers-storage-overlay-9f49c744c52fa426778161fbaa6f4e312a71a3eff8d7ccaadac0883d027bce59-merged.mount: Deactivated successfully.
Jan 30 23:49:06 np0005603435 podman[253312]: 2026-01-31 04:49:06.62380107 +0000 UTC m=+0.719472203 container remove d3778738e4a6028ebf2af83142a40dc6e676768e42e4d9d815e58498a0faca13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_sanderson, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 30 23:49:06 np0005603435 systemd[1]: libpod-conmon-d3778738e4a6028ebf2af83142a40dc6e676768e42e4d9d815e58498a0faca13.scope: Deactivated successfully.
Jan 30 23:49:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:49:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Jan 30 23:49:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Jan 30 23:49:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Jan 30 23:49:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:49:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:49:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:49:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:49:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:49:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:49:07 np0005603435 podman[253424]: 2026-01-31 04:49:07.094287132 +0000 UTC m=+0.054344870 container create b42ed7cb114e5e7540a97977846e53c554a42219fc508a59cb3e4b228003f77c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shaw, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 30 23:49:07 np0005603435 systemd[1]: Started libpod-conmon-b42ed7cb114e5e7540a97977846e53c554a42219fc508a59cb3e4b228003f77c.scope.
Jan 30 23:49:07 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:49:07 np0005603435 podman[253424]: 2026-01-31 04:49:07.07435433 +0000 UTC m=+0.034412048 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:49:07 np0005603435 podman[253424]: 2026-01-31 04:49:07.17845959 +0000 UTC m=+0.138517348 container init b42ed7cb114e5e7540a97977846e53c554a42219fc508a59cb3e4b228003f77c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:49:07 np0005603435 podman[253424]: 2026-01-31 04:49:07.186956241 +0000 UTC m=+0.147013999 container start b42ed7cb114e5e7540a97977846e53c554a42219fc508a59cb3e4b228003f77c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shaw, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:49:07 np0005603435 unruffled_shaw[253440]: 167 167
Jan 30 23:49:07 np0005603435 podman[253424]: 2026-01-31 04:49:07.190714501 +0000 UTC m=+0.150772209 container attach b42ed7cb114e5e7540a97977846e53c554a42219fc508a59cb3e4b228003f77c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 30 23:49:07 np0005603435 systemd[1]: libpod-b42ed7cb114e5e7540a97977846e53c554a42219fc508a59cb3e4b228003f77c.scope: Deactivated successfully.
Jan 30 23:49:07 np0005603435 podman[253424]: 2026-01-31 04:49:07.192309868 +0000 UTC m=+0.152367576 container died b42ed7cb114e5e7540a97977846e53c554a42219fc508a59cb3e4b228003f77c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 30 23:49:07 np0005603435 systemd[1]: var-lib-containers-storage-overlay-64211f660309dd69c3b203853abf054d319a54f3f3beb763bc60c0457128f8cd-merged.mount: Deactivated successfully.
Jan 30 23:49:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:49:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:49:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:49:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:49:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:49:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:49:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:49:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:49:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:49:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:49:07 np0005603435 podman[253424]: 2026-01-31 04:49:07.24083592 +0000 UTC m=+0.200893618 container remove b42ed7cb114e5e7540a97977846e53c554a42219fc508a59cb3e4b228003f77c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:49:07 np0005603435 systemd[1]: libpod-conmon-b42ed7cb114e5e7540a97977846e53c554a42219fc508a59cb3e4b228003f77c.scope: Deactivated successfully.
Jan 30 23:49:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 5.3 KiB/s wr, 116 op/s
Jan 30 23:49:07 np0005603435 podman[253465]: 2026-01-31 04:49:07.455882603 +0000 UTC m=+0.067816141 container create 29d9459fd2b04f99c0014330e231eca00ffdd48e1ddeace5772a2613522f77c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:49:07 np0005603435 systemd[1]: Started libpod-conmon-29d9459fd2b04f99c0014330e231eca00ffdd48e1ddeace5772a2613522f77c9.scope.
Jan 30 23:49:07 np0005603435 podman[253465]: 2026-01-31 04:49:07.425802229 +0000 UTC m=+0.037735857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:49:07 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:49:07 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67c1afb41b839285fedb23b6614b48c8f82f233f1c06edb0b55b26dc6087b4f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:49:07 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67c1afb41b839285fedb23b6614b48c8f82f233f1c06edb0b55b26dc6087b4f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:49:07 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67c1afb41b839285fedb23b6614b48c8f82f233f1c06edb0b55b26dc6087b4f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:49:07 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67c1afb41b839285fedb23b6614b48c8f82f233f1c06edb0b55b26dc6087b4f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:49:07 np0005603435 podman[253465]: 2026-01-31 04:49:07.547812264 +0000 UTC m=+0.159745872 container init 29d9459fd2b04f99c0014330e231eca00ffdd48e1ddeace5772a2613522f77c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ishizaka, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:49:07 np0005603435 podman[253465]: 2026-01-31 04:49:07.56115352 +0000 UTC m=+0.173087088 container start 29d9459fd2b04f99c0014330e231eca00ffdd48e1ddeace5772a2613522f77c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 30 23:49:07 np0005603435 podman[253465]: 2026-01-31 04:49:07.564783917 +0000 UTC m=+0.176717495 container attach 29d9459fd2b04f99c0014330e231eca00ffdd48e1ddeace5772a2613522f77c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]: {
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:    "0": [
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:        {
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "devices": [
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "/dev/loop3"
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            ],
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "lv_name": "ceph_lv0",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "lv_size": "21470642176",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "name": "ceph_lv0",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "tags": {
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.cluster_name": "ceph",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.crush_device_class": "",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.encrypted": "0",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.objectstore": "bluestore",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.osd_id": "0",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.type": "block",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.vdo": "0",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.with_tpm": "0"
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            },
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "type": "block",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "vg_name": "ceph_vg0"
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:        }
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:    ],
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:    "1": [
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:        {
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "devices": [
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "/dev/loop4"
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            ],
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "lv_name": "ceph_lv1",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "lv_size": "21470642176",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "name": "ceph_lv1",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "tags": {
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.cluster_name": "ceph",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.crush_device_class": "",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.encrypted": "0",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.objectstore": "bluestore",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.osd_id": "1",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.type": "block",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.vdo": "0",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.with_tpm": "0"
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            },
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "type": "block",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "vg_name": "ceph_vg1"
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:        }
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:    ],
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:    "2": [
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:        {
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "devices": [
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "/dev/loop5"
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            ],
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "lv_name": "ceph_lv2",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "lv_size": "21470642176",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "name": "ceph_lv2",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "tags": {
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.cluster_name": "ceph",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.crush_device_class": "",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.encrypted": "0",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.objectstore": "bluestore",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.osd_id": "2",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.type": "block",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.vdo": "0",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:                "ceph.with_tpm": "0"
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            },
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "type": "block",
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:            "vg_name": "ceph_vg2"
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:        }
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]:    ]
Jan 30 23:49:07 np0005603435 practical_ishizaka[253482]: }
Jan 30 23:49:07 np0005603435 systemd[1]: libpod-29d9459fd2b04f99c0014330e231eca00ffdd48e1ddeace5772a2613522f77c9.scope: Deactivated successfully.
Jan 30 23:49:07 np0005603435 podman[253465]: 2026-01-31 04:49:07.881879211 +0000 UTC m=+0.493812839 container died 29d9459fd2b04f99c0014330e231eca00ffdd48e1ddeace5772a2613522f77c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 30 23:49:07 np0005603435 systemd[1]: var-lib-containers-storage-overlay-67c1afb41b839285fedb23b6614b48c8f82f233f1c06edb0b55b26dc6087b4f0-merged.mount: Deactivated successfully.
Jan 30 23:49:07 np0005603435 podman[253465]: 2026-01-31 04:49:07.932098872 +0000 UTC m=+0.544032440 container remove 29d9459fd2b04f99c0014330e231eca00ffdd48e1ddeace5772a2613522f77c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ishizaka, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 30 23:49:07 np0005603435 systemd[1]: libpod-conmon-29d9459fd2b04f99c0014330e231eca00ffdd48e1ddeace5772a2613522f77c9.scope: Deactivated successfully.
Jan 30 23:49:08 np0005603435 podman[253565]: 2026-01-31 04:49:08.422030708 +0000 UTC m=+0.036834695 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:49:08 np0005603435 podman[253565]: 2026-01-31 04:49:08.605155493 +0000 UTC m=+0.219959420 container create e6c5da135643731116934dcc8d5f02c909d64354b173dfeb01eed095c58d799f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_yonath, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Jan 30 23:49:08 np0005603435 systemd[1]: Started libpod-conmon-e6c5da135643731116934dcc8d5f02c909d64354b173dfeb01eed095c58d799f.scope.
Jan 30 23:49:08 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:49:08 np0005603435 podman[253565]: 2026-01-31 04:49:08.802346182 +0000 UTC m=+0.417150119 container init e6c5da135643731116934dcc8d5f02c909d64354b173dfeb01eed095c58d799f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 30 23:49:08 np0005603435 podman[253565]: 2026-01-31 04:49:08.811046419 +0000 UTC m=+0.425850356 container start e6c5da135643731116934dcc8d5f02c909d64354b173dfeb01eed095c58d799f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:49:08 np0005603435 naughty_yonath[253582]: 167 167
Jan 30 23:49:08 np0005603435 systemd[1]: libpod-e6c5da135643731116934dcc8d5f02c909d64354b173dfeb01eed095c58d799f.scope: Deactivated successfully.
Jan 30 23:49:08 np0005603435 podman[253565]: 2026-01-31 04:49:08.860516322 +0000 UTC m=+0.475320249 container attach e6c5da135643731116934dcc8d5f02c909d64354b173dfeb01eed095c58d799f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_yonath, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 30 23:49:08 np0005603435 podman[253565]: 2026-01-31 04:49:08.861429904 +0000 UTC m=+0.476233841 container died e6c5da135643731116934dcc8d5f02c909d64354b173dfeb01eed095c58d799f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_yonath, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:49:08 np0005603435 nova_compute[239938]: 2026-01-31 04:49:08.913 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "3dfd6853-c0e1-446c-9f5d-097c8af910db" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:08 np0005603435 nova_compute[239938]: 2026-01-31 04:49:08.915 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:09 np0005603435 nova_compute[239938]: 2026-01-31 04:49:09.000 239942 DEBUG nova.compute.manager [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:49:09 np0005603435 nova_compute[239938]: 2026-01-31 04:49:09.083 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:09 np0005603435 nova_compute[239938]: 2026-01-31 04:49:09.084 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:09 np0005603435 nova_compute[239938]: 2026-01-31 04:49:09.093 239942 DEBUG nova.virt.hardware [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:49:09 np0005603435 nova_compute[239938]: 2026-01-31 04:49:09.093 239942 INFO nova.compute.claims [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:49:09 np0005603435 systemd[1]: var-lib-containers-storage-overlay-cea79ab0dc66ec09a89afee284619878d1da2f8a109488f8daf3813922e3f930-merged.mount: Deactivated successfully.
Jan 30 23:49:09 np0005603435 nova_compute[239938]: 2026-01-31 04:49:09.201 239942 DEBUG oslo_concurrency.processutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.9 KiB/s wr, 70 op/s
Jan 30 23:49:09 np0005603435 podman[253565]: 2026-01-31 04:49:09.292713988 +0000 UTC m=+0.907517915 container remove e6c5da135643731116934dcc8d5f02c909d64354b173dfeb01eed095c58d799f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_yonath, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:49:09 np0005603435 nova_compute[239938]: 2026-01-31 04:49:09.304 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:49:09 np0005603435 systemd[1]: libpod-conmon-e6c5da135643731116934dcc8d5f02c909d64354b173dfeb01eed095c58d799f.scope: Deactivated successfully.
Jan 30 23:49:09 np0005603435 podman[253626]: 2026-01-31 04:49:09.472218277 +0000 UTC m=+0.058975660 container create eb81693fbbb78a882a54e354167fd78d9d700fb8861dbb71dcfd47201990d030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_curran, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:49:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:49:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/203677017' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:49:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:49:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/203677017' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:49:09 np0005603435 systemd[1]: Started libpod-conmon-eb81693fbbb78a882a54e354167fd78d9d700fb8861dbb71dcfd47201990d030.scope.
Jan 30 23:49:09 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:49:09 np0005603435 podman[253626]: 2026-01-31 04:49:09.4449309 +0000 UTC m=+0.031688313 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:49:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c70ca9873a334b5e08716f52dc96e0cef15b160a6f0651245fe439bfead38c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:49:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c70ca9873a334b5e08716f52dc96e0cef15b160a6f0651245fe439bfead38c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:49:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c70ca9873a334b5e08716f52dc96e0cef15b160a6f0651245fe439bfead38c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:49:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c70ca9873a334b5e08716f52dc96e0cef15b160a6f0651245fe439bfead38c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:49:09 np0005603435 podman[253626]: 2026-01-31 04:49:09.560670676 +0000 UTC m=+0.147428129 container init eb81693fbbb78a882a54e354167fd78d9d700fb8861dbb71dcfd47201990d030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 30 23:49:09 np0005603435 podman[253626]: 2026-01-31 04:49:09.571479432 +0000 UTC m=+0.158236815 container start eb81693fbbb78a882a54e354167fd78d9d700fb8861dbb71dcfd47201990d030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_curran, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 30 23:49:09 np0005603435 podman[253626]: 2026-01-31 04:49:09.576178814 +0000 UTC m=+0.162936217 container attach eb81693fbbb78a882a54e354167fd78d9d700fb8861dbb71dcfd47201990d030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:49:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:49:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1572033431' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:49:09 np0005603435 nova_compute[239938]: 2026-01-31 04:49:09.797 239942 DEBUG oslo_concurrency.processutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:49:09 np0005603435 nova_compute[239938]: 2026-01-31 04:49:09.805 239942 DEBUG nova.compute.provider_tree [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 30 23:49:09 np0005603435 nova_compute[239938]: 2026-01-31 04:49:09.825 239942 DEBUG nova.scheduler.client.report [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 30 23:49:09 np0005603435 nova_compute[239938]: 2026-01-31 04:49:09.849 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:49:09 np0005603435 nova_compute[239938]: 2026-01-31 04:49:09.850 239942 DEBUG nova.compute.manager [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 30 23:49:09 np0005603435 nova_compute[239938]: 2026-01-31 04:49:09.897 239942 DEBUG nova.compute.manager [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 30 23:49:09 np0005603435 nova_compute[239938]: 2026-01-31 04:49:09.897 239942 DEBUG nova.network.neutron [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 30 23:49:09 np0005603435 nova_compute[239938]: 2026-01-31 04:49:09.919 239942 INFO nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 30 23:49:09 np0005603435 nova_compute[239938]: 2026-01-31 04:49:09.936 239942 DEBUG nova.compute.manager [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.013 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.028 239942 DEBUG nova.compute.manager [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.029 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.029 239942 INFO nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Creating image(s)
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.050 239942 DEBUG nova.storage.rbd_utils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] rbd image 3dfd6853-c0e1-446c-9f5d-097c8af910db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.075 239942 DEBUG nova.storage.rbd_utils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] rbd image 3dfd6853-c0e1-446c-9f5d-097c8af910db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.099 239942 DEBUG nova.storage.rbd_utils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] rbd image 3dfd6853-c0e1-446c-9f5d-097c8af910db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.103 239942 DEBUG oslo_concurrency.processutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.155 239942 DEBUG oslo_concurrency.processutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.156 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.156 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.157 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.177 239942 DEBUG nova.storage.rbd_utils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] rbd image 3dfd6853-c0e1-446c-9f5d-097c8af910db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.182 239942 DEBUG oslo_concurrency.processutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 3dfd6853-c0e1-446c-9f5d-097c8af910db_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.194 239942 DEBUG nova.policy [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f51271330a6d46498b473f0d2595c3ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b8b11aff4b494f4eb1376cfe5754bac8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 30 23:49:10 np0005603435 lvm[253798]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:49:10 np0005603435 lvm[253801]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:49:10 np0005603435 lvm[253801]: VG ceph_vg1 finished
Jan 30 23:49:10 np0005603435 lvm[253798]: VG ceph_vg0 finished
Jan 30 23:49:10 np0005603435 lvm[253818]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:49:10 np0005603435 lvm[253818]: VG ceph_vg2 finished
Jan 30 23:49:10 np0005603435 inspiring_curran[253643]: {}
Jan 30 23:49:10 np0005603435 systemd[1]: libpod-eb81693fbbb78a882a54e354167fd78d9d700fb8861dbb71dcfd47201990d030.scope: Deactivated successfully.
Jan 30 23:49:10 np0005603435 podman[253626]: 2026-01-31 04:49:10.373484002 +0000 UTC m=+0.960241395 container died eb81693fbbb78a882a54e354167fd78d9d700fb8861dbb71dcfd47201990d030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:49:10 np0005603435 systemd[1]: libpod-eb81693fbbb78a882a54e354167fd78d9d700fb8861dbb71dcfd47201990d030.scope: Consumed 1.087s CPU time.
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.401 239942 DEBUG oslo_concurrency.processutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 3dfd6853-c0e1-446c-9f5d-097c8af910db_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:49:10 np0005603435 systemd[1]: var-lib-containers-storage-overlay-06c70ca9873a334b5e08716f52dc96e0cef15b160a6f0651245fe439bfead38c-merged.mount: Deactivated successfully.
Jan 30 23:49:10 np0005603435 podman[253626]: 2026-01-31 04:49:10.426881679 +0000 UTC m=+1.013639052 container remove eb81693fbbb78a882a54e354167fd78d9d700fb8861dbb71dcfd47201990d030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_curran, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:49:10 np0005603435 systemd[1]: libpod-conmon-eb81693fbbb78a882a54e354167fd78d9d700fb8861dbb71dcfd47201990d030.scope: Deactivated successfully.
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.472 239942 DEBUG nova.storage.rbd_utils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] resizing rbd image 3dfd6853-c0e1-446c-9f5d-097c8af910db_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 30 23:49:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:49:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:49:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:49:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.556 239942 DEBUG nova.objects.instance [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lazy-loading 'migration_context' on Instance uuid 3dfd6853-c0e1-446c-9f5d-097c8af910db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.578 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.579 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Ensure instance console log exists: /var/lib/nova/instances/3dfd6853-c0e1-446c-9f5d-097c8af910db/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.580 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.580 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:49:10 np0005603435 nova_compute[239938]: 2026-01-31 04:49:10.580 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:49:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Jan 30 23:49:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Jan 30 23:49:10 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Jan 30 23:49:10 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:49:10 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:49:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 4.1 KiB/s wr, 67 op/s
Jan 30 23:49:11 np0005603435 nova_compute[239938]: 2026-01-31 04:49:11.505 239942 DEBUG nova.network.neutron [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Successfully created port: 84cc8fc9-7d52-4528-bad3-524644ec103e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 30 23:49:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:49:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Jan 30 23:49:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Jan 30 23:49:11 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Jan 30 23:49:12 np0005603435 nova_compute[239938]: 2026-01-31 04:49:12.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:49:12 np0005603435 nova_compute[239938]: 2026-01-31 04:49:12.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 30 23:49:12 np0005603435 nova_compute[239938]: 2026-01-31 04:49:12.932 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 30 23:49:12 np0005603435 nova_compute[239938]: 2026-01-31 04:49:12.932 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:49:12 np0005603435 nova_compute[239938]: 2026-01-31 04:49:12.933 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:49:13 np0005603435 nova_compute[239938]: 2026-01-31 04:49:13.105 239942 DEBUG nova.network.neutron [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Successfully updated port: 84cc8fc9-7d52-4528-bad3-524644ec103e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 30 23:49:13 np0005603435 nova_compute[239938]: 2026-01-31 04:49:13.120 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "refresh_cache-3dfd6853-c0e1-446c-9f5d-097c8af910db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 30 23:49:13 np0005603435 nova_compute[239938]: 2026-01-31 04:49:13.120 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquired lock "refresh_cache-3dfd6853-c0e1-446c-9f5d-097c8af910db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 30 23:49:13 np0005603435 nova_compute[239938]: 2026-01-31 04:49:13.120 239942 DEBUG nova.network.neutron [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 30 23:49:13 np0005603435 nova_compute[239938]: 2026-01-31 04:49:13.200 239942 DEBUG nova.compute.manager [req-6a80d9b2-f847-4346-93b2-e7a0f7c11425 req-607ef584-0e90-4754-a4dd-13cce3150145 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Received event network-changed-84cc8fc9-7d52-4528-bad3-524644ec103e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 30 23:49:13 np0005603435 nova_compute[239938]: 2026-01-31 04:49:13.200 239942 DEBUG nova.compute.manager [req-6a80d9b2-f847-4346-93b2-e7a0f7c11425 req-607ef584-0e90-4754-a4dd-13cce3150145 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Refreshing instance network info cache due to event network-changed-84cc8fc9-7d52-4528-bad3-524644ec103e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 30 23:49:13 np0005603435 nova_compute[239938]: 2026-01-31 04:49:13.200 239942 DEBUG oslo_concurrency.lockutils [req-6a80d9b2-f847-4346-93b2-e7a0f7c11425 req-607ef584-0e90-4754-a4dd-13cce3150145 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-3dfd6853-c0e1-446c-9f5d-097c8af910db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 30 23:49:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 71 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 2.1 MiB/s wr, 170 op/s
Jan 30 23:49:13 np0005603435 nova_compute[239938]: 2026-01-31 04:49:13.394 239942 DEBUG nova.network.neutron [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:49:13 np0005603435 nova_compute[239938]: 2026-01-31 04:49:13.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:49:13 np0005603435 nova_compute[239938]: 2026-01-31 04:49:13.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:49:14 np0005603435 nova_compute[239938]: 2026-01-31 04:49:14.306 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:14 np0005603435 nova_compute[239938]: 2026-01-31 04:49:14.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:49:14 np0005603435 nova_compute[239938]: 2026-01-31 04:49:14.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:49:14 np0005603435 nova_compute[239938]: 2026-01-31 04:49:14.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:49:14 np0005603435 nova_compute[239938]: 2026-01-31 04:49:14.920 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:14 np0005603435 nova_compute[239938]: 2026-01-31 04:49:14.920 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:14 np0005603435 nova_compute[239938]: 2026-01-31 04:49:14.920 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:14 np0005603435 nova_compute[239938]: 2026-01-31 04:49:14.921 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:49:14 np0005603435 nova_compute[239938]: 2026-01-31 04:49:14.921 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.016 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.031 239942 DEBUG nova.network.neutron [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Updating instance_info_cache with network_info: [{"id": "84cc8fc9-7d52-4528-bad3-524644ec103e", "address": "fa:16:3e:30:71:ab", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84cc8fc9-7d", "ovs_interfaceid": "84cc8fc9-7d52-4528-bad3-524644ec103e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.054 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Releasing lock "refresh_cache-3dfd6853-c0e1-446c-9f5d-097c8af910db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.055 239942 DEBUG nova.compute.manager [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Instance network_info: |[{"id": "84cc8fc9-7d52-4528-bad3-524644ec103e", "address": "fa:16:3e:30:71:ab", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84cc8fc9-7d", "ovs_interfaceid": "84cc8fc9-7d52-4528-bad3-524644ec103e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.056 239942 DEBUG oslo_concurrency.lockutils [req-6a80d9b2-f847-4346-93b2-e7a0f7c11425 req-607ef584-0e90-4754-a4dd-13cce3150145 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-3dfd6853-c0e1-446c-9f5d-097c8af910db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.057 239942 DEBUG nova.network.neutron [req-6a80d9b2-f847-4346-93b2-e7a0f7c11425 req-607ef584-0e90-4754-a4dd-13cce3150145 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Refreshing network info cache for port 84cc8fc9-7d52-4528-bad3-524644ec103e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.063 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Start _get_guest_xml network_info=[{"id": "84cc8fc9-7d52-4528-bad3-524644ec103e", "address": "fa:16:3e:30:71:ab", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84cc8fc9-7d", "ovs_interfaceid": "84cc8fc9-7d52-4528-bad3-524644ec103e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'guest_format': None, 'image_id': 'bf004ad8-fb70-4caa-9170-9f02e22d687d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.071 239942 WARNING nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.082 239942 DEBUG nova.virt.libvirt.host [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.083 239942 DEBUG nova.virt.libvirt.host [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.088 239942 DEBUG nova.virt.libvirt.host [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.089 239942 DEBUG nova.virt.libvirt.host [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.089 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.090 239942 DEBUG nova.virt.hardware [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.091 239942 DEBUG nova.virt.hardware [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.091 239942 DEBUG nova.virt.hardware [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.092 239942 DEBUG nova.virt.hardware [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.092 239942 DEBUG nova.virt.hardware [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.093 239942 DEBUG nova.virt.hardware [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.093 239942 DEBUG nova.virt.hardware [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.094 239942 DEBUG nova.virt.hardware [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.094 239942 DEBUG nova.virt.hardware [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.095 239942 DEBUG nova.virt.hardware [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.095 239942 DEBUG nova.virt.hardware [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.100 239942 DEBUG oslo_concurrency.processutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 88 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 2.7 MiB/s wr, 143 op/s
Jan 30 23:49:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:49:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242719621' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.471 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.606 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.607 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4535MB free_disk=59.97505155671388GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.608 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.608 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:49:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3167468342' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.684 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance 3dfd6853-c0e1-446c-9f5d-097c8af910db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.684 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.684 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.687 239942 DEBUG oslo_concurrency.processutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.708 239942 DEBUG nova.storage.rbd_utils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] rbd image 3dfd6853-c0e1-446c-9f5d-097c8af910db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.712 239942 DEBUG oslo_concurrency.processutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:15 np0005603435 nova_compute[239938]: 2026-01-31 04:49:15.743 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:49:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1021009101' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.257 239942 DEBUG oslo_concurrency.processutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.258 239942 DEBUG nova.virt.libvirt.vif [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:49:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1643995663',display_name='tempest-VolumesBackupsTest-instance-1643995663',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1643995663',id=8,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7M3Kx9HnlgPwJ3q2vcmLgKtbzv68YcGrJnBcWrW+oC+Lbh28Jv7i2/KnMnVyUUAQ/VX5n+Z+i0mqfZMAcVOh2jZJeWGMs9dMkYG6AFIpYg7M6nh0Y89qdXxvTNQOiLIg==',key_name='tempest-keypair-377641076',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b8b11aff4b494f4eb1376cfe5754bac8',ramdisk_id='',reservation_id='r-53r2xod5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1503004541',owner_user_name='tempest-VolumesBackupsTest-1503004541-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:49:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f51271330a6d46498b473f0d2595c3ac',uuid=3dfd6853-c0e1-446c-9f5d-097c8af910db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84cc8fc9-7d52-4528-bad3-524644ec103e", "address": "fa:16:3e:30:71:ab", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84cc8fc9-7d", "ovs_interfaceid": "84cc8fc9-7d52-4528-bad3-524644ec103e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.259 239942 DEBUG nova.network.os_vif_util [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Converting VIF {"id": "84cc8fc9-7d52-4528-bad3-524644ec103e", "address": "fa:16:3e:30:71:ab", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84cc8fc9-7d", "ovs_interfaceid": "84cc8fc9-7d52-4528-bad3-524644ec103e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.260 239942 DEBUG nova.network.os_vif_util [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:71:ab,bridge_name='br-int',has_traffic_filtering=True,id=84cc8fc9-7d52-4528-bad3-524644ec103e,network=Network(28e37664-8d81-4a45-8e12-f0b45b43b4cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84cc8fc9-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.261 239942 DEBUG nova.objects.instance [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3dfd6853-c0e1-446c-9f5d-097c8af910db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:49:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:49:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1688703147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.279 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  <uuid>3dfd6853-c0e1-446c-9f5d-097c8af910db</uuid>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  <name>instance-00000008</name>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <nova:name>tempest-VolumesBackupsTest-instance-1643995663</nova:name>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:49:15</nova:creationTime>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:        <nova:user uuid="f51271330a6d46498b473f0d2595c3ac">tempest-VolumesBackupsTest-1503004541-project-member</nova:user>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:        <nova:project uuid="b8b11aff4b494f4eb1376cfe5754bac8">tempest-VolumesBackupsTest-1503004541</nova:project>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <nova:root type="image" uuid="bf004ad8-fb70-4caa-9170-9f02e22d687d"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:        <nova:port uuid="84cc8fc9-7d52-4528-bad3-524644ec103e">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <entry name="serial">3dfd6853-c0e1-446c-9f5d-097c8af910db</entry>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <entry name="uuid">3dfd6853-c0e1-446c-9f5d-097c8af910db</entry>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/3dfd6853-c0e1-446c-9f5d-097c8af910db_disk">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/3dfd6853-c0e1-446c-9f5d-097c8af910db_disk.config">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:30:71:ab"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <target dev="tap84cc8fc9-7d"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/3dfd6853-c0e1-446c-9f5d-097c8af910db/console.log" append="off"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:49:16 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:49:16 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:49:16 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:49:16 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.280 239942 DEBUG nova.compute.manager [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Preparing to wait for external event network-vif-plugged-84cc8fc9-7d52-4528-bad3-524644ec103e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.280 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.281 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.281 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.281 239942 DEBUG nova.virt.libvirt.vif [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:49:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1643995663',display_name='tempest-VolumesBackupsTest-instance-1643995663',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1643995663',id=8,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7M3Kx9HnlgPwJ3q2vcmLgKtbzv68YcGrJnBcWrW+oC+Lbh28Jv7i2/KnMnVyUUAQ/VX5n+Z+i0mqfZMAcVOh2jZJeWGMs9dMkYG6AFIpYg7M6nh0Y89qdXxvTNQOiLIg==',key_name='tempest-keypair-377641076',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b8b11aff4b494f4eb1376cfe5754bac8',ramdisk_id='',reservation_id='r-53r2xod5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1503004541',owner_user_name='tempest-VolumesBackupsTest-1503004541-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:49:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f51271330a6d46498b473f0d2595c3ac',uuid=3dfd6853-c0e1-446c-9f5d-097c8af910db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84cc8fc9-7d52-4528-bad3-524644ec103e", "address": "fa:16:3e:30:71:ab", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84cc8fc9-7d", "ovs_interfaceid": "84cc8fc9-7d52-4528-bad3-524644ec103e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.282 239942 DEBUG nova.network.os_vif_util [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Converting VIF {"id": "84cc8fc9-7d52-4528-bad3-524644ec103e", "address": "fa:16:3e:30:71:ab", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84cc8fc9-7d", "ovs_interfaceid": "84cc8fc9-7d52-4528-bad3-524644ec103e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.282 239942 DEBUG nova.network.os_vif_util [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:71:ab,bridge_name='br-int',has_traffic_filtering=True,id=84cc8fc9-7d52-4528-bad3-524644ec103e,network=Network(28e37664-8d81-4a45-8e12-f0b45b43b4cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84cc8fc9-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.282 239942 DEBUG os_vif [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:71:ab,bridge_name='br-int',has_traffic_filtering=True,id=84cc8fc9-7d52-4528-bad3-524644ec103e,network=Network(28e37664-8d81-4a45-8e12-f0b45b43b4cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84cc8fc9-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.283 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.283 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.283 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.287 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.287 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84cc8fc9-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.287 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84cc8fc9-7d, col_values=(('external_ids', {'iface-id': '84cc8fc9-7d52-4528-bad3-524644ec103e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:71:ab', 'vm-uuid': '3dfd6853-c0e1-446c-9f5d-097c8af910db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:49:16 np0005603435 NetworkManager[49097]: <info>  [1769834956.2902] manager: (tap84cc8fc9-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.291 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.294 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.295 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.297 239942 INFO os_vif [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:71:ab,bridge_name='br-int',has_traffic_filtering=True,id=84cc8fc9-7d52-4528-bad3-524644ec103e,network=Network(28e37664-8d81-4a45-8e12-f0b45b43b4cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84cc8fc9-7d')#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.303 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.326 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.366 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.366 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.367 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] No VIF found with MAC fa:16:3e:30:71:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.367 239942 INFO nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Using config drive#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.395 239942 DEBUG nova.storage.rbd_utils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] rbd image 3dfd6853-c0e1-446c-9f5d-097c8af910db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.408 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.408 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.530 239942 DEBUG nova.network.neutron [req-6a80d9b2-f847-4346-93b2-e7a0f7c11425 req-607ef584-0e90-4754-a4dd-13cce3150145 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Updated VIF entry in instance network info cache for port 84cc8fc9-7d52-4528-bad3-524644ec103e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.531 239942 DEBUG nova.network.neutron [req-6a80d9b2-f847-4346-93b2-e7a0f7c11425 req-607ef584-0e90-4754-a4dd-13cce3150145 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Updating instance_info_cache with network_info: [{"id": "84cc8fc9-7d52-4528-bad3-524644ec103e", "address": "fa:16:3e:30:71:ab", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84cc8fc9-7d", "ovs_interfaceid": "84cc8fc9-7d52-4528-bad3-524644ec103e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.550 239942 DEBUG oslo_concurrency.lockutils [req-6a80d9b2-f847-4346-93b2-e7a0f7c11425 req-607ef584-0e90-4754-a4dd-13cce3150145 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-3dfd6853-c0e1-446c-9f5d-097c8af910db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:49:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:49:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Jan 30 23:49:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Jan 30 23:49:16 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.772 239942 INFO nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Creating config drive at /var/lib/nova/instances/3dfd6853-c0e1-446c-9f5d-097c8af910db/disk.config#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.780 239942 DEBUG oslo_concurrency.processutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3dfd6853-c0e1-446c-9f5d-097c8af910db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9eyua89m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.908 239942 DEBUG oslo_concurrency.processutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3dfd6853-c0e1-446c-9f5d-097c8af910db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9eyua89m" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.945 239942 DEBUG nova.storage.rbd_utils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] rbd image 3dfd6853-c0e1-446c-9f5d-097c8af910db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:49:16 np0005603435 nova_compute[239938]: 2026-01-31 04:49:16.949 239942 DEBUG oslo_concurrency.processutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3dfd6853-c0e1-446c-9f5d-097c8af910db/disk.config 3dfd6853-c0e1-446c-9f5d-097c8af910db_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.100 239942 DEBUG oslo_concurrency.processutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3dfd6853-c0e1-446c-9f5d-097c8af910db/disk.config 3dfd6853-c0e1-446c-9f5d-097c8af910db_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.102 239942 INFO nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Deleting local config drive /var/lib/nova/instances/3dfd6853-c0e1-446c-9f5d-097c8af910db/disk.config because it was imported into RBD.#033[00m
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:49:17 np0005603435 kernel: tap84cc8fc9-7d: entered promiscuous mode
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034707979104276236 of space, bias 1.0, pg target 0.10412393731282871 quantized to 32 (current 32)
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.793334762216538e-06 of space, bias 1.0, pg target 0.0011380004286649613 quantized to 32 (current 32)
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 5.537794306539516e-07 of space, bias 1.0, pg target 0.00016613382919618549 quantized to 32 (current 32)
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664437211226647 of space, bias 1.0, pg target 0.19993311633679942 quantized to 32 (current 32)
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.970137740016768e-07 of space, bias 4.0, pg target 0.0008364165288020121 quantized to 16 (current 16)
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:49:17 np0005603435 NetworkManager[49097]: <info>  [1769834957.1596] manager: (tap84cc8fc9-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Jan 30 23:49:17 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:17Z|00080|binding|INFO|Claiming lport 84cc8fc9-7d52-4528-bad3-524644ec103e for this chassis.
Jan 30 23:49:17 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:17Z|00081|binding|INFO|84cc8fc9-7d52-4528-bad3-524644ec103e: Claiming fa:16:3e:30:71:ab 10.100.0.11
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.162 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.179 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:71:ab 10.100.0.11'], port_security=['fa:16:3e:30:71:ab 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3dfd6853-c0e1-446c-9f5d-097c8af910db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28e37664-8d81-4a45-8e12-f0b45b43b4cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b8b11aff4b494f4eb1376cfe5754bac8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2c73b112-e396-4240-808c-5bf45e432461', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c4453b0-f040-4fe4-88f1-8a0ec8ff54c7, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=84cc8fc9-7d52-4528-bad3-524644ec103e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.181 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 84cc8fc9-7d52-4528-bad3-524644ec103e in datapath 28e37664-8d81-4a45-8e12-f0b45b43b4cf bound to our chassis#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.184 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28e37664-8d81-4a45-8e12-f0b45b43b4cf#033[00m
Jan 30 23:49:17 np0005603435 systemd-machined[208030]: New machine qemu-8-instance-00000008.
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.195 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6c48a2f5-419f-4e37-98b6-6209cd1983a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.196 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap28e37664-81 in ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.198 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap28e37664-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.199 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[741be91b-b32c-4e6a-8710-cd1a6a5d65f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.200 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[a265edc8-1dd7-4aa9-9c20-e3827b889a13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.209 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[1108a5a9-ccdc-4131-9843-77267e4291ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:17 np0005603435 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.216 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:17 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:17Z|00082|binding|INFO|Setting lport 84cc8fc9-7d52-4528-bad3-524644ec103e ovn-installed in OVS
Jan 30 23:49:17 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:17Z|00083|binding|INFO|Setting lport 84cc8fc9-7d52-4528-bad3-524644ec103e up in Southbound
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.222 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[977861c8-f0de-4f23-aae2-4d87c5ed3f5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.224 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:17 np0005603435 systemd-udevd[254114]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.254 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[131708c7-a162-4801-8680-d713944675e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:17 np0005603435 NetworkManager[49097]: <info>  [1769834957.2573] device (tap84cc8fc9-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:49:17 np0005603435 NetworkManager[49097]: <info>  [1769834957.2586] device (tap84cc8fc9-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.260 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[593bc836-9b4b-45a7-9ee2-0843580a10ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:17 np0005603435 NetworkManager[49097]: <info>  [1769834957.2621] manager: (tap28e37664-80): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 88 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 3.3 MiB/s wr, 135 op/s
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.297 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[6e917e4c-cab0-4fce-bad0-f8b88edf563a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.301 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[01f3b44f-33da-4b2f-a7f2-f2117edfaa84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:17 np0005603435 NetworkManager[49097]: <info>  [1769834957.3211] device (tap28e37664-80): carrier: link connected
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.325 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[76624ff9-0f6f-4651-97d7-fd4fb9f8d902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.339 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[97830672-e73d-4cac-a767-4ef9476db5fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28e37664-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:46:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 401172, 'reachable_time': 22389, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254143, 'error': None, 'target': 'ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.351 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[7bea32c1-59d5-4053-accb-fef4f4fbf37f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:46c8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 401172, 'tstamp': 401172}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254144, 'error': None, 'target': 'ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.364 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[d88d57a6-dd6d-4268-9adc-16f41b9805ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28e37664-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:46:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 401172, 'reachable_time': 22389, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254145, 'error': None, 'target': 'ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.390 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[87593be6-f0a2-47f4-bc08-24c6f4f6267e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.412 239942 DEBUG nova.compute.manager [req-b5ad9418-86d7-4114-8081-ffc00cd6e1a3 req-f3573e9b-c2e3-407c-814b-891e3db0f5a6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Received event network-vif-plugged-84cc8fc9-7d52-4528-bad3-524644ec103e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.413 239942 DEBUG oslo_concurrency.lockutils [req-b5ad9418-86d7-4114-8081-ffc00cd6e1a3 req-f3573e9b-c2e3-407c-814b-891e3db0f5a6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.414 239942 DEBUG oslo_concurrency.lockutils [req-b5ad9418-86d7-4114-8081-ffc00cd6e1a3 req-f3573e9b-c2e3-407c-814b-891e3db0f5a6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.414 239942 DEBUG oslo_concurrency.lockutils [req-b5ad9418-86d7-4114-8081-ffc00cd6e1a3 req-f3573e9b-c2e3-407c-814b-891e3db0f5a6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.414 239942 DEBUG nova.compute.manager [req-b5ad9418-86d7-4114-8081-ffc00cd6e1a3 req-f3573e9b-c2e3-407c-814b-891e3db0f5a6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Processing event network-vif-plugged-84cc8fc9-7d52-4528-bad3-524644ec103e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.440 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[fe1b04af-fdbb-479f-9cf0-34cd009da65e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.441 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28e37664-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.442 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.442 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28e37664-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.444 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:17 np0005603435 NetworkManager[49097]: <info>  [1769834957.4449] manager: (tap28e37664-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Jan 30 23:49:17 np0005603435 kernel: tap28e37664-80: entered promiscuous mode
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.447 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.448 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28e37664-80, col_values=(('external_ids', {'iface-id': '17a6f891-9bce-4b37-a6eb-eb44f21f3bd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.449 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:17 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:17Z|00084|binding|INFO|Releasing lport 17a6f891-9bce-4b37-a6eb-eb44f21f3bd7 from this chassis (sb_readonly=0)
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.461 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.462 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/28e37664-8d81-4a45-8e12-f0b45b43b4cf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/28e37664-8d81-4a45-8e12-f0b45b43b4cf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.463 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4a412a46-976a-4739-8727-e50ec009179f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.464 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-28e37664-8d81-4a45-8e12-f0b45b43b4cf
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/28e37664-8d81-4a45-8e12-f0b45b43b4cf.pid.haproxy
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 28e37664-8d81-4a45-8e12-f0b45b43b4cf
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:49:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:17.465 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf', 'env', 'PROCESS_TAG=haproxy-28e37664-8d81-4a45-8e12-f0b45b43b4cf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/28e37664-8d81-4a45-8e12-f0b45b43b4cf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.672 239942 DEBUG nova.compute.manager [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.672 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834957.6724756, 3dfd6853-c0e1-446c-9f5d-097c8af910db => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.673 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] VM Started (Lifecycle Event)#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.676 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.680 239942 INFO nova.virt.libvirt.driver [-] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Instance spawned successfully.#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.681 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.698 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.702 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.705 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.706 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.706 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.706 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.707 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.707 239942 DEBUG nova.virt.libvirt.driver [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.736 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.736 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834957.6725764, 3dfd6853-c0e1-446c-9f5d-097c8af910db => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.736 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.761 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.766 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834957.6759624, 3dfd6853-c0e1-446c-9f5d-097c8af910db => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.767 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.772 239942 INFO nova.compute.manager [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Took 7.74 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.772 239942 DEBUG nova.compute.manager [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.783 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.787 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.807 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.846 239942 INFO nova.compute.manager [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Took 8.79 seconds to build instance.#033[00m
Jan 30 23:49:17 np0005603435 nova_compute[239938]: 2026-01-31 04:49:17.861 239942 DEBUG oslo_concurrency.lockutils [None req-329b346f-baef-4525-940f-f5d8385937fc f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:17 np0005603435 podman[254218]: 2026-01-31 04:49:17.864480419 +0000 UTC m=+0.061720615 container create 53339cb06a99c13d0964b654b7d9bad29ba149f40fca84655ef1ff9af7a147b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 30 23:49:17 np0005603435 ceph-mgr[75599]: [devicehealth INFO root] Check health
Jan 30 23:49:17 np0005603435 podman[254218]: 2026-01-31 04:49:17.833098404 +0000 UTC m=+0.030338610 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:49:17 np0005603435 systemd[1]: Started libpod-conmon-53339cb06a99c13d0964b654b7d9bad29ba149f40fca84655ef1ff9af7a147b0.scope.
Jan 30 23:49:17 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:49:17 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad12913ffc73102d2ac6a316b02a6e339e24c41bf35486f80b4621a66c223c5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:49:18 np0005603435 podman[254218]: 2026-01-31 04:49:18.083177648 +0000 UTC m=+0.280417854 container init 53339cb06a99c13d0964b654b7d9bad29ba149f40fca84655ef1ff9af7a147b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:49:18 np0005603435 podman[254218]: 2026-01-31 04:49:18.092345206 +0000 UTC m=+0.289585412 container start 53339cb06a99c13d0964b654b7d9bad29ba149f40fca84655ef1ff9af7a147b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 30 23:49:18 np0005603435 neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf[254233]: [NOTICE]   (254237) : New worker (254239) forked
Jan 30 23:49:18 np0005603435 neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf[254233]: [NOTICE]   (254237) : Loading success.
Jan 30 23:49:18 np0005603435 nova_compute[239938]: 2026-01-31 04:49:18.410 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:49:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 88 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 2.7 MiB/s wr, 109 op/s
Jan 30 23:49:19 np0005603435 nova_compute[239938]: 2026-01-31 04:49:19.310 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:49:19 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2280359895' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:49:19 np0005603435 NetworkManager[49097]: <info>  [1769834959.4042] manager: (patch-br-int-to-provnet-60fd0649-1231-4daa-859b-756d523d6d78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Jan 30 23:49:19 np0005603435 nova_compute[239938]: 2026-01-31 04:49:19.403 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:19 np0005603435 NetworkManager[49097]: <info>  [1769834959.4058] manager: (patch-provnet-60fd0649-1231-4daa-859b-756d523d6d78-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Jan 30 23:49:19 np0005603435 nova_compute[239938]: 2026-01-31 04:49:19.488 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:19 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:19Z|00085|binding|INFO|Releasing lport 17a6f891-9bce-4b37-a6eb-eb44f21f3bd7 from this chassis (sb_readonly=0)
Jan 30 23:49:19 np0005603435 nova_compute[239938]: 2026-01-31 04:49:19.506 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:19 np0005603435 nova_compute[239938]: 2026-01-31 04:49:19.549 239942 DEBUG nova.compute.manager [req-46709efc-8cd6-449e-a747-ffabf28446b7 req-ab5a7dd5-2914-40e5-854f-12f6eb3a45a5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Received event network-vif-plugged-84cc8fc9-7d52-4528-bad3-524644ec103e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:49:19 np0005603435 nova_compute[239938]: 2026-01-31 04:49:19.550 239942 DEBUG oslo_concurrency.lockutils [req-46709efc-8cd6-449e-a747-ffabf28446b7 req-ab5a7dd5-2914-40e5-854f-12f6eb3a45a5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:19 np0005603435 nova_compute[239938]: 2026-01-31 04:49:19.551 239942 DEBUG oslo_concurrency.lockutils [req-46709efc-8cd6-449e-a747-ffabf28446b7 req-ab5a7dd5-2914-40e5-854f-12f6eb3a45a5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:19 np0005603435 nova_compute[239938]: 2026-01-31 04:49:19.551 239942 DEBUG oslo_concurrency.lockutils [req-46709efc-8cd6-449e-a747-ffabf28446b7 req-ab5a7dd5-2914-40e5-854f-12f6eb3a45a5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:19 np0005603435 nova_compute[239938]: 2026-01-31 04:49:19.552 239942 DEBUG nova.compute.manager [req-46709efc-8cd6-449e-a747-ffabf28446b7 req-ab5a7dd5-2914-40e5-854f-12f6eb3a45a5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] No waiting events found dispatching network-vif-plugged-84cc8fc9-7d52-4528-bad3-524644ec103e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:49:19 np0005603435 nova_compute[239938]: 2026-01-31 04:49:19.552 239942 WARNING nova.compute.manager [req-46709efc-8cd6-449e-a747-ffabf28446b7 req-ab5a7dd5-2914-40e5-854f-12f6eb3a45a5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Received unexpected event network-vif-plugged-84cc8fc9-7d52-4528-bad3-524644ec103e for instance with vm_state active and task_state None.#033[00m
Jan 30 23:49:19 np0005603435 nova_compute[239938]: 2026-01-31 04:49:19.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:49:19 np0005603435 nova_compute[239938]: 2026-01-31 04:49:19.970 239942 DEBUG nova.compute.manager [req-65c3c289-7b91-4a7a-999d-1c3d90a1f85f req-1bd62c55-f1ec-47ce-b999-4db8e9e6ca87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Received event network-changed-84cc8fc9-7d52-4528-bad3-524644ec103e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:49:19 np0005603435 nova_compute[239938]: 2026-01-31 04:49:19.971 239942 DEBUG nova.compute.manager [req-65c3c289-7b91-4a7a-999d-1c3d90a1f85f req-1bd62c55-f1ec-47ce-b999-4db8e9e6ca87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Refreshing instance network info cache due to event network-changed-84cc8fc9-7d52-4528-bad3-524644ec103e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:49:19 np0005603435 nova_compute[239938]: 2026-01-31 04:49:19.972 239942 DEBUG oslo_concurrency.lockutils [req-65c3c289-7b91-4a7a-999d-1c3d90a1f85f req-1bd62c55-f1ec-47ce-b999-4db8e9e6ca87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-3dfd6853-c0e1-446c-9f5d-097c8af910db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:49:19 np0005603435 nova_compute[239938]: 2026-01-31 04:49:19.972 239942 DEBUG oslo_concurrency.lockutils [req-65c3c289-7b91-4a7a-999d-1c3d90a1f85f req-1bd62c55-f1ec-47ce-b999-4db8e9e6ca87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-3dfd6853-c0e1-446c-9f5d-097c8af910db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:49:19 np0005603435 nova_compute[239938]: 2026-01-31 04:49:19.973 239942 DEBUG nova.network.neutron [req-65c3c289-7b91-4a7a-999d-1c3d90a1f85f req-1bd62c55-f1ec-47ce-b999-4db8e9e6ca87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Refreshing network info cache for port 84cc8fc9-7d52-4528-bad3-524644ec103e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:49:20 np0005603435 podman[254249]: 2026-01-31 04:49:20.072943602 +0000 UTC m=+0.043548594 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 30 23:49:20 np0005603435 podman[254250]: 2026-01-31 04:49:20.139801099 +0000 UTC m=+0.108695360 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 30 23:49:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Jan 30 23:49:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Jan 30 23:49:20 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Jan 30 23:49:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 88 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 1016 KiB/s wr, 49 op/s
Jan 30 23:49:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Jan 30 23:49:21 np0005603435 nova_compute[239938]: 2026-01-31 04:49:21.337 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Jan 30 23:49:21 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Jan 30 23:49:21 np0005603435 nova_compute[239938]: 2026-01-31 04:49:21.421 239942 DEBUG nova.network.neutron [req-65c3c289-7b91-4a7a-999d-1c3d90a1f85f req-1bd62c55-f1ec-47ce-b999-4db8e9e6ca87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Updated VIF entry in instance network info cache for port 84cc8fc9-7d52-4528-bad3-524644ec103e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:49:21 np0005603435 nova_compute[239938]: 2026-01-31 04:49:21.421 239942 DEBUG nova.network.neutron [req-65c3c289-7b91-4a7a-999d-1c3d90a1f85f req-1bd62c55-f1ec-47ce-b999-4db8e9e6ca87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Updating instance_info_cache with network_info: [{"id": "84cc8fc9-7d52-4528-bad3-524644ec103e", "address": "fa:16:3e:30:71:ab", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84cc8fc9-7d", "ovs_interfaceid": "84cc8fc9-7d52-4528-bad3-524644ec103e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:49:21 np0005603435 nova_compute[239938]: 2026-01-31 04:49:21.441 239942 DEBUG oslo_concurrency.lockutils [req-65c3c289-7b91-4a7a-999d-1c3d90a1f85f req-1bd62c55-f1ec-47ce-b999-4db8e9e6ca87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-3dfd6853-c0e1-446c-9f5d-097c8af910db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:49:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:49:22 np0005603435 nova_compute[239938]: 2026-01-31 04:49:22.625 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Acquiring lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:22 np0005603435 nova_compute[239938]: 2026-01-31 04:49:22.626 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:22 np0005603435 nova_compute[239938]: 2026-01-31 04:49:22.648 239942 DEBUG nova.compute.manager [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:49:22 np0005603435 nova_compute[239938]: 2026-01-31 04:49:22.733 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:22 np0005603435 nova_compute[239938]: 2026-01-31 04:49:22.734 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:22 np0005603435 nova_compute[239938]: 2026-01-31 04:49:22.740 239942 DEBUG nova.virt.hardware [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:49:22 np0005603435 nova_compute[239938]: 2026-01-31 04:49:22.740 239942 INFO nova.compute.claims [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:49:22 np0005603435 nova_compute[239938]: 2026-01-31 04:49:22.875 239942 DEBUG oslo_concurrency.processutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 88 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 28 KiB/s wr, 153 op/s
Jan 30 23:49:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:49:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1957150509' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.410 239942 DEBUG oslo_concurrency.processutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.417 239942 DEBUG nova.compute.provider_tree [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.435 239942 DEBUG nova.scheduler.client.report [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.461 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.462 239942 DEBUG nova.compute.manager [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.524 239942 DEBUG nova.compute.manager [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.524 239942 DEBUG nova.network.neutron [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.554 239942 INFO nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.579 239942 DEBUG nova.compute.manager [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.690 239942 DEBUG nova.compute.manager [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.692 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.693 239942 INFO nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Creating image(s)#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.724 239942 DEBUG nova.storage.rbd_utils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] rbd image c7e02002-03b8-47f5-b10e-39a5dfa4e4d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.763 239942 DEBUG nova.storage.rbd_utils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] rbd image c7e02002-03b8-47f5-b10e-39a5dfa4e4d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.794 239942 DEBUG nova.storage.rbd_utils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] rbd image c7e02002-03b8-47f5-b10e-39a5dfa4e4d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.801 239942 DEBUG oslo_concurrency.processutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.822 239942 DEBUG nova.policy [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fc009f2d3a86499c9b2b11e334162e5e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd42316247b96450c9011d2b8cc7fbaaf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.875 239942 DEBUG oslo_concurrency.processutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.876 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Acquiring lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.877 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.878 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.907 239942 DEBUG nova.storage.rbd_utils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] rbd image c7e02002-03b8-47f5-b10e-39a5dfa4e4d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:49:23 np0005603435 nova_compute[239938]: 2026-01-31 04:49:23.912 239942 DEBUG oslo_concurrency.processutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 c7e02002-03b8-47f5-b10e-39a5dfa4e4d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:24 np0005603435 nova_compute[239938]: 2026-01-31 04:49:24.184 239942 DEBUG oslo_concurrency.processutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 c7e02002-03b8-47f5-b10e-39a5dfa4e4d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:24 np0005603435 nova_compute[239938]: 2026-01-31 04:49:24.259 239942 DEBUG nova.storage.rbd_utils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] resizing rbd image c7e02002-03b8-47f5-b10e-39a5dfa4e4d3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 30 23:49:24 np0005603435 nova_compute[239938]: 2026-01-31 04:49:24.310 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:24 np0005603435 nova_compute[239938]: 2026-01-31 04:49:24.349 239942 DEBUG nova.objects.instance [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lazy-loading 'migration_context' on Instance uuid c7e02002-03b8-47f5-b10e-39a5dfa4e4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:49:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Jan 30 23:49:24 np0005603435 nova_compute[239938]: 2026-01-31 04:49:24.366 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 30 23:49:24 np0005603435 nova_compute[239938]: 2026-01-31 04:49:24.367 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Ensure instance console log exists: /var/lib/nova/instances/c7e02002-03b8-47f5-b10e-39a5dfa4e4d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:49:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Jan 30 23:49:24 np0005603435 nova_compute[239938]: 2026-01-31 04:49:24.367 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:24 np0005603435 nova_compute[239938]: 2026-01-31 04:49:24.368 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:24 np0005603435 nova_compute[239938]: 2026-01-31 04:49:24.368 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:24 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Jan 30 23:49:24 np0005603435 nova_compute[239938]: 2026-01-31 04:49:24.820 239942 DEBUG nova.network.neutron [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Successfully created port: 8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:49:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 105 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.0 MiB/s wr, 191 op/s
Jan 30 23:49:25 np0005603435 nova_compute[239938]: 2026-01-31 04:49:25.464 239942 DEBUG nova.network.neutron [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Successfully updated port: 8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:49:25 np0005603435 nova_compute[239938]: 2026-01-31 04:49:25.482 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Acquiring lock "refresh_cache-c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:49:25 np0005603435 nova_compute[239938]: 2026-01-31 04:49:25.482 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Acquired lock "refresh_cache-c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:49:25 np0005603435 nova_compute[239938]: 2026-01-31 04:49:25.482 239942 DEBUG nova.network.neutron [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:49:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:49:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2365689014' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:49:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:49:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2365689014' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:49:25 np0005603435 nova_compute[239938]: 2026-01-31 04:49:25.580 239942 DEBUG nova.compute.manager [req-26600ab7-d96a-4923-bd0b-10531c1b24a0 req-59547ff9-6575-4ec2-b8fe-9f573aa96ba8 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Received event network-changed-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:49:25 np0005603435 nova_compute[239938]: 2026-01-31 04:49:25.581 239942 DEBUG nova.compute.manager [req-26600ab7-d96a-4923-bd0b-10531c1b24a0 req-59547ff9-6575-4ec2-b8fe-9f573aa96ba8 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Refreshing instance network info cache due to event network-changed-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:49:25 np0005603435 nova_compute[239938]: 2026-01-31 04:49:25.581 239942 DEBUG oslo_concurrency.lockutils [req-26600ab7-d96a-4923-bd0b-10531c1b24a0 req-59547ff9-6575-4ec2-b8fe-9f573aa96ba8 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:49:25 np0005603435 nova_compute[239938]: 2026-01-31 04:49:25.867 239942 DEBUG nova.network.neutron [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.344 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.948 239942 DEBUG nova.network.neutron [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Updating instance_info_cache with network_info: [{"id": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "address": "fa:16:3e:86:31:3f", "network": {"id": "475d6d28-d627-470b-bb8c-79572c246996", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2051303700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d42316247b96450c9011d2b8cc7fbaaf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8336bfaa-b5", "ovs_interfaceid": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.971 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Releasing lock "refresh_cache-c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.971 239942 DEBUG nova.compute.manager [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Instance network_info: |[{"id": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "address": "fa:16:3e:86:31:3f", "network": {"id": "475d6d28-d627-470b-bb8c-79572c246996", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2051303700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d42316247b96450c9011d2b8cc7fbaaf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8336bfaa-b5", "ovs_interfaceid": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.972 239942 DEBUG oslo_concurrency.lockutils [req-26600ab7-d96a-4923-bd0b-10531c1b24a0 req-59547ff9-6575-4ec2-b8fe-9f573aa96ba8 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.972 239942 DEBUG nova.network.neutron [req-26600ab7-d96a-4923-bd0b-10531c1b24a0 req-59547ff9-6575-4ec2-b8fe-9f573aa96ba8 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Refreshing network info cache for port 8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.975 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Start _get_guest_xml network_info=[{"id": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "address": "fa:16:3e:86:31:3f", "network": {"id": "475d6d28-d627-470b-bb8c-79572c246996", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2051303700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d42316247b96450c9011d2b8cc7fbaaf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8336bfaa-b5", "ovs_interfaceid": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'guest_format': None, 'image_id': 'bf004ad8-fb70-4caa-9170-9f02e22d687d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.981 239942 WARNING nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.986 239942 DEBUG nova.virt.libvirt.host [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.987 239942 DEBUG nova.virt.libvirt.host [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.990 239942 DEBUG nova.virt.libvirt.host [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.990 239942 DEBUG nova.virt.libvirt.host [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.991 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.991 239942 DEBUG nova.virt.hardware [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.992 239942 DEBUG nova.virt.hardware [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.992 239942 DEBUG nova.virt.hardware [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.993 239942 DEBUG nova.virt.hardware [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.993 239942 DEBUG nova.virt.hardware [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.993 239942 DEBUG nova.virt.hardware [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.994 239942 DEBUG nova.virt.hardware [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.994 239942 DEBUG nova.virt.hardware [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.994 239942 DEBUG nova.virt.hardware [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.995 239942 DEBUG nova.virt.hardware [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.995 239942 DEBUG nova.virt.hardware [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:49:26 np0005603435 nova_compute[239938]: 2026-01-31 04:49:26.998 239942 DEBUG oslo_concurrency.processutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:49:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/418595826' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:49:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:49:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/418595826' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:49:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 134 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.1 MiB/s wr, 216 op/s
Jan 30 23:49:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:49:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3463904456' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:49:27 np0005603435 nova_compute[239938]: 2026-01-31 04:49:27.528 239942 DEBUG oslo_concurrency.processutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:27 np0005603435 nova_compute[239938]: 2026-01-31 04:49:27.561 239942 DEBUG nova.storage.rbd_utils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] rbd image c7e02002-03b8-47f5-b10e-39a5dfa4e4d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:49:27 np0005603435 nova_compute[239938]: 2026-01-31 04:49:27.568 239942 DEBUG oslo_concurrency.processutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:49:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1839988923' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.098 239942 DEBUG oslo_concurrency.processutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.100 239942 DEBUG nova.virt.libvirt.vif [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:49:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-853286388',display_name='tempest-TestEncryptedCinderVolumes-server-853286388',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-853286388',id=9,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNVBpGbwYpcpfXjxsvtSvcDeT1FaISLwAZ/9BgPFKHGAh9eJ4D0UzsZ5XjlQ3ptcgSI3XCOH89tmHw33jJzCXQ/Onlpic/eBRza/Vw1bmd5yR4SNgC+7T6jusqef58/Bcg==',key_name='tempest-keypair-1863742156',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d42316247b96450c9011d2b8cc7fbaaf',ramdisk_id='',reservation_id='r-qr6fran1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1286074295',owner_user_name='tempest-TestEncryptedCinderVolumes-1286074295-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:49:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fc009f2d3a86499c9b2b11e334162e5e',uuid=c7e02002-03b8-47f5-b10e-39a5dfa4e4d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "address": "fa:16:3e:86:31:3f", "network": {"id": "475d6d28-d627-470b-bb8c-79572c246996", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2051303700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d42316247b96450c9011d2b8cc7fbaaf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8336bfaa-b5", "ovs_interfaceid": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.101 239942 DEBUG nova.network.os_vif_util [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Converting VIF {"id": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "address": "fa:16:3e:86:31:3f", "network": {"id": "475d6d28-d627-470b-bb8c-79572c246996", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2051303700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d42316247b96450c9011d2b8cc7fbaaf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8336bfaa-b5", "ovs_interfaceid": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.102 239942 DEBUG nova.network.os_vif_util [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:31:3f,bridge_name='br-int',has_traffic_filtering=True,id=8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a,network=Network(475d6d28-d627-470b-bb8c-79572c246996),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8336bfaa-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.104 239942 DEBUG nova.objects.instance [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lazy-loading 'pci_devices' on Instance uuid c7e02002-03b8-47f5-b10e-39a5dfa4e4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.107 239942 DEBUG nova.network.neutron [req-26600ab7-d96a-4923-bd0b-10531c1b24a0 req-59547ff9-6575-4ec2-b8fe-9f573aa96ba8 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Updated VIF entry in instance network info cache for port 8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.107 239942 DEBUG nova.network.neutron [req-26600ab7-d96a-4923-bd0b-10531c1b24a0 req-59547ff9-6575-4ec2-b8fe-9f573aa96ba8 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Updating instance_info_cache with network_info: [{"id": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "address": "fa:16:3e:86:31:3f", "network": {"id": "475d6d28-d627-470b-bb8c-79572c246996", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2051303700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d42316247b96450c9011d2b8cc7fbaaf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8336bfaa-b5", "ovs_interfaceid": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.128 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  <uuid>c7e02002-03b8-47f5-b10e-39a5dfa4e4d3</uuid>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  <name>instance-00000009</name>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-853286388</nova:name>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:49:26</nova:creationTime>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:        <nova:user uuid="fc009f2d3a86499c9b2b11e334162e5e">tempest-TestEncryptedCinderVolumes-1286074295-project-member</nova:user>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:        <nova:project uuid="d42316247b96450c9011d2b8cc7fbaaf">tempest-TestEncryptedCinderVolumes-1286074295</nova:project>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <nova:root type="image" uuid="bf004ad8-fb70-4caa-9170-9f02e22d687d"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:        <nova:port uuid="8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <entry name="serial">c7e02002-03b8-47f5-b10e-39a5dfa4e4d3</entry>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <entry name="uuid">c7e02002-03b8-47f5-b10e-39a5dfa4e4d3</entry>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/c7e02002-03b8-47f5-b10e-39a5dfa4e4d3_disk">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/c7e02002-03b8-47f5-b10e-39a5dfa4e4d3_disk.config">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:86:31:3f"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <target dev="tap8336bfaa-b5"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/c7e02002-03b8-47f5-b10e-39a5dfa4e4d3/console.log" append="off"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:49:28 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:49:28 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:49:28 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:49:28 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.130 239942 DEBUG nova.compute.manager [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Preparing to wait for external event network-vif-plugged-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.131 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Acquiring lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.131 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.131 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.133 239942 DEBUG nova.virt.libvirt.vif [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:49:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-853286388',display_name='tempest-TestEncryptedCinderVolumes-server-853286388',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-853286388',id=9,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNVBpGbwYpcpfXjxsvtSvcDeT1FaISLwAZ/9BgPFKHGAh9eJ4D0UzsZ5XjlQ3ptcgSI3XCOH89tmHw33jJzCXQ/Onlpic/eBRza/Vw1bmd5yR4SNgC+7T6jusqef58/Bcg==',key_name='tempest-keypair-1863742156',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d42316247b96450c9011d2b8cc7fbaaf',ramdisk_id='',reservation_id='r-qr6fran1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1286074295',owner_user_name='tempest-TestEncryptedCinderVolumes-1286074295-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:49:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fc009f2d3a86499c9b2b11e334162e5e',uuid=c7e02002-03b8-47f5-b10e-39a5dfa4e4d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "address": "fa:16:3e:86:31:3f", "network": {"id": "475d6d28-d627-470b-bb8c-79572c246996", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2051303700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d42316247b96450c9011d2b8cc7fbaaf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8336bfaa-b5", "ovs_interfaceid": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.133 239942 DEBUG nova.network.os_vif_util [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Converting VIF {"id": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "address": "fa:16:3e:86:31:3f", "network": {"id": "475d6d28-d627-470b-bb8c-79572c246996", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2051303700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d42316247b96450c9011d2b8cc7fbaaf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8336bfaa-b5", "ovs_interfaceid": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.134 239942 DEBUG nova.network.os_vif_util [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:31:3f,bridge_name='br-int',has_traffic_filtering=True,id=8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a,network=Network(475d6d28-d627-470b-bb8c-79572c246996),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8336bfaa-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.135 239942 DEBUG os_vif [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:31:3f,bridge_name='br-int',has_traffic_filtering=True,id=8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a,network=Network(475d6d28-d627-470b-bb8c-79572c246996),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8336bfaa-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.137 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.138 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.138 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.139 239942 DEBUG oslo_concurrency.lockutils [req-26600ab7-d96a-4923-bd0b-10531c1b24a0 req-59547ff9-6575-4ec2-b8fe-9f573aa96ba8 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.143 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.143 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8336bfaa-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.144 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8336bfaa-b5, col_values=(('external_ids', {'iface-id': '8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:31:3f', 'vm-uuid': 'c7e02002-03b8-47f5-b10e-39a5dfa4e4d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.147 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:28 np0005603435 NetworkManager[49097]: <info>  [1769834968.1479] manager: (tap8336bfaa-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.152 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.154 239942 INFO os_vif [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:31:3f,bridge_name='br-int',has_traffic_filtering=True,id=8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a,network=Network(475d6d28-d627-470b-bb8c-79572c246996),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8336bfaa-b5')#033[00m
Jan 30 23:49:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:49:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/331773508' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:49:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:49:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/331773508' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.202 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.202 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.203 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] No VIF found with MAC fa:16:3e:86:31:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.204 239942 INFO nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Using config drive#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.230 239942 DEBUG nova.storage.rbd_utils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] rbd image c7e02002-03b8-47f5-b10e-39a5dfa4e4d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.469 239942 INFO nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Creating config drive at /var/lib/nova/instances/c7e02002-03b8-47f5-b10e-39a5dfa4e4d3/disk.config#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.476 239942 DEBUG oslo_concurrency.processutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c7e02002-03b8-47f5-b10e-39a5dfa4e4d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_82pu1id execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.598 239942 DEBUG oslo_concurrency.processutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c7e02002-03b8-47f5-b10e-39a5dfa4e4d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_82pu1id" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.631 239942 DEBUG nova.storage.rbd_utils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] rbd image c7e02002-03b8-47f5-b10e-39a5dfa4e4d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.635 239942 DEBUG oslo_concurrency.processutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c7e02002-03b8-47f5-b10e-39a5dfa4e4d3/disk.config c7e02002-03b8-47f5-b10e-39a5dfa4e4d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.786 239942 DEBUG oslo_concurrency.processutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c7e02002-03b8-47f5-b10e-39a5dfa4e4d3/disk.config c7e02002-03b8-47f5-b10e-39a5dfa4e4d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.787 239942 INFO nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Deleting local config drive /var/lib/nova/instances/c7e02002-03b8-47f5-b10e-39a5dfa4e4d3/disk.config because it was imported into RBD.#033[00m
Jan 30 23:49:28 np0005603435 kernel: tap8336bfaa-b5: entered promiscuous mode
Jan 30 23:49:28 np0005603435 NetworkManager[49097]: <info>  [1769834968.8337] manager: (tap8336bfaa-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Jan 30 23:49:28 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:28Z|00086|binding|INFO|Claiming lport 8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a for this chassis.
Jan 30 23:49:28 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:28Z|00087|binding|INFO|8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a: Claiming fa:16:3e:86:31:3f 10.100.0.12
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.836 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:28.843 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:31:3f 10.100.0.12'], port_security=['fa:16:3e:86:31:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c7e02002-03b8-47f5-b10e-39a5dfa4e4d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-475d6d28-d627-470b-bb8c-79572c246996', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd42316247b96450c9011d2b8cc7fbaaf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c74fd1da-8160-4acd-871d-1657c1321987', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7c34256-5506-4cde-bee9-321cad32dece, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:49:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:28.847 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a in datapath 475d6d28-d627-470b-bb8c-79572c246996 bound to our chassis#033[00m
Jan 30 23:49:28 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:28Z|00088|binding|INFO|Setting lport 8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a ovn-installed in OVS
Jan 30 23:49:28 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:28Z|00089|binding|INFO|Setting lport 8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a up in Southbound
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.850 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:28.851 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 475d6d28-d627-470b-bb8c-79572c246996#033[00m
Jan 30 23:49:28 np0005603435 nova_compute[239938]: 2026-01-31 04:49:28.853 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:28 np0005603435 systemd-machined[208030]: New machine qemu-9-instance-00000009.
Jan 30 23:49:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:28.866 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[1489f5dd-cd77-47f7-90e1-082b6a7befd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:28.868 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap475d6d28-d1 in ovnmeta-475d6d28-d627-470b-bb8c-79572c246996 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:49:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:28.869 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap475d6d28-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:49:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:28.869 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[dac89869-5262-4bd7-8462-c0cd16a31320]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:28.870 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[ba3794cd-7fa3-4f09-abd5-37a7d5ec4221]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:28 np0005603435 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Jan 30 23:49:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:28.880 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[488674bc-c859-4de4-8a1a-76d1f4f6b76e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:28 np0005603435 systemd-udevd[254619]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:49:28 np0005603435 NetworkManager[49097]: <info>  [1769834968.9045] device (tap8336bfaa-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:49:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:28.902 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb7942e-5a96-490f-96bf-2857c4dd4d61]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:28 np0005603435 NetworkManager[49097]: <info>  [1769834968.9065] device (tap8336bfaa-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:49:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:28.936 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[acf7b5e6-a93f-4724-9481-73f03ea76048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:28 np0005603435 NetworkManager[49097]: <info>  [1769834968.9445] manager: (tap475d6d28-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Jan 30 23:49:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:28.944 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2007b66f-9ff9-4a74-819b-5c819e453e96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:28.977 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[08d8dded-9bf8-4624-b41f-7de164b3a400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:28.981 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[efa86cc3-7234-4ade-862f-0a382af96047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:29 np0005603435 NetworkManager[49097]: <info>  [1769834969.0005] device (tap475d6d28-d0): carrier: link connected
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:29.004 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd6c3e8-8c20-4ace-a216-15cd2864c96d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:29.016 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[eebab7fc-2d5e-4d8e-8dc6-e40570d209f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap475d6d28-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:44:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402340, 'reachable_time': 43111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254651, 'error': None, 'target': 'ovnmeta-475d6d28-d627-470b-bb8c-79572c246996', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:29.031 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[dbd14fbd-4046-4471-a488-88f7c6d34964]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:44a2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402340, 'tstamp': 402340}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254652, 'error': None, 'target': 'ovnmeta-475d6d28-d627-470b-bb8c-79572c246996', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:29.047 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[8390cd85-1419-441d-bd75-b56041556552]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap475d6d28-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:44:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402340, 'reachable_time': 43111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254653, 'error': None, 'target': 'ovnmeta-475d6d28-d627-470b-bb8c-79572c246996', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.056 239942 DEBUG nova.compute.manager [req-b2219c5b-c2ab-4f74-a0d9-0db0bbb4dd96 req-7141663e-af45-45b1-aece-b888d2ed53e2 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Received event network-vif-plugged-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.056 239942 DEBUG oslo_concurrency.lockutils [req-b2219c5b-c2ab-4f74-a0d9-0db0bbb4dd96 req-7141663e-af45-45b1-aece-b888d2ed53e2 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.057 239942 DEBUG oslo_concurrency.lockutils [req-b2219c5b-c2ab-4f74-a0d9-0db0bbb4dd96 req-7141663e-af45-45b1-aece-b888d2ed53e2 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.057 239942 DEBUG oslo_concurrency.lockutils [req-b2219c5b-c2ab-4f74-a0d9-0db0bbb4dd96 req-7141663e-af45-45b1-aece-b888d2ed53e2 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.058 239942 DEBUG nova.compute.manager [req-b2219c5b-c2ab-4f74-a0d9-0db0bbb4dd96 req-7141663e-af45-45b1-aece-b888d2ed53e2 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Processing event network-vif-plugged-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:29.081 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[340c66a8-2ea1-459e-97dd-fd1034034c63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:29.137 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2969d0-1898-4d30-b446-93283012b2c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:29.138 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap475d6d28-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:29.139 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:29.139 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap475d6d28-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:49:29 np0005603435 NetworkManager[49097]: <info>  [1769834969.1415] manager: (tap475d6d28-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Jan 30 23:49:29 np0005603435 kernel: tap475d6d28-d0: entered promiscuous mode
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:29.144 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap475d6d28-d0, col_values=(('external_ids', {'iface-id': '89072000-24b2-4074-9495-8478eeb7ac9c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.145 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:29.148 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/475d6d28-d627-470b-bb8c-79572c246996.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/475d6d28-d627-470b-bb8c-79572c246996.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:29.149 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[5822fc4f-e562-490f-9a1e-350e401a6fa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:29.150 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-475d6d28-d627-470b-bb8c-79572c246996
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/475d6d28-d627-470b-bb8c-79572c246996.pid.haproxy
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 475d6d28-d627-470b-bb8c-79572c246996
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:49:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:29.151 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-475d6d28-d627-470b-bb8c-79572c246996', 'env', 'PROCESS_TAG=haproxy-475d6d28-d627-470b-bb8c-79572c246996', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/475d6d28-d627-470b-bb8c-79572c246996.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:49:29 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:29Z|00090|binding|INFO|Releasing lport 89072000-24b2-4074-9495-8478eeb7ac9c from this chassis (sb_readonly=0)
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.163 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 134 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 187 op/s
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.311 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:49:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1028615860' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:49:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:49:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1028615860' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:49:29 np0005603435 podman[254700]: 2026-01-31 04:49:29.485372221 +0000 UTC m=+0.051400701 container create fe7ef8aa44bf6b5106a724a787abae6713009a8cfdafef0b4b68f8d6ddb35cd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-475d6d28-d627-470b-bb8c-79572c246996, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:49:29 np0005603435 systemd[1]: Started libpod-conmon-fe7ef8aa44bf6b5106a724a787abae6713009a8cfdafef0b4b68f8d6ddb35cd3.scope.
Jan 30 23:49:29 np0005603435 podman[254700]: 2026-01-31 04:49:29.453504155 +0000 UTC m=+0.019532635 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:49:29 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:49:29 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3dc1d73fa9ad0afa3c4957ab7f6aaa1f4e4968192819590257ecc329ec82a28/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:49:29 np0005603435 podman[254700]: 2026-01-31 04:49:29.566734282 +0000 UTC m=+0.132762742 container init fe7ef8aa44bf6b5106a724a787abae6713009a8cfdafef0b4b68f8d6ddb35cd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-475d6d28-d627-470b-bb8c-79572c246996, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.570 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834969.570385, c7e02002-03b8-47f5-b10e-39a5dfa4e4d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.571 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] VM Started (Lifecycle Event)#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.573 239942 DEBUG nova.compute.manager [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:49:29 np0005603435 podman[254700]: 2026-01-31 04:49:29.57424723 +0000 UTC m=+0.140275670 container start fe7ef8aa44bf6b5106a724a787abae6713009a8cfdafef0b4b68f8d6ddb35cd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-475d6d28-d627-470b-bb8c-79572c246996, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.578 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.581 239942 INFO nova.virt.libvirt.driver [-] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Instance spawned successfully.#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.581 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.593 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.598 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.602 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.603 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.603 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.603 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.604 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.604 239942 DEBUG nova.virt.libvirt.driver [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:49:29 np0005603435 neutron-haproxy-ovnmeta-475d6d28-d627-470b-bb8c-79572c246996[254739]: [NOTICE]   (254744) : New worker (254746) forked
Jan 30 23:49:29 np0005603435 neutron-haproxy-ovnmeta-475d6d28-d627-470b-bb8c-79572c246996[254739]: [NOTICE]   (254744) : Loading success.
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.624 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.624 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834969.57059, c7e02002-03b8-47f5-b10e-39a5dfa4e4d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.624 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.649 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.651 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769834969.5780437, c7e02002-03b8-47f5-b10e-39a5dfa4e4d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.651 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.672 239942 INFO nova.compute.manager [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Took 5.98 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.672 239942 DEBUG nova.compute.manager [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.673 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.678 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.713 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.738 239942 INFO nova.compute.manager [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Took 7.03 seconds to build instance.#033[00m
Jan 30 23:49:29 np0005603435 nova_compute[239938]: 2026-01-31 04:49:29.752 239942 DEBUG oslo_concurrency.lockutils [None req-b3dde78f-bd4a-4d30-8c9f-484ba3d96193 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:29 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:29Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:71:ab 10.100.0.11
Jan 30 23:49:29 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:29Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:71:ab 10.100.0.11
Jan 30 23:49:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:49:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3233571779' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:49:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:49:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3233571779' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:49:31 np0005603435 nova_compute[239938]: 2026-01-31 04:49:31.159 239942 DEBUG nova.compute.manager [req-1248180d-bef1-4ba9-a3a6-8f12fbb07723 req-50ffdc5c-23ad-4450-841e-70c9491fe4d5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Received event network-vif-plugged-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:49:31 np0005603435 nova_compute[239938]: 2026-01-31 04:49:31.159 239942 DEBUG oslo_concurrency.lockutils [req-1248180d-bef1-4ba9-a3a6-8f12fbb07723 req-50ffdc5c-23ad-4450-841e-70c9491fe4d5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:31 np0005603435 nova_compute[239938]: 2026-01-31 04:49:31.160 239942 DEBUG oslo_concurrency.lockutils [req-1248180d-bef1-4ba9-a3a6-8f12fbb07723 req-50ffdc5c-23ad-4450-841e-70c9491fe4d5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:31 np0005603435 nova_compute[239938]: 2026-01-31 04:49:31.160 239942 DEBUG oslo_concurrency.lockutils [req-1248180d-bef1-4ba9-a3a6-8f12fbb07723 req-50ffdc5c-23ad-4450-841e-70c9491fe4d5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:31 np0005603435 nova_compute[239938]: 2026-01-31 04:49:31.160 239942 DEBUG nova.compute.manager [req-1248180d-bef1-4ba9-a3a6-8f12fbb07723 req-50ffdc5c-23ad-4450-841e-70c9491fe4d5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] No waiting events found dispatching network-vif-plugged-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:49:31 np0005603435 nova_compute[239938]: 2026-01-31 04:49:31.161 239942 WARNING nova.compute.manager [req-1248180d-bef1-4ba9-a3a6-8f12fbb07723 req-50ffdc5c-23ad-4450-841e-70c9491fe4d5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Received unexpected event network-vif-plugged-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a for instance with vm_state active and task_state None.#033[00m
Jan 30 23:49:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 143 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 217 op/s
Jan 30 23:49:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:49:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Jan 30 23:49:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Jan 30 23:49:31 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Jan 30 23:49:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:49:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1254203627' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:49:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:49:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1254203627' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:49:32 np0005603435 nova_compute[239938]: 2026-01-31 04:49:32.508 239942 DEBUG nova.compute.manager [req-8f8ed827-865f-4f06-a909-c092048de316 req-6db95037-c880-4ede-af27-cdbfd5a3eaae c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Received event network-changed-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 30 23:49:32 np0005603435 nova_compute[239938]: 2026-01-31 04:49:32.509 239942 DEBUG nova.compute.manager [req-8f8ed827-865f-4f06-a909-c092048de316 req-6db95037-c880-4ede-af27-cdbfd5a3eaae c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Refreshing instance network info cache due to event network-changed-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 30 23:49:32 np0005603435 nova_compute[239938]: 2026-01-31 04:49:32.509 239942 DEBUG oslo_concurrency.lockutils [req-8f8ed827-865f-4f06-a909-c092048de316 req-6db95037-c880-4ede-af27-cdbfd5a3eaae c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 30 23:49:32 np0005603435 nova_compute[239938]: 2026-01-31 04:49:32.510 239942 DEBUG oslo_concurrency.lockutils [req-8f8ed827-865f-4f06-a909-c092048de316 req-6db95037-c880-4ede-af27-cdbfd5a3eaae c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 30 23:49:32 np0005603435 nova_compute[239938]: 2026-01-31 04:49:32.510 239942 DEBUG nova.network.neutron [req-8f8ed827-865f-4f06-a909-c092048de316 req-6db95037-c880-4ede-af27-cdbfd5a3eaae c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Refreshing network info cache for port 8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 30 23:49:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:49:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/118242921' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:49:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:49:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/118242921' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:49:33 np0005603435 nova_compute[239938]: 2026-01-31 04:49:33.148 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:49:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.3 MiB/s wr, 346 op/s
Jan 30 23:49:33 np0005603435 nova_compute[239938]: 2026-01-31 04:49:33.721 239942 DEBUG nova.network.neutron [req-8f8ed827-865f-4f06-a909-c092048de316 req-6db95037-c880-4ede-af27-cdbfd5a3eaae c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Updated VIF entry in instance network info cache for port 8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 30 23:49:33 np0005603435 nova_compute[239938]: 2026-01-31 04:49:33.722 239942 DEBUG nova.network.neutron [req-8f8ed827-865f-4f06-a909-c092048de316 req-6db95037-c880-4ede-af27-cdbfd5a3eaae c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Updating instance_info_cache with network_info: [{"id": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "address": "fa:16:3e:86:31:3f", "network": {"id": "475d6d28-d627-470b-bb8c-79572c246996", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2051303700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d42316247b96450c9011d2b8cc7fbaaf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8336bfaa-b5", "ovs_interfaceid": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 30 23:49:33 np0005603435 nova_compute[239938]: 2026-01-31 04:49:33.745 239942 DEBUG oslo_concurrency.lockutils [req-8f8ed827-865f-4f06-a909-c092048de316 req-6db95037-c880-4ede-af27-cdbfd5a3eaae c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 30 23:49:34 np0005603435 nova_compute[239938]: 2026-01-31 04:49:34.314 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:49:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 324 op/s
Jan 30 23:49:35 np0005603435 nova_compute[239938]: 2026-01-31 04:49:35.793 239942 DEBUG oslo_concurrency.lockutils [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "3dfd6853-c0e1-446c-9f5d-097c8af910db" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:49:35 np0005603435 nova_compute[239938]: 2026-01-31 04:49:35.794 239942 DEBUG oslo_concurrency.lockutils [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:49:35 np0005603435 nova_compute[239938]: 2026-01-31 04:49:35.808 239942 DEBUG nova.objects.instance [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lazy-loading 'flavor' on Instance uuid 3dfd6853-c0e1-446c-9f5d-097c8af910db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 30 23:49:35 np0005603435 nova_compute[239938]: 2026-01-31 04:49:35.826 239942 INFO nova.virt.libvirt.driver [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Ignoring supplied device name: /dev/vdb
Jan 30 23:49:35 np0005603435 nova_compute[239938]: 2026-01-31 04:49:35.843 239942 DEBUG oslo_concurrency.lockutils [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.043 239942 DEBUG oslo_concurrency.lockutils [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "3dfd6853-c0e1-446c-9f5d-097c8af910db" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.044 239942 DEBUG oslo_concurrency.lockutils [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.044 239942 INFO nova.compute.manager [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Attaching volume 2d8a5544-500e-424a-b08b-a486887dcd73 to /dev/vdb
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.183 239942 DEBUG os_brick.utils [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.184 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.195 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.195 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[bfdb8e71-0c58-40e5-ae84-5d7b97ab9837]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.196 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.202 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.203 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[30b7d54b-1a61-41b6-ad37-5df9cae22cf7]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.204 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.211 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.211 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[d08fc89f-48be-477f-8f17-386875363e72]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.212 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[660e7f28-a4f3-45b9-968b-11f844531fd9]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.213 239942 DEBUG oslo_concurrency.processutils [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.227 239942 DEBUG oslo_concurrency.processutils [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "nvme version" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.229 239942 DEBUG os_brick.initiator.connectors.lightos [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.230 239942 DEBUG os_brick.initiator.connectors.lightos [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.230 239942 DEBUG os_brick.initiator.connectors.lightos [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.230 239942 DEBUG os_brick.utils [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] <== get_connector_properties: return (47ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.231 239942 DEBUG nova.virt.block_device [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Updating existing volume attachment record: e6e0a6c6-65b6-4683-8361-7ac71ca5d2ff _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 30 23:49:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:49:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:49:36 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3746700793' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:49:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:49:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:49:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:49:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:49:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:49:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.955 239942 DEBUG nova.objects.instance [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lazy-loading 'flavor' on Instance uuid 3dfd6853-c0e1-446c-9f5d-097c8af910db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.979 239942 DEBUG nova.virt.libvirt.driver [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Attempting to attach volume 2d8a5544-500e-424a-b08b-a486887dcd73 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 30 23:49:36 np0005603435 nova_compute[239938]: 2026-01-31 04:49:36.982 239942 DEBUG nova.virt.libvirt.guest [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] attach device xml: <disk type="network" device="disk">
Jan 30 23:49:36 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:49:36 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-2d8a5544-500e-424a-b08b-a486887dcd73">
Jan 30 23:49:36 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:49:36 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:49:36 np0005603435 nova_compute[239938]:  <auth username="openstack">
Jan 30 23:49:36 np0005603435 nova_compute[239938]:    <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:49:36 np0005603435 nova_compute[239938]:  </auth>
Jan 30 23:49:36 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:49:36 np0005603435 nova_compute[239938]:  <serial>2d8a5544-500e-424a-b08b-a486887dcd73</serial>
Jan 30 23:49:36 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:49:36 np0005603435 nova_compute[239938]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 30 23:49:37 np0005603435 nova_compute[239938]: 2026-01-31 04:49:37.096 239942 DEBUG nova.virt.libvirt.driver [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 30 23:49:37 np0005603435 nova_compute[239938]: 2026-01-31 04:49:37.097 239942 DEBUG nova.virt.libvirt.driver [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 30 23:49:37 np0005603435 nova_compute[239938]: 2026-01-31 04:49:37.097 239942 DEBUG nova.virt.libvirt.driver [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 30 23:49:37 np0005603435 nova_compute[239938]: 2026-01-31 04:49:37.097 239942 DEBUG nova.virt.libvirt.driver [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] No VIF found with MAC fa:16:3e:30:71:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 30 23:49:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 259 op/s
Jan 30 23:49:37 np0005603435 nova_compute[239938]: 2026-01-31 04:49:37.386 239942 DEBUG oslo_concurrency.lockutils [None req-6dc6e956-f5f7-4fe6-8ee0-a33b53d4ff35 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:49:38 np0005603435 nova_compute[239938]: 2026-01-31 04:49:38.152 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:49:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:49:39 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/234004225' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:49:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 259 op/s
Jan 30 23:49:39 np0005603435 nova_compute[239938]: 2026-01-31 04:49:39.315 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:49:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:49:39 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4029751122' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:49:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:49:39 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4029751122' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:49:40 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:40Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:86:31:3f 10.100.0.12
Jan 30 23:49:40 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:40Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:31:3f 10.100.0.12
Jan 30 23:49:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Jan 30 23:49:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Jan 30 23:49:40 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Jan 30 23:49:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Jan 30 23:49:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Jan 30 23:49:41 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Jan 30 23:49:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:49:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3036436369' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:49:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:49:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3036436369' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:49:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 175 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 939 KiB/s rd, 1.1 MiB/s wr, 104 op/s
Jan 30 23:49:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:49:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Jan 30 23:49:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Jan 30 23:49:41 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Jan 30 23:49:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:49:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/409420424' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:49:43 np0005603435 nova_compute[239938]: 2026-01-31 04:49:43.155 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:49:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 198 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 641 KiB/s rd, 4.2 MiB/s wr, 214 op/s
Jan 30 23:49:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:49:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1497402889' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:49:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:49:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1497402889' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:49:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:49:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/95158762' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:49:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:49:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/95158762' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:49:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Jan 30 23:49:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Jan 30 23:49:44 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Jan 30 23:49:44 np0005603435 nova_compute[239938]: 2026-01-31 04:49:44.318 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:49:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 200 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 866 KiB/s rd, 5.0 MiB/s wr, 301 op/s
Jan 30 23:49:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Jan 30 23:49:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Jan 30 23:49:45 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Jan 30 23:49:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:49:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2560234672' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:49:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:49:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2560234672' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:49:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:49:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 200 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 484 KiB/s rd, 2.8 MiB/s wr, 267 op/s
Jan 30 23:49:48 np0005603435 nova_compute[239938]: 2026-01-31 04:49:48.160 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:49:48 np0005603435 nova_compute[239938]: 2026-01-31 04:49:48.877 239942 DEBUG oslo_concurrency.lockutils [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Acquiring lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:49:48 np0005603435 nova_compute[239938]: 2026-01-31 04:49:48.878 239942 DEBUG oslo_concurrency.lockutils [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:49:48 np0005603435 nova_compute[239938]: 2026-01-31 04:49:48.895 239942 DEBUG nova.objects.instance [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lazy-loading 'flavor' on Instance uuid c7e02002-03b8-47f5-b10e-39a5dfa4e4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 30 23:49:48 np0005603435 nova_compute[239938]: 2026-01-31 04:49:48.932 239942 DEBUG oslo_concurrency.lockutils [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.143 239942 DEBUG oslo_concurrency.lockutils [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Acquiring lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.144 239942 DEBUG oslo_concurrency.lockutils [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.144 239942 INFO nova.compute.manager [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Attaching volume 855f932f-aa38-49ce-a6ae-87ad0815fb4b to /dev/vdb
Jan 30 23:49:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Jan 30 23:49:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Jan 30 23:49:49 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Jan 30 23:49:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 200 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 171 KiB/s rd, 102 KiB/s wr, 130 op/s
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.320 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.342 239942 DEBUG os_brick.utils [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.344 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.355 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.355 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[53e15cb1-d050-485e-86c4-3f9052232067]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.357 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.364 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.365 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[5a1ea8fc-3f12-4235-b562-3e9dd65bfbdd]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.367 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.375 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.375 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[24c0f3bc-2ea9-424a-83ac-112ded88460b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.377 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[d934c34d-548c-4a56-98a7-3df2a4976a17]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.377 239942 DEBUG oslo_concurrency.processutils [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.392 239942 DEBUG oslo_concurrency.processutils [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] CMD "nvme version" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.395 239942 DEBUG os_brick.initiator.connectors.lightos [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.396 239942 DEBUG os_brick.initiator.connectors.lightos [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.396 239942 DEBUG os_brick.initiator.connectors.lightos [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.397 239942 DEBUG os_brick.utils [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] <== get_connector_properties: return (53ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:49:49 np0005603435 nova_compute[239938]: 2026-01-31 04:49:49.398 239942 DEBUG nova.virt.block_device [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Updating existing volume attachment record: 72a6d001-5d1e-42a6-bf28-da90f9ed88c3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:49:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:49:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/69309750' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:49:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:49:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/69309750' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:49:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:49:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1803801111' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.399 239942 DEBUG os_brick.encryptors [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Using volume encryption metadata '{'encryption_key_id': 'bfa69006-3375-494b-af60-aac07db8bb1c', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-855f932f-aa38-49ce-a6ae-87ad0815fb4b', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '855f932f-aa38-49ce-a6ae-87ad0815fb4b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c7e02002-03b8-47f5-b10e-39a5dfa4e4d3', 'attached_at': '', 'detached_at': '', 'volume_id': '855f932f-aa38-49ce-a6ae-87ad0815fb4b', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.409 239942 DEBUG barbicanclient.client [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.447 239942 DEBUG barbicanclient.v1.secrets [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/bfa69006-3375-494b-af60-aac07db8bb1c get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.448 239942 INFO barbicanclient.base [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Calculated Secrets uuid ref: secrets/bfa69006-3375-494b-af60-aac07db8bb1c#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.469 239942 DEBUG barbicanclient.client [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.470 239942 INFO barbicanclient.base [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Calculated Secrets uuid ref: secrets/bfa69006-3375-494b-af60-aac07db8bb1c#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.504 239942 DEBUG barbicanclient.client [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.505 239942 INFO barbicanclient.base [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Calculated Secrets uuid ref: secrets/bfa69006-3375-494b-af60-aac07db8bb1c#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.539 239942 DEBUG barbicanclient.client [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.539 239942 INFO barbicanclient.base [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Calculated Secrets uuid ref: secrets/bfa69006-3375-494b-af60-aac07db8bb1c#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.576 239942 DEBUG barbicanclient.client [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.576 239942 INFO barbicanclient.base [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Calculated Secrets uuid ref: secrets/bfa69006-3375-494b-af60-aac07db8bb1c#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.612 239942 DEBUG barbicanclient.client [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.614 239942 INFO barbicanclient.base [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Calculated Secrets uuid ref: secrets/bfa69006-3375-494b-af60-aac07db8bb1c#033[00m
Jan 30 23:49:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:49:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/492872895' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.658 239942 DEBUG barbicanclient.client [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.659 239942 INFO barbicanclient.base [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Calculated Secrets uuid ref: secrets/bfa69006-3375-494b-af60-aac07db8bb1c#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.698 239942 DEBUG barbicanclient.client [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.699 239942 INFO barbicanclient.base [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Calculated Secrets uuid ref: secrets/bfa69006-3375-494b-af60-aac07db8bb1c#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.721 239942 DEBUG barbicanclient.client [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.722 239942 INFO barbicanclient.base [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Calculated Secrets uuid ref: secrets/bfa69006-3375-494b-af60-aac07db8bb1c#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.743 239942 DEBUG barbicanclient.client [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.744 239942 INFO barbicanclient.base [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Calculated Secrets uuid ref: secrets/bfa69006-3375-494b-af60-aac07db8bb1c#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.783 239942 DEBUG barbicanclient.client [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.784 239942 INFO barbicanclient.base [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Calculated Secrets uuid ref: secrets/bfa69006-3375-494b-af60-aac07db8bb1c#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.805 239942 DEBUG barbicanclient.client [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.806 239942 INFO barbicanclient.base [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Calculated Secrets uuid ref: secrets/bfa69006-3375-494b-af60-aac07db8bb1c#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.823 239942 DEBUG barbicanclient.client [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.824 239942 INFO barbicanclient.base [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Calculated Secrets uuid ref: secrets/bfa69006-3375-494b-af60-aac07db8bb1c#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.852 239942 DEBUG barbicanclient.client [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.853 239942 INFO barbicanclient.base [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Calculated Secrets uuid ref: secrets/bfa69006-3375-494b-af60-aac07db8bb1c#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.875 239942 DEBUG barbicanclient.client [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.876 239942 INFO barbicanclient.base [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Calculated Secrets uuid ref: secrets/bfa69006-3375-494b-af60-aac07db8bb1c#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.909 239942 DEBUG barbicanclient.client [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:49:50 np0005603435 nova_compute[239938]: 2026-01-31 04:49:50.910 239942 DEBUG nova.virt.libvirt.host [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 30 23:49:50 np0005603435 nova_compute[239938]:  <usage type="volume">
Jan 30 23:49:50 np0005603435 nova_compute[239938]:    <volume>855f932f-aa38-49ce-a6ae-87ad0815fb4b</volume>
Jan 30 23:49:50 np0005603435 nova_compute[239938]:  </usage>
Jan 30 23:49:50 np0005603435 nova_compute[239938]: </secret>
Jan 30 23:49:50 np0005603435 nova_compute[239938]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 30 23:49:51 np0005603435 podman[254789]: 2026-01-31 04:49:51.108423816 +0000 UTC m=+0.067406641 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 30 23:49:51 np0005603435 nova_compute[239938]: 2026-01-31 04:49:51.135 239942 DEBUG nova.objects.instance [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lazy-loading 'flavor' on Instance uuid c7e02002-03b8-47f5-b10e-39a5dfa4e4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:49:51 np0005603435 podman[254790]: 2026-01-31 04:49:51.167471717 +0000 UTC m=+0.128434579 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127)
Jan 30 23:49:51 np0005603435 nova_compute[239938]: 2026-01-31 04:49:51.167 239942 DEBUG nova.virt.libvirt.driver [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Attempting to attach volume 855f932f-aa38-49ce-a6ae-87ad0815fb4b with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 30 23:49:51 np0005603435 nova_compute[239938]: 2026-01-31 04:49:51.171 239942 DEBUG nova.virt.libvirt.guest [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] attach device xml: <disk type="network" device="disk">
Jan 30 23:49:51 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:49:51 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-855f932f-aa38-49ce-a6ae-87ad0815fb4b">
Jan 30 23:49:51 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:49:51 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:49:51 np0005603435 nova_compute[239938]:  <auth username="openstack">
Jan 30 23:49:51 np0005603435 nova_compute[239938]:    <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:49:51 np0005603435 nova_compute[239938]:  </auth>
Jan 30 23:49:51 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:49:51 np0005603435 nova_compute[239938]:  <serial>855f932f-aa38-49ce-a6ae-87ad0815fb4b</serial>
Jan 30 23:49:51 np0005603435 nova_compute[239938]:  <encryption format="luks">
Jan 30 23:49:51 np0005603435 nova_compute[239938]:    <secret type="passphrase" uuid="3dd8bdc6-b3d8-42db-ad92-30d604ae08a5"/>
Jan 30 23:49:51 np0005603435 nova_compute[239938]:  </encryption>
Jan 30 23:49:51 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:49:51 np0005603435 nova_compute[239938]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 30 23:49:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 200 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 107 KiB/s wr, 131 op/s
Jan 30 23:49:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Jan 30 23:49:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Jan 30 23:49:52 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Jan 30 23:49:52 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:52.859 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:49:52 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:52.861 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:49:52 np0005603435 nova_compute[239938]: 2026-01-31 04:49:52.860 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:53 np0005603435 nova_compute[239938]: 2026-01-31 04:49:53.201 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 200 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 27 KiB/s wr, 120 op/s
Jan 30 23:49:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Jan 30 23:49:53 np0005603435 nova_compute[239938]: 2026-01-31 04:49:53.882 239942 DEBUG nova.virt.libvirt.driver [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:49:53 np0005603435 nova_compute[239938]: 2026-01-31 04:49:53.883 239942 DEBUG nova.virt.libvirt.driver [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:49:53 np0005603435 nova_compute[239938]: 2026-01-31 04:49:53.883 239942 DEBUG nova.virt.libvirt.driver [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:49:53 np0005603435 nova_compute[239938]: 2026-01-31 04:49:53.883 239942 DEBUG nova.virt.libvirt.driver [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] No VIF found with MAC fa:16:3e:86:31:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:49:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Jan 30 23:49:54 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Jan 30 23:49:54 np0005603435 nova_compute[239938]: 2026-01-31 04:49:54.261 239942 DEBUG oslo_concurrency.lockutils [None req-fec08efc-f3ec-4d2b-a4f0-4e1b858f1d1c fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:54 np0005603435 nova_compute[239938]: 2026-01-31 04:49:54.378 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 200 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 29 KiB/s wr, 66 op/s
Jan 30 23:49:55 np0005603435 nova_compute[239938]: 2026-01-31 04:49:55.890 239942 DEBUG oslo_concurrency.lockutils [None req-2078cb39-cb66-4aba-8551-08f098ce2080 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Acquiring lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:55 np0005603435 nova_compute[239938]: 2026-01-31 04:49:55.892 239942 DEBUG oslo_concurrency.lockutils [None req-2078cb39-cb66-4aba-8551-08f098ce2080 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:55 np0005603435 nova_compute[239938]: 2026-01-31 04:49:55.906 239942 INFO nova.compute.manager [None req-2078cb39-cb66-4aba-8551-08f098ce2080 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Detaching volume 855f932f-aa38-49ce-a6ae-87ad0815fb4b#033[00m
Jan 30 23:49:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:55.914 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:55.915 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:55.916 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:56 np0005603435 nova_compute[239938]: 2026-01-31 04:49:56.075 239942 INFO nova.virt.block_device [None req-2078cb39-cb66-4aba-8551-08f098ce2080 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Attempting to driver detach volume 855f932f-aa38-49ce-a6ae-87ad0815fb4b from mountpoint /dev/vdb#033[00m
Jan 30 23:49:56 np0005603435 nova_compute[239938]: 2026-01-31 04:49:56.188 239942 DEBUG os_brick.encryptors [None req-2078cb39-cb66-4aba-8551-08f098ce2080 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Using volume encryption metadata '{'encryption_key_id': 'bfa69006-3375-494b-af60-aac07db8bb1c', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-855f932f-aa38-49ce-a6ae-87ad0815fb4b', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '855f932f-aa38-49ce-a6ae-87ad0815fb4b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c7e02002-03b8-47f5-b10e-39a5dfa4e4d3', 'attached_at': '', 'detached_at': '', 'volume_id': '855f932f-aa38-49ce-a6ae-87ad0815fb4b', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 30 23:49:56 np0005603435 nova_compute[239938]: 2026-01-31 04:49:56.199 239942 DEBUG nova.virt.libvirt.driver [None req-2078cb39-cb66-4aba-8551-08f098ce2080 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Attempting to detach device vdb from instance c7e02002-03b8-47f5-b10e-39a5dfa4e4d3 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 30 23:49:56 np0005603435 nova_compute[239938]: 2026-01-31 04:49:56.200 239942 DEBUG nova.virt.libvirt.guest [None req-2078cb39-cb66-4aba-8551-08f098ce2080 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:49:56 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:49:56 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-855f932f-aa38-49ce-a6ae-87ad0815fb4b">
Jan 30 23:49:56 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:49:56 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:49:56 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:49:56 np0005603435 nova_compute[239938]:  <serial>855f932f-aa38-49ce-a6ae-87ad0815fb4b</serial>
Jan 30 23:49:56 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:49:56 np0005603435 nova_compute[239938]:  <encryption format="luks">
Jan 30 23:49:56 np0005603435 nova_compute[239938]:    <secret type="passphrase" uuid="3dd8bdc6-b3d8-42db-ad92-30d604ae08a5"/>
Jan 30 23:49:56 np0005603435 nova_compute[239938]:  </encryption>
Jan 30 23:49:56 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:49:56 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:49:56 np0005603435 nova_compute[239938]: 2026-01-31 04:49:56.213 239942 INFO nova.virt.libvirt.driver [None req-2078cb39-cb66-4aba-8551-08f098ce2080 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Successfully detached device vdb from instance c7e02002-03b8-47f5-b10e-39a5dfa4e4d3 from the persistent domain config.#033[00m
Jan 30 23:49:56 np0005603435 nova_compute[239938]: 2026-01-31 04:49:56.214 239942 DEBUG nova.virt.libvirt.driver [None req-2078cb39-cb66-4aba-8551-08f098ce2080 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance c7e02002-03b8-47f5-b10e-39a5dfa4e4d3 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 30 23:49:56 np0005603435 nova_compute[239938]: 2026-01-31 04:49:56.215 239942 DEBUG nova.virt.libvirt.guest [None req-2078cb39-cb66-4aba-8551-08f098ce2080 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:49:56 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:49:56 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-855f932f-aa38-49ce-a6ae-87ad0815fb4b">
Jan 30 23:49:56 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:49:56 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:49:56 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:49:56 np0005603435 nova_compute[239938]:  <serial>855f932f-aa38-49ce-a6ae-87ad0815fb4b</serial>
Jan 30 23:49:56 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:49:56 np0005603435 nova_compute[239938]:  <encryption format="luks">
Jan 30 23:49:56 np0005603435 nova_compute[239938]:    <secret type="passphrase" uuid="3dd8bdc6-b3d8-42db-ad92-30d604ae08a5"/>
Jan 30 23:49:56 np0005603435 nova_compute[239938]:  </encryption>
Jan 30 23:49:56 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:49:56 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:49:56 np0005603435 nova_compute[239938]: 2026-01-31 04:49:56.270 239942 DEBUG nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Received event <DeviceRemovedEvent: 1769834996.2703407, c7e02002-03b8-47f5-b10e-39a5dfa4e4d3 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 30 23:49:56 np0005603435 nova_compute[239938]: 2026-01-31 04:49:56.273 239942 DEBUG nova.virt.libvirt.driver [None req-2078cb39-cb66-4aba-8551-08f098ce2080 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance c7e02002-03b8-47f5-b10e-39a5dfa4e4d3 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 30 23:49:56 np0005603435 nova_compute[239938]: 2026-01-31 04:49:56.276 239942 INFO nova.virt.libvirt.driver [None req-2078cb39-cb66-4aba-8551-08f098ce2080 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Successfully detached device vdb from instance c7e02002-03b8-47f5-b10e-39a5dfa4e4d3 from the live domain config.#033[00m
Jan 30 23:49:56 np0005603435 nova_compute[239938]: 2026-01-31 04:49:56.458 239942 DEBUG nova.objects.instance [None req-2078cb39-cb66-4aba-8551-08f098ce2080 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lazy-loading 'flavor' on Instance uuid c7e02002-03b8-47f5-b10e-39a5dfa4e4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:49:56 np0005603435 nova_compute[239938]: 2026-01-31 04:49:56.495 239942 DEBUG oslo_concurrency.lockutils [None req-2078cb39-cb66-4aba-8551-08f098ce2080 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Jan 30 23:49:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Jan 30 23:49:56 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Jan 30 23:49:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:49:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Jan 30 23:49:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Jan 30 23:49:56 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Jan 30 23:49:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 200 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 7.3 KiB/s wr, 90 op/s
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.655 239942 DEBUG oslo_concurrency.lockutils [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Acquiring lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.656 239942 DEBUG oslo_concurrency.lockutils [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.656 239942 DEBUG oslo_concurrency.lockutils [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Acquiring lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.656 239942 DEBUG oslo_concurrency.lockutils [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.657 239942 DEBUG oslo_concurrency.lockutils [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.658 239942 INFO nova.compute.manager [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Terminating instance#033[00m
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.659 239942 DEBUG nova.compute.manager [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:49:57 np0005603435 kernel: tap8336bfaa-b5 (unregistering): left promiscuous mode
Jan 30 23:49:57 np0005603435 NetworkManager[49097]: <info>  [1769834997.7012] device (tap8336bfaa-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.709 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:57 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:57Z|00091|binding|INFO|Releasing lport 8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a from this chassis (sb_readonly=0)
Jan 30 23:49:57 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:57Z|00092|binding|INFO|Setting lport 8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a down in Southbound
Jan 30 23:49:57 np0005603435 ovn_controller[145670]: 2026-01-31T04:49:57Z|00093|binding|INFO|Removing iface tap8336bfaa-b5 ovn-installed in OVS
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.711 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:57.717 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:31:3f 10.100.0.12'], port_security=['fa:16:3e:86:31:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c7e02002-03b8-47f5-b10e-39a5dfa4e4d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-475d6d28-d627-470b-bb8c-79572c246996', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd42316247b96450c9011d2b8cc7fbaaf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c74fd1da-8160-4acd-871d-1657c1321987', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7c34256-5506-4cde-bee9-321cad32dece, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.719 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:57.720 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a in datapath 475d6d28-d627-470b-bb8c-79572c246996 unbound from our chassis#033[00m
Jan 30 23:49:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:57.723 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 475d6d28-d627-470b-bb8c-79572c246996, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:49:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:57.724 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe48b7e-30b8-4f26-a5bb-489c54ffabcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:57.725 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-475d6d28-d627-470b-bb8c-79572c246996 namespace which is not needed anymore#033[00m
Jan 30 23:49:57 np0005603435 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 30 23:49:57 np0005603435 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 14.712s CPU time.
Jan 30 23:49:57 np0005603435 systemd-machined[208030]: Machine qemu-9-instance-00000009 terminated.
Jan 30 23:49:57 np0005603435 neutron-haproxy-ovnmeta-475d6d28-d627-470b-bb8c-79572c246996[254739]: [NOTICE]   (254744) : haproxy version is 2.8.14-c23fe91
Jan 30 23:49:57 np0005603435 neutron-haproxy-ovnmeta-475d6d28-d627-470b-bb8c-79572c246996[254739]: [NOTICE]   (254744) : path to executable is /usr/sbin/haproxy
Jan 30 23:49:57 np0005603435 neutron-haproxy-ovnmeta-475d6d28-d627-470b-bb8c-79572c246996[254739]: [WARNING]  (254744) : Exiting Master process...
Jan 30 23:49:57 np0005603435 neutron-haproxy-ovnmeta-475d6d28-d627-470b-bb8c-79572c246996[254739]: [ALERT]    (254744) : Current worker (254746) exited with code 143 (Terminated)
Jan 30 23:49:57 np0005603435 neutron-haproxy-ovnmeta-475d6d28-d627-470b-bb8c-79572c246996[254739]: [WARNING]  (254744) : All workers exited. Exiting... (0)
Jan 30 23:49:57 np0005603435 systemd[1]: libpod-fe7ef8aa44bf6b5106a724a787abae6713009a8cfdafef0b4b68f8d6ddb35cd3.scope: Deactivated successfully.
Jan 30 23:49:57 np0005603435 podman[254882]: 2026-01-31 04:49:57.856676308 +0000 UTC m=+0.045201063 container died fe7ef8aa44bf6b5106a724a787abae6713009a8cfdafef0b4b68f8d6ddb35cd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-475d6d28-d627-470b-bb8c-79572c246996, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 30 23:49:57 np0005603435 systemd[1]: var-lib-containers-storage-overlay-c3dc1d73fa9ad0afa3c4957ab7f6aaa1f4e4968192819590257ecc329ec82a28-merged.mount: Deactivated successfully.
Jan 30 23:49:57 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe7ef8aa44bf6b5106a724a787abae6713009a8cfdafef0b4b68f8d6ddb35cd3-userdata-shm.mount: Deactivated successfully.
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.899 239942 INFO nova.virt.libvirt.driver [-] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Instance destroyed successfully.#033[00m
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.900 239942 DEBUG nova.objects.instance [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lazy-loading 'resources' on Instance uuid c7e02002-03b8-47f5-b10e-39a5dfa4e4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:49:57 np0005603435 podman[254882]: 2026-01-31 04:49:57.902209399 +0000 UTC m=+0.090734184 container cleanup fe7ef8aa44bf6b5106a724a787abae6713009a8cfdafef0b4b68f8d6ddb35cd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-475d6d28-d627-470b-bb8c-79572c246996, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 30 23:49:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.914 239942 DEBUG nova.virt.libvirt.vif [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:49:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-853286388',display_name='tempest-TestEncryptedCinderVolumes-server-853286388',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-853286388',id=9,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNVBpGbwYpcpfXjxsvtSvcDeT1FaISLwAZ/9BgPFKHGAh9eJ4D0UzsZ5XjlQ3ptcgSI3XCOH89tmHw33jJzCXQ/Onlpic/eBRza/Vw1bmd5yR4SNgC+7T6jusqef58/Bcg==',key_name='tempest-keypair-1863742156',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:49:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d42316247b96450c9011d2b8cc7fbaaf',ramdisk_id='',reservation_id='r-qr6fran1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1286074295',owner_user_name='tempest-TestEncryptedCinderVolumes-1286074295-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:49:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fc009f2d3a86499c9b2b11e334162e5e',uuid=c7e02002-03b8-47f5-b10e-39a5dfa4e4d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "address": "fa:16:3e:86:31:3f", "network": {"id": "475d6d28-d627-470b-bb8c-79572c246996", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2051303700-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d42316247b96450c9011d2b8cc7fbaaf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8336bfaa-b5", "ovs_interfaceid": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.915 239942 DEBUG nova.network.os_vif_util [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Converting VIF {"id": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "address": "fa:16:3e:86:31:3f", "network": {"id": "475d6d28-d627-470b-bb8c-79572c246996", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2051303700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d42316247b96450c9011d2b8cc7fbaaf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8336bfaa-b5", "ovs_interfaceid": "8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:49:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.916 239942 DEBUG nova.network.os_vif_util [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:31:3f,bridge_name='br-int',has_traffic_filtering=True,id=8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a,network=Network(475d6d28-d627-470b-bb8c-79572c246996),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8336bfaa-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.917 239942 DEBUG os_vif [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:31:3f,bridge_name='br-int',has_traffic_filtering=True,id=8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a,network=Network(475d6d28-d627-470b-bb8c-79572c246996),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8336bfaa-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.919 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.921 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8336bfaa-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:49:57 np0005603435 systemd[1]: libpod-conmon-fe7ef8aa44bf6b5106a724a787abae6713009a8cfdafef0b4b68f8d6ddb35cd3.scope: Deactivated successfully.
Jan 30 23:49:57 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Jan 30 23:49:57 np0005603435 podman[254922]: 2026-01-31 04:49:57.976508622 +0000 UTC m=+0.044381935 container remove fe7ef8aa44bf6b5106a724a787abae6713009a8cfdafef0b4b68f8d6ddb35cd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-475d6d28-d627-470b-bb8c-79572c246996, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:49:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:57.978 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[101e2f93-59dd-444a-bbab-6181b2e9f6e7]: (4, ('Sat Jan 31 04:49:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-475d6d28-d627-470b-bb8c-79572c246996 (fe7ef8aa44bf6b5106a724a787abae6713009a8cfdafef0b4b68f8d6ddb35cd3)\nfe7ef8aa44bf6b5106a724a787abae6713009a8cfdafef0b4b68f8d6ddb35cd3\nSat Jan 31 04:49:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-475d6d28-d627-470b-bb8c-79572c246996 (fe7ef8aa44bf6b5106a724a787abae6713009a8cfdafef0b4b68f8d6ddb35cd3)\nfe7ef8aa44bf6b5106a724a787abae6713009a8cfdafef0b4b68f8d6ddb35cd3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:57.979 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[befd940c-5bbd-419e-a7ca-2802ae0cb757]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:57.980 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap475d6d28-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.981 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:57 np0005603435 kernel: tap475d6d28-d0: left promiscuous mode
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.984 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:49:57 np0005603435 nova_compute[239938]: 2026-01-31 04:49:57.987 239942 INFO os_vif [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:31:3f,bridge_name='br-int',has_traffic_filtering=True,id=8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a,network=Network(475d6d28-d627-470b-bb8c-79572c246996),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8336bfaa-b5')#033[00m
Jan 30 23:49:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:57.999 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[09ecb31b-51e8-4cda-9105-1a4dd2a2c0d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:58.010 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[ce023b71-b3ee-406e-899f-6370d5924ad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:58.012 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[98d259c5-b9cc-4351-bccf-4c474c7e5672]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:58 np0005603435 nova_compute[239938]: 2026-01-31 04:49:58.014 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:58 np0005603435 nova_compute[239938]: 2026-01-31 04:49:58.019 239942 DEBUG nova.compute.manager [req-bbad7d1a-2cb9-4483-87ce-0b778584cb25 req-379bfe46-1e99-4874-bd6b-02fb573097f6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Received event network-vif-unplugged-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:49:58 np0005603435 nova_compute[239938]: 2026-01-31 04:49:58.019 239942 DEBUG oslo_concurrency.lockutils [req-bbad7d1a-2cb9-4483-87ce-0b778584cb25 req-379bfe46-1e99-4874-bd6b-02fb573097f6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:58 np0005603435 nova_compute[239938]: 2026-01-31 04:49:58.020 239942 DEBUG oslo_concurrency.lockutils [req-bbad7d1a-2cb9-4483-87ce-0b778584cb25 req-379bfe46-1e99-4874-bd6b-02fb573097f6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:58 np0005603435 nova_compute[239938]: 2026-01-31 04:49:58.021 239942 DEBUG oslo_concurrency.lockutils [req-bbad7d1a-2cb9-4483-87ce-0b778584cb25 req-379bfe46-1e99-4874-bd6b-02fb573097f6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:58 np0005603435 nova_compute[239938]: 2026-01-31 04:49:58.021 239942 DEBUG nova.compute.manager [req-bbad7d1a-2cb9-4483-87ce-0b778584cb25 req-379bfe46-1e99-4874-bd6b-02fb573097f6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] No waiting events found dispatching network-vif-unplugged-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:49:58 np0005603435 nova_compute[239938]: 2026-01-31 04:49:58.022 239942 DEBUG nova.compute.manager [req-bbad7d1a-2cb9-4483-87ce-0b778584cb25 req-379bfe46-1e99-4874-bd6b-02fb573097f6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Received event network-vif-unplugged-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:49:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:58.029 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f7334f11-cda1-403f-b57e-28bb3098394a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402333, 'reachable_time': 19364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254952, 'error': None, 'target': 'ovnmeta-475d6d28-d627-470b-bb8c-79572c246996', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:58 np0005603435 systemd[1]: run-netns-ovnmeta\x2d475d6d28\x2dd627\x2d470b\x2dbb8c\x2d79572c246996.mount: Deactivated successfully.
Jan 30 23:49:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:58.034 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-475d6d28-d627-470b-bb8c-79572c246996 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:49:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:58.035 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[25cfb818-2af7-43b0-b9d4-0c4d435be0ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:49:58 np0005603435 nova_compute[239938]: 2026-01-31 04:49:58.336 239942 INFO nova.virt.libvirt.driver [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Deleting instance files /var/lib/nova/instances/c7e02002-03b8-47f5-b10e-39a5dfa4e4d3_del#033[00m
Jan 30 23:49:58 np0005603435 nova_compute[239938]: 2026-01-31 04:49:58.337 239942 INFO nova.virt.libvirt.driver [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Deletion of /var/lib/nova/instances/c7e02002-03b8-47f5-b10e-39a5dfa4e4d3_del complete#033[00m
Jan 30 23:49:58 np0005603435 nova_compute[239938]: 2026-01-31 04:49:58.405 239942 INFO nova.compute.manager [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:49:58 np0005603435 nova_compute[239938]: 2026-01-31 04:49:58.406 239942 DEBUG oslo.service.loopingcall [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:49:58 np0005603435 nova_compute[239938]: 2026-01-31 04:49:58.406 239942 DEBUG nova.compute.manager [-] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:49:58 np0005603435 nova_compute[239938]: 2026-01-31 04:49:58.407 239942 DEBUG nova.network.neutron [-] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:49:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:49:58.864 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:49:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:49:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3197348994' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.161 239942 DEBUG oslo_concurrency.lockutils [None req-109f7468-7c6d-4c31-8015-f59470106498 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "3dfd6853-c0e1-446c-9f5d-097c8af910db" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.161 239942 DEBUG oslo_concurrency.lockutils [None req-109f7468-7c6d-4c31-8015-f59470106498 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.178 239942 INFO nova.compute.manager [None req-109f7468-7c6d-4c31-8015-f59470106498 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Detaching volume 2d8a5544-500e-424a-b08b-a486887dcd73#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.195 239942 DEBUG nova.network.neutron [-] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.218 239942 INFO nova.compute.manager [-] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Took 0.81 seconds to deallocate network for instance.#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.265 239942 DEBUG oslo_concurrency.lockutils [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.266 239942 DEBUG oslo_concurrency.lockutils [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:49:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 200 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 6.4 KiB/s wr, 52 op/s
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.327 239942 INFO nova.virt.block_device [None req-109f7468-7c6d-4c31-8015-f59470106498 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Attempting to driver detach volume 2d8a5544-500e-424a-b08b-a486887dcd73 from mountpoint /dev/vdb#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.340 239942 DEBUG nova.virt.libvirt.driver [None req-109f7468-7c6d-4c31-8015-f59470106498 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Attempting to detach device vdb from instance 3dfd6853-c0e1-446c-9f5d-097c8af910db from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.340 239942 DEBUG nova.virt.libvirt.guest [None req-109f7468-7c6d-4c31-8015-f59470106498 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:49:59 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:49:59 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-2d8a5544-500e-424a-b08b-a486887dcd73">
Jan 30 23:49:59 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:49:59 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:49:59 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:49:59 np0005603435 nova_compute[239938]:  <serial>2d8a5544-500e-424a-b08b-a486887dcd73</serial>
Jan 30 23:49:59 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:49:59 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:49:59 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.365 239942 DEBUG oslo_concurrency.processutils [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.386 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.392 239942 INFO nova.virt.libvirt.driver [None req-109f7468-7c6d-4c31-8015-f59470106498 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Successfully detached device vdb from instance 3dfd6853-c0e1-446c-9f5d-097c8af910db from the persistent domain config.#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.392 239942 DEBUG nova.virt.libvirt.driver [None req-109f7468-7c6d-4c31-8015-f59470106498 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3dfd6853-c0e1-446c-9f5d-097c8af910db from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.393 239942 DEBUG nova.virt.libvirt.guest [None req-109f7468-7c6d-4c31-8015-f59470106498 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:49:59 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:49:59 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-2d8a5544-500e-424a-b08b-a486887dcd73">
Jan 30 23:49:59 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:49:59 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:49:59 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:49:59 np0005603435 nova_compute[239938]:  <serial>2d8a5544-500e-424a-b08b-a486887dcd73</serial>
Jan 30 23:49:59 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:49:59 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:49:59 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.507 239942 DEBUG nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Received event <DeviceRemovedEvent: 1769834999.5072455, 3dfd6853-c0e1-446c-9f5d-097c8af910db => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.510 239942 DEBUG nova.virt.libvirt.driver [None req-109f7468-7c6d-4c31-8015-f59470106498 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3dfd6853-c0e1-446c-9f5d-097c8af910db _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.513 239942 INFO nova.virt.libvirt.driver [None req-109f7468-7c6d-4c31-8015-f59470106498 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Successfully detached device vdb from instance 3dfd6853-c0e1-446c-9f5d-097c8af910db from the live domain config.#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.658 239942 DEBUG nova.objects.instance [None req-109f7468-7c6d-4c31-8015-f59470106498 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lazy-loading 'flavor' on Instance uuid 3dfd6853-c0e1-446c-9f5d-097c8af910db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.698 239942 DEBUG oslo_concurrency.lockutils [None req-109f7468-7c6d-4c31-8015-f59470106498 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:49:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4098329872' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.918 239942 DEBUG oslo_concurrency.processutils [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:49:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.924 239942 DEBUG nova.compute.provider_tree [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.942 239942 DEBUG nova.scheduler.client.report [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:49:59 np0005603435 nova_compute[239938]: 2026-01-31 04:49:59.966 239942 DEBUG oslo_concurrency.lockutils [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:49:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Jan 30 23:49:59 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.002 239942 INFO nova.scheduler.client.report [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Deleted allocations for instance c7e02002-03b8-47f5-b10e-39a5dfa4e4d3#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.069 239942 DEBUG oslo_concurrency.lockutils [None req-3d77af26-e480-4614-aee2-238f0083bbf8 fc009f2d3a86499c9b2b11e334162e5e d42316247b96450c9011d2b8cc7fbaaf - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.102 239942 DEBUG nova.compute.manager [req-108dd897-3f13-4439-99c3-cb6cd112ac7d req-02423494-a9f9-40d0-baac-d733c9463544 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Received event network-vif-plugged-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.103 239942 DEBUG oslo_concurrency.lockutils [req-108dd897-3f13-4439-99c3-cb6cd112ac7d req-02423494-a9f9-40d0-baac-d733c9463544 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.104 239942 DEBUG oslo_concurrency.lockutils [req-108dd897-3f13-4439-99c3-cb6cd112ac7d req-02423494-a9f9-40d0-baac-d733c9463544 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.104 239942 DEBUG oslo_concurrency.lockutils [req-108dd897-3f13-4439-99c3-cb6cd112ac7d req-02423494-a9f9-40d0-baac-d733c9463544 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "c7e02002-03b8-47f5-b10e-39a5dfa4e4d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.104 239942 DEBUG nova.compute.manager [req-108dd897-3f13-4439-99c3-cb6cd112ac7d req-02423494-a9f9-40d0-baac-d733c9463544 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] No waiting events found dispatching network-vif-plugged-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.105 239942 WARNING nova.compute.manager [req-108dd897-3f13-4439-99c3-cb6cd112ac7d req-02423494-a9f9-40d0-baac-d733c9463544 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Received unexpected event network-vif-plugged-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.105 239942 DEBUG nova.compute.manager [req-108dd897-3f13-4439-99c3-cb6cd112ac7d req-02423494-a9f9-40d0-baac-d733c9463544 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Received event network-vif-deleted-8336bfaa-b5cd-4f5b-8d0b-b3aebdfb172a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.505 239942 DEBUG oslo_concurrency.lockutils [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "3dfd6853-c0e1-446c-9f5d-097c8af910db" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.506 239942 DEBUG oslo_concurrency.lockutils [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.506 239942 DEBUG oslo_concurrency.lockutils [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.506 239942 DEBUG oslo_concurrency.lockutils [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.507 239942 DEBUG oslo_concurrency.lockutils [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.508 239942 INFO nova.compute.manager [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Terminating instance#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.510 239942 DEBUG nova.compute.manager [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:50:00 np0005603435 kernel: tap84cc8fc9-7d (unregistering): left promiscuous mode
Jan 30 23:50:00 np0005603435 NetworkManager[49097]: <info>  [1769835000.5653] device (tap84cc8fc9-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:50:00 np0005603435 ovn_controller[145670]: 2026-01-31T04:50:00Z|00094|binding|INFO|Releasing lport 84cc8fc9-7d52-4528-bad3-524644ec103e from this chassis (sb_readonly=0)
Jan 30 23:50:00 np0005603435 ovn_controller[145670]: 2026-01-31T04:50:00Z|00095|binding|INFO|Setting lport 84cc8fc9-7d52-4528-bad3-524644ec103e down in Southbound
Jan 30 23:50:00 np0005603435 ovn_controller[145670]: 2026-01-31T04:50:00Z|00096|binding|INFO|Removing iface tap84cc8fc9-7d ovn-installed in OVS
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.573 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.575 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:00.580 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:71:ab 10.100.0.11'], port_security=['fa:16:3e:30:71:ab 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3dfd6853-c0e1-446c-9f5d-097c8af910db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28e37664-8d81-4a45-8e12-f0b45b43b4cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b8b11aff4b494f4eb1376cfe5754bac8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2c73b112-e396-4240-808c-5bf45e432461', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c4453b0-f040-4fe4-88f1-8a0ec8ff54c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=84cc8fc9-7d52-4528-bad3-524644ec103e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:50:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:00.581 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 84cc8fc9-7d52-4528-bad3-524644ec103e in datapath 28e37664-8d81-4a45-8e12-f0b45b43b4cf unbound from our chassis#033[00m
Jan 30 23:50:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:00.582 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28e37664-8d81-4a45-8e12-f0b45b43b4cf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:50:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:00.583 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[e4533fe7-ab40-410e-9af6-94c5b682f965]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:50:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:00.583 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf namespace which is not needed anymore#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.590 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:00 np0005603435 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 30 23:50:00 np0005603435 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 12.966s CPU time.
Jan 30 23:50:00 np0005603435 systemd-machined[208030]: Machine qemu-8-instance-00000008 terminated.
Jan 30 23:50:00 np0005603435 NetworkManager[49097]: <info>  [1769835000.7298] manager: (tap84cc8fc9-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.733 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.738 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.750 239942 INFO nova.virt.libvirt.driver [-] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Instance destroyed successfully.#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.750 239942 DEBUG nova.objects.instance [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lazy-loading 'resources' on Instance uuid 3dfd6853-c0e1-446c-9f5d-097c8af910db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.764 239942 DEBUG nova.virt.libvirt.vif [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:49:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1643995663',display_name='tempest-VolumesBackupsTest-instance-1643995663',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1643995663',id=8,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7M3Kx9HnlgPwJ3q2vcmLgKtbzv68YcGrJnBcWrW+oC+Lbh28Jv7i2/KnMnVyUUAQ/VX5n+Z+i0mqfZMAcVOh2jZJeWGMs9dMkYG6AFIpYg7M6nh0Y89qdXxvTNQOiLIg==',key_name='tempest-keypair-377641076',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:49:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b8b11aff4b494f4eb1376cfe5754bac8',ramdisk_id='',reservation_id='r-53r2xod5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-1503004541',owner_user_name='tempest-VolumesBackupsTest-1503004541-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:49:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f51271330a6d46498b473f0d2595c3ac',uuid=3dfd6853-c0e1-446c-9f5d-097c8af910db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84cc8fc9-7d52-4528-bad3-524644ec103e", "address": "fa:16:3e:30:71:ab", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84cc8fc9-7d", "ovs_interfaceid": "84cc8fc9-7d52-4528-bad3-524644ec103e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.765 239942 DEBUG nova.network.os_vif_util [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Converting VIF {"id": "84cc8fc9-7d52-4528-bad3-524644ec103e", "address": "fa:16:3e:30:71:ab", "network": {"id": "28e37664-8d81-4a45-8e12-f0b45b43b4cf", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-504768950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b11aff4b494f4eb1376cfe5754bac8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84cc8fc9-7d", "ovs_interfaceid": "84cc8fc9-7d52-4528-bad3-524644ec103e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.767 239942 DEBUG nova.network.os_vif_util [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:71:ab,bridge_name='br-int',has_traffic_filtering=True,id=84cc8fc9-7d52-4528-bad3-524644ec103e,network=Network(28e37664-8d81-4a45-8e12-f0b45b43b4cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84cc8fc9-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.768 239942 DEBUG os_vif [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:71:ab,bridge_name='br-int',has_traffic_filtering=True,id=84cc8fc9-7d52-4528-bad3-524644ec103e,network=Network(28e37664-8d81-4a45-8e12-f0b45b43b4cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84cc8fc9-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.770 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.771 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84cc8fc9-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.774 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.777 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.780 239942 INFO os_vif [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:71:ab,bridge_name='br-int',has_traffic_filtering=True,id=84cc8fc9-7d52-4528-bad3-524644ec103e,network=Network(28e37664-8d81-4a45-8e12-f0b45b43b4cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84cc8fc9-7d')#033[00m
Jan 30 23:50:00 np0005603435 neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf[254233]: [NOTICE]   (254237) : haproxy version is 2.8.14-c23fe91
Jan 30 23:50:00 np0005603435 neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf[254233]: [NOTICE]   (254237) : path to executable is /usr/sbin/haproxy
Jan 30 23:50:00 np0005603435 neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf[254233]: [WARNING]  (254237) : Exiting Master process...
Jan 30 23:50:00 np0005603435 neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf[254233]: [ALERT]    (254237) : Current worker (254239) exited with code 143 (Terminated)
Jan 30 23:50:00 np0005603435 neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf[254233]: [WARNING]  (254237) : All workers exited. Exiting... (0)
Jan 30 23:50:00 np0005603435 systemd[1]: libpod-53339cb06a99c13d0964b654b7d9bad29ba149f40fca84655ef1ff9af7a147b0.scope: Deactivated successfully.
Jan 30 23:50:00 np0005603435 podman[255005]: 2026-01-31 04:50:00.820139455 +0000 UTC m=+0.131158493 container died 53339cb06a99c13d0964b654b7d9bad29ba149f40fca84655ef1ff9af7a147b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.840 239942 DEBUG nova.compute.manager [req-af772d43-035f-4999-a0c8-758973306dc0 req-377c8465-27c7-477f-84ef-7e8a8abf91e4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Received event network-vif-unplugged-84cc8fc9-7d52-4528-bad3-524644ec103e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.840 239942 DEBUG oslo_concurrency.lockutils [req-af772d43-035f-4999-a0c8-758973306dc0 req-377c8465-27c7-477f-84ef-7e8a8abf91e4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.840 239942 DEBUG oslo_concurrency.lockutils [req-af772d43-035f-4999-a0c8-758973306dc0 req-377c8465-27c7-477f-84ef-7e8a8abf91e4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.841 239942 DEBUG oslo_concurrency.lockutils [req-af772d43-035f-4999-a0c8-758973306dc0 req-377c8465-27c7-477f-84ef-7e8a8abf91e4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.841 239942 DEBUG nova.compute.manager [req-af772d43-035f-4999-a0c8-758973306dc0 req-377c8465-27c7-477f-84ef-7e8a8abf91e4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] No waiting events found dispatching network-vif-unplugged-84cc8fc9-7d52-4528-bad3-524644ec103e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:50:00 np0005603435 nova_compute[239938]: 2026-01-31 04:50:00.841 239942 DEBUG nova.compute.manager [req-af772d43-035f-4999-a0c8-758973306dc0 req-377c8465-27c7-477f-84ef-7e8a8abf91e4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Received event network-vif-unplugged-84cc8fc9-7d52-4528-bad3-524644ec103e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:50:00 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-53339cb06a99c13d0964b654b7d9bad29ba149f40fca84655ef1ff9af7a147b0-userdata-shm.mount: Deactivated successfully.
Jan 30 23:50:00 np0005603435 systemd[1]: var-lib-containers-storage-overlay-6ad12913ffc73102d2ac6a316b02a6e339e24c41bf35486f80b4621a66c223c5-merged.mount: Deactivated successfully.
Jan 30 23:50:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Jan 30 23:50:01 np0005603435 podman[255005]: 2026-01-31 04:50:01.129097276 +0000 UTC m=+0.440116314 container cleanup 53339cb06a99c13d0964b654b7d9bad29ba149f40fca84655ef1ff9af7a147b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 30 23:50:01 np0005603435 systemd[1]: libpod-conmon-53339cb06a99c13d0964b654b7d9bad29ba149f40fca84655ef1ff9af7a147b0.scope: Deactivated successfully.
Jan 30 23:50:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Jan 30 23:50:01 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Jan 30 23:50:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 174 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 7.1 KiB/s wr, 52 op/s
Jan 30 23:50:01 np0005603435 podman[255063]: 2026-01-31 04:50:01.324582425 +0000 UTC m=+0.171716856 container remove 53339cb06a99c13d0964b654b7d9bad29ba149f40fca84655ef1ff9af7a147b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:50:01 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:01.330 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[1f32c21c-2cfe-4fc8-8f4e-fd4397301080]: (4, ('Sat Jan 31 04:50:00 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf (53339cb06a99c13d0964b654b7d9bad29ba149f40fca84655ef1ff9af7a147b0)\n53339cb06a99c13d0964b654b7d9bad29ba149f40fca84655ef1ff9af7a147b0\nSat Jan 31 04:50:01 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf (53339cb06a99c13d0964b654b7d9bad29ba149f40fca84655ef1ff9af7a147b0)\n53339cb06a99c13d0964b654b7d9bad29ba149f40fca84655ef1ff9af7a147b0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:50:01 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:01.332 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b4512037-cb0f-4a4e-851e-db336cf126bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:50:01 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:01.333 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28e37664-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:50:01 np0005603435 nova_compute[239938]: 2026-01-31 04:50:01.376 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:01 np0005603435 kernel: tap28e37664-80: left promiscuous mode
Jan 30 23:50:01 np0005603435 nova_compute[239938]: 2026-01-31 04:50:01.389 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:01 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:01.393 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[e5648c35-f61a-4fb2-873b-d1cb35b9b848]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:50:01 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:01.407 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2a075aaf-6bb3-4bbf-8abe-ee829f4148db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:50:01 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:01.408 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[1e021700-ddc5-432a-a914-bbe7e73efc9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:50:01 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:01.423 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[63a2eeb2-74cb-4aa2-8473-a566d6f12661]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 401165, 'reachable_time': 29381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255078, 'error': None, 'target': 'ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:50:01 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:01.425 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-28e37664-8d81-4a45-8e12-f0b45b43b4cf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:50:01 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:01.425 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[9d354bcc-91cc-4b18-9367-8eeb757fed50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:50:01 np0005603435 systemd[1]: run-netns-ovnmeta\x2d28e37664\x2d8d81\x2d4a45\x2d8e12\x2df0b45b43b4cf.mount: Deactivated successfully.
Jan 30 23:50:01 np0005603435 nova_compute[239938]: 2026-01-31 04:50:01.576 239942 INFO nova.virt.libvirt.driver [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Deleting instance files /var/lib/nova/instances/3dfd6853-c0e1-446c-9f5d-097c8af910db_del#033[00m
Jan 30 23:50:01 np0005603435 nova_compute[239938]: 2026-01-31 04:50:01.577 239942 INFO nova.virt.libvirt.driver [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Deletion of /var/lib/nova/instances/3dfd6853-c0e1-446c-9f5d-097c8af910db_del complete#033[00m
Jan 30 23:50:01 np0005603435 nova_compute[239938]: 2026-01-31 04:50:01.626 239942 INFO nova.compute.manager [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:50:01 np0005603435 nova_compute[239938]: 2026-01-31 04:50:01.627 239942 DEBUG oslo.service.loopingcall [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:50:01 np0005603435 nova_compute[239938]: 2026-01-31 04:50:01.627 239942 DEBUG nova.compute.manager [-] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:50:01 np0005603435 nova_compute[239938]: 2026-01-31 04:50:01.628 239942 DEBUG nova.network.neutron [-] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:50:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:50:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:50:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1963757974' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:50:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:50:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1963757974' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:50:02 np0005603435 nova_compute[239938]: 2026-01-31 04:50:02.928 239942 DEBUG nova.compute.manager [req-dea0d0b5-64a8-4948-a883-c069030f2a88 req-01ae002e-dcc1-418d-8664-3457acfcf8a8 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Received event network-vif-plugged-84cc8fc9-7d52-4528-bad3-524644ec103e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:50:02 np0005603435 nova_compute[239938]: 2026-01-31 04:50:02.928 239942 DEBUG oslo_concurrency.lockutils [req-dea0d0b5-64a8-4948-a883-c069030f2a88 req-01ae002e-dcc1-418d-8664-3457acfcf8a8 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:50:02 np0005603435 nova_compute[239938]: 2026-01-31 04:50:02.929 239942 DEBUG oslo_concurrency.lockutils [req-dea0d0b5-64a8-4948-a883-c069030f2a88 req-01ae002e-dcc1-418d-8664-3457acfcf8a8 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:50:02 np0005603435 nova_compute[239938]: 2026-01-31 04:50:02.929 239942 DEBUG oslo_concurrency.lockutils [req-dea0d0b5-64a8-4948-a883-c069030f2a88 req-01ae002e-dcc1-418d-8664-3457acfcf8a8 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:02 np0005603435 nova_compute[239938]: 2026-01-31 04:50:02.929 239942 DEBUG nova.compute.manager [req-dea0d0b5-64a8-4948-a883-c069030f2a88 req-01ae002e-dcc1-418d-8664-3457acfcf8a8 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] No waiting events found dispatching network-vif-plugged-84cc8fc9-7d52-4528-bad3-524644ec103e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:50:02 np0005603435 nova_compute[239938]: 2026-01-31 04:50:02.929 239942 WARNING nova.compute.manager [req-dea0d0b5-64a8-4948-a883-c069030f2a88 req-01ae002e-dcc1-418d-8664-3457acfcf8a8 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Received unexpected event network-vif-plugged-84cc8fc9-7d52-4528-bad3-524644ec103e for instance with vm_state active and task_state deleting.#033[00m
Jan 30 23:50:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:50:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1598979160' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:50:03 np0005603435 nova_compute[239938]: 2026-01-31 04:50:03.048 239942 DEBUG nova.network.neutron [-] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:50:03 np0005603435 nova_compute[239938]: 2026-01-31 04:50:03.068 239942 INFO nova.compute.manager [-] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Took 1.44 seconds to deallocate network for instance.#033[00m
Jan 30 23:50:03 np0005603435 nova_compute[239938]: 2026-01-31 04:50:03.122 239942 DEBUG oslo_concurrency.lockutils [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:50:03 np0005603435 nova_compute[239938]: 2026-01-31 04:50:03.123 239942 DEBUG oslo_concurrency.lockutils [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:50:03 np0005603435 nova_compute[239938]: 2026-01-31 04:50:03.130 239942 DEBUG nova.compute.manager [req-7e2c67cd-8b00-43f9-b283-98afb1036ed2 req-8e3a7c04-8eb5-453c-97e5-51d26159aecd c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Received event network-vif-deleted-84cc8fc9-7d52-4528-bad3-524644ec103e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:50:03 np0005603435 nova_compute[239938]: 2026-01-31 04:50:03.164 239942 DEBUG oslo_concurrency.processutils [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:50:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 79 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 11 KiB/s wr, 183 op/s
Jan 30 23:50:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:50:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2795061482' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:50:03 np0005603435 nova_compute[239938]: 2026-01-31 04:50:03.668 239942 DEBUG oslo_concurrency.processutils [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:50:03 np0005603435 nova_compute[239938]: 2026-01-31 04:50:03.677 239942 DEBUG nova.compute.provider_tree [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:50:03 np0005603435 nova_compute[239938]: 2026-01-31 04:50:03.702 239942 DEBUG nova.scheduler.client.report [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:50:03 np0005603435 nova_compute[239938]: 2026-01-31 04:50:03.747 239942 DEBUG oslo_concurrency.lockutils [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:03 np0005603435 nova_compute[239938]: 2026-01-31 04:50:03.780 239942 INFO nova.scheduler.client.report [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Deleted allocations for instance 3dfd6853-c0e1-446c-9f5d-097c8af910db#033[00m
Jan 30 23:50:03 np0005603435 nova_compute[239938]: 2026-01-31 04:50:03.867 239942 DEBUG oslo_concurrency.lockutils [None req-f35496ec-36de-4607-bd0c-4facad6fe284 f51271330a6d46498b473f0d2595c3ac b8b11aff4b494f4eb1376cfe5754bac8 - - default default] Lock "3dfd6853-c0e1-446c-9f5d-097c8af910db" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Jan 30 23:50:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Jan 30 23:50:04 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Jan 30 23:50:04 np0005603435 nova_compute[239938]: 2026-01-31 04:50:04.383 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Jan 30 23:50:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Jan 30 23:50:05 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Jan 30 23:50:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 54 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 15 KiB/s wr, 227 op/s
Jan 30 23:50:05 np0005603435 nova_compute[239938]: 2026-01-31 04:50:05.338 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:05 np0005603435 nova_compute[239938]: 2026-01-31 04:50:05.447 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:05 np0005603435 nova_compute[239938]: 2026-01-31 04:50:05.802 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Jan 30 23:50:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Jan 30 23:50:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Jan 30 23:50:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:50:06
Jan 30 23:50:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:50:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:50:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.meta', 'default.rgw.control', 'vms', 'backups', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data']
Jan 30 23:50:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:50:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:50:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Jan 30 23:50:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Jan 30 23:50:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:50:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:50:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:50:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:50:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:50:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:50:07 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Jan 30 23:50:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:50:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:50:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:50:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:50:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:50:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:50:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:50:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:50:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:50:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 8.5 KiB/s wr, 116 op/s
Jan 30 23:50:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:50:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:50:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/67061690' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:50:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Jan 30 23:50:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Jan 30 23:50:08 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Jan 30 23:50:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 5.7 KiB/s wr, 89 op/s
Jan 30 23:50:09 np0005603435 nova_compute[239938]: 2026-01-31 04:50:09.385 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Jan 30 23:50:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Jan 30 23:50:10 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Jan 30 23:50:10 np0005603435 nova_compute[239938]: 2026-01-31 04:50:10.806 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:50:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:50:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:50:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:50:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Jan 30 23:50:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Jan 30 23:50:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 KiB/s wr, 44 op/s
Jan 30 23:50:11 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Jan 30 23:50:11 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:50:11 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:50:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:50:12 np0005603435 podman[255317]: 2026-01-31 04:50:12.096146174 +0000 UTC m=+0.023516329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:50:12 np0005603435 podman[255317]: 2026-01-31 04:50:12.314141606 +0000 UTC m=+0.241511741 container create abcc4d9d650d9eb88bbb667eb9992ea54ce913388dc98c329231b2f5a5f9e229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 30 23:50:12 np0005603435 systemd[1]: Started libpod-conmon-abcc4d9d650d9eb88bbb667eb9992ea54ce913388dc98c329231b2f5a5f9e229.scope.
Jan 30 23:50:12 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:50:12 np0005603435 podman[255317]: 2026-01-31 04:50:12.754372263 +0000 UTC m=+0.681742438 container init abcc4d9d650d9eb88bbb667eb9992ea54ce913388dc98c329231b2f5a5f9e229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bartik, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 30 23:50:12 np0005603435 podman[255317]: 2026-01-31 04:50:12.763052909 +0000 UTC m=+0.690423014 container start abcc4d9d650d9eb88bbb667eb9992ea54ce913388dc98c329231b2f5a5f9e229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bartik, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 30 23:50:12 np0005603435 systemd[1]: libpod-abcc4d9d650d9eb88bbb667eb9992ea54ce913388dc98c329231b2f5a5f9e229.scope: Deactivated successfully.
Jan 30 23:50:12 np0005603435 frosty_bartik[255333]: 167 167
Jan 30 23:50:12 np0005603435 conmon[255333]: conmon abcc4d9d650d9eb88bbb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-abcc4d9d650d9eb88bbb667eb9992ea54ce913388dc98c329231b2f5a5f9e229.scope/container/memory.events
Jan 30 23:50:12 np0005603435 podman[255317]: 2026-01-31 04:50:12.88702155 +0000 UTC m=+0.814391715 container attach abcc4d9d650d9eb88bbb667eb9992ea54ce913388dc98c329231b2f5a5f9e229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 30 23:50:12 np0005603435 podman[255317]: 2026-01-31 04:50:12.88994858 +0000 UTC m=+0.817318695 container died abcc4d9d650d9eb88bbb667eb9992ea54ce913388dc98c329231b2f5a5f9e229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:50:12 np0005603435 nova_compute[239938]: 2026-01-31 04:50:12.892 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769834997.8876803, c7e02002-03b8-47f5-b10e-39a5dfa4e4d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:50:12 np0005603435 nova_compute[239938]: 2026-01-31 04:50:12.893 239942 INFO nova.compute.manager [-] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:50:12 np0005603435 nova_compute[239938]: 2026-01-31 04:50:12.913 239942 DEBUG nova.compute.manager [None req-bf3fb7d5-9f91-4b5f-8811-a05c98a84a3c - - - - - -] [instance: c7e02002-03b8-47f5-b10e-39a5dfa4e4d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:50:13 np0005603435 systemd[1]: var-lib-containers-storage-overlay-e8a34ab7a65757af347d2a8debbad97ca539da6de909bedb40c327c80aaa8b51-merged.mount: Deactivated successfully.
Jan 30 23:50:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 4.5 KiB/s wr, 145 op/s
Jan 30 23:50:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:50:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2127417510' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:50:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:50:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2127417510' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:50:13 np0005603435 podman[255317]: 2026-01-31 04:50:13.600159662 +0000 UTC m=+1.527529747 container remove abcc4d9d650d9eb88bbb667eb9992ea54ce913388dc98c329231b2f5a5f9e229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bartik, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:50:13 np0005603435 systemd[1]: libpod-conmon-abcc4d9d650d9eb88bbb667eb9992ea54ce913388dc98c329231b2f5a5f9e229.scope: Deactivated successfully.
Jan 30 23:50:13 np0005603435 podman[255359]: 2026-01-31 04:50:13.784450204 +0000 UTC m=+0.033070815 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:50:13 np0005603435 podman[255359]: 2026-01-31 04:50:13.888449452 +0000 UTC m=+0.137070023 container create c633922d1976a865ec84954e4fe0ddf4bd1a71bb9941f445a951a1e8f62bb2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:50:13 np0005603435 nova_compute[239938]: 2026-01-31 04:50:13.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:50:14 np0005603435 systemd[1]: Started libpod-conmon-c633922d1976a865ec84954e4fe0ddf4bd1a71bb9941f445a951a1e8f62bb2a0.scope.
Jan 30 23:50:14 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:50:14 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573c27fc58df40a9e387c95815066080f7fb56088ccae1cd23841eb5734c8050/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:14 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573c27fc58df40a9e387c95815066080f7fb56088ccae1cd23841eb5734c8050/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:14 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573c27fc58df40a9e387c95815066080f7fb56088ccae1cd23841eb5734c8050/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:14 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573c27fc58df40a9e387c95815066080f7fb56088ccae1cd23841eb5734c8050/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:14 np0005603435 nova_compute[239938]: 2026-01-31 04:50:14.434 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:14 np0005603435 podman[255359]: 2026-01-31 04:50:14.475051312 +0000 UTC m=+0.723671903 container init c633922d1976a865ec84954e4fe0ddf4bd1a71bb9941f445a951a1e8f62bb2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lalande, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:50:14 np0005603435 podman[255359]: 2026-01-31 04:50:14.484178268 +0000 UTC m=+0.732798839 container start c633922d1976a865ec84954e4fe0ddf4bd1a71bb9941f445a951a1e8f62bb2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lalande, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 30 23:50:14 np0005603435 podman[255359]: 2026-01-31 04:50:14.63343443 +0000 UTC m=+0.882055041 container attach c633922d1976a865ec84954e4fe0ddf4bd1a71bb9941f445a951a1e8f62bb2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 30 23:50:14 np0005603435 nova_compute[239938]: 2026-01-31 04:50:14.882 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:50:14 np0005603435 nova_compute[239938]: 2026-01-31 04:50:14.883 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:50:14 np0005603435 nova_compute[239938]: 2026-01-31 04:50:14.898 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:50:14 np0005603435 nova_compute[239938]: 2026-01-31 04:50:14.899 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:50:14 np0005603435 nova_compute[239938]: 2026-01-31 04:50:14.899 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:50:14 np0005603435 nova_compute[239938]: 2026-01-31 04:50:14.912 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 30 23:50:14 np0005603435 nova_compute[239938]: 2026-01-31 04:50:14.912 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:50:14 np0005603435 nova_compute[239938]: 2026-01-31 04:50:14.913 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:50:14 np0005603435 nova_compute[239938]: 2026-01-31 04:50:14.913 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:50:14 np0005603435 nova_compute[239938]: 2026-01-31 04:50:14.914 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:50:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Jan 30 23:50:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]: [
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:    {
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:        "available": false,
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:        "being_replaced": false,
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:        "ceph_device_lvm": false,
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:        "lsm_data": {},
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:        "lvs": [],
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:        "path": "/dev/sr0",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:        "rejected_reasons": [
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "Insufficient space (<5GB)",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "Has a FileSystem"
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:        ],
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:        "sys_api": {
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "actuators": null,
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "device_nodes": [
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:                "sr0"
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            ],
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "devname": "sr0",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "human_readable_size": "482.00 KB",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "id_bus": "ata",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "model": "QEMU DVD-ROM",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "nr_requests": "2",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "parent": "/dev/sr0",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "partitions": {},
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "path": "/dev/sr0",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "removable": "1",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "rev": "2.5+",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "ro": "0",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "rotational": "1",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "sas_address": "",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "sas_device_handle": "",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "scheduler_mode": "mq-deadline",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "sectors": 0,
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "sectorsize": "2048",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "size": 493568.0,
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "support_discard": "2048",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "type": "disk",
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:            "vendor": "QEMU"
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:        }
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]:    }
Jan 30 23:50:15 np0005603435 awesome_lalande[255375]: ]
Jan 30 23:50:15 np0005603435 systemd[1]: libpod-c633922d1976a865ec84954e4fe0ddf4bd1a71bb9941f445a951a1e8f62bb2a0.scope: Deactivated successfully.
Jan 30 23:50:15 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Jan 30 23:50:15 np0005603435 podman[256172]: 2026-01-31 04:50:15.164213233 +0000 UTC m=+0.025254850 container died c633922d1976a865ec84954e4fe0ddf4bd1a71bb9941f445a951a1e8f62bb2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lalande, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:50:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 4.8 KiB/s wr, 159 op/s
Jan 30 23:50:15 np0005603435 nova_compute[239938]: 2026-01-31 04:50:15.747 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835000.7464688, 3dfd6853-c0e1-446c-9f5d-097c8af910db => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:50:15 np0005603435 nova_compute[239938]: 2026-01-31 04:50:15.748 239942 INFO nova.compute.manager [-] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:50:15 np0005603435 systemd[1]: var-lib-containers-storage-overlay-573c27fc58df40a9e387c95815066080f7fb56088ccae1cd23841eb5734c8050-merged.mount: Deactivated successfully.
Jan 30 23:50:15 np0005603435 nova_compute[239938]: 2026-01-31 04:50:15.781 239942 DEBUG nova.compute.manager [None req-67f36c32-351c-4984-a16e-bc40b5178f72 - - - - - -] [instance: 3dfd6853-c0e1-446c-9f5d-097c8af910db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:50:15 np0005603435 nova_compute[239938]: 2026-01-31 04:50:15.809 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:15 np0005603435 podman[256172]: 2026-01-31 04:50:15.99850413 +0000 UTC m=+0.859545767 container remove c633922d1976a865ec84954e4fe0ddf4bd1a71bb9941f445a951a1e8f62bb2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lalande, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:50:16 np0005603435 systemd[1]: libpod-conmon-c633922d1976a865ec84954e4fe0ddf4bd1a71bb9941f445a951a1e8f62bb2a0.scope: Deactivated successfully.
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:50:16 np0005603435 nova_compute[239938]: 2026-01-31 04:50:16.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:50:16 np0005603435 nova_compute[239938]: 2026-01-31 04:50:16.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:50:16 np0005603435 nova_compute[239938]: 2026-01-31 04:50:16.910 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:50:16 np0005603435 nova_compute[239938]: 2026-01-31 04:50:16.910 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:50:16 np0005603435 nova_compute[239938]: 2026-01-31 04:50:16.910 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:16 np0005603435 nova_compute[239938]: 2026-01-31 04:50:16.910 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:50:16 np0005603435 nova_compute[239938]: 2026-01-31 04:50:16.911 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1271524165' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:50:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1271524165' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:50:16 np0005603435 podman[256252]: 2026-01-31 04:50:16.990913448 +0000 UTC m=+0.047532779 container create f065efe7b157701bda4f8ec2cdba9cb88c1ed675632d6214e453dfb5820b9edd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_torvalds, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:50:17 np0005603435 systemd[1]: Started libpod-conmon-f065efe7b157701bda4f8ec2cdba9cb88c1ed675632d6214e453dfb5820b9edd.scope.
Jan 30 23:50:17 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:50:17 np0005603435 podman[256252]: 2026-01-31 04:50:17.044927739 +0000 UTC m=+0.101547110 container init f065efe7b157701bda4f8ec2cdba9cb88c1ed675632d6214e453dfb5820b9edd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_torvalds, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:50:17 np0005603435 podman[256252]: 2026-01-31 04:50:17.056883973 +0000 UTC m=+0.113503314 container start f065efe7b157701bda4f8ec2cdba9cb88c1ed675632d6214e453dfb5820b9edd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 30 23:50:17 np0005603435 podman[256252]: 2026-01-31 04:50:17.060744675 +0000 UTC m=+0.117364006 container attach f065efe7b157701bda4f8ec2cdba9cb88c1ed675632d6214e453dfb5820b9edd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_torvalds, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 30 23:50:17 np0005603435 systemd[1]: libpod-f065efe7b157701bda4f8ec2cdba9cb88c1ed675632d6214e453dfb5820b9edd.scope: Deactivated successfully.
Jan 30 23:50:17 np0005603435 podman[256252]: 2026-01-31 04:50:16.966182631 +0000 UTC m=+0.022801972 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:50:17 np0005603435 competent_torvalds[256268]: 167 167
Jan 30 23:50:17 np0005603435 conmon[256268]: conmon f065efe7b157701bda4f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f065efe7b157701bda4f8ec2cdba9cb88c1ed675632d6214e453dfb5820b9edd.scope/container/memory.events
Jan 30 23:50:17 np0005603435 podman[256252]: 2026-01-31 04:50:17.063765886 +0000 UTC m=+0.120385217 container died f065efe7b157701bda4f8ec2cdba9cb88c1ed675632d6214e453dfb5820b9edd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_torvalds, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:50:17 np0005603435 systemd[1]: var-lib-containers-storage-overlay-516151533d0170cf21cc6174c717bfea96de5d99c78c5ad2df894cc78a8e0072-merged.mount: Deactivated successfully.
Jan 30 23:50:17 np0005603435 podman[256252]: 2026-01-31 04:50:17.111640422 +0000 UTC m=+0.168259743 container remove f065efe7b157701bda4f8ec2cdba9cb88c1ed675632d6214e453dfb5820b9edd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 30 23:50:17 np0005603435 systemd[1]: libpod-conmon-f065efe7b157701bda4f8ec2cdba9cb88c1ed675632d6214e453dfb5820b9edd.scope: Deactivated successfully.
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.7655138275450526e-06 of space, bias 1.0, pg target 0.0011296541482635157 quantized to 32 (current 32)
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 4.937175724153493e-06 of space, bias 1.0, pg target 0.0014811527172460478 quantized to 32 (current 32)
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.5126856973862566e-06 of space, bias 1.0, pg target 0.000453805709215877 quantized to 32 (current 32)
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665341081101967 of space, bias 1.0, pg target 0.199960232433059 quantized to 32 (current 32)
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.189817553410471e-07 of space, bias 4.0, pg target 0.0008627781064092565 quantized to 16 (current 16)
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:50:17 np0005603435 podman[256312]: 2026-01-31 04:50:17.232673964 +0000 UTC m=+0.044052826 container create 62716d476a43d3f35492d25f0a499d6e3c2c47fb046ff44a3661b045f1bb2e16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_pascal, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:50:17 np0005603435 systemd[1]: Started libpod-conmon-62716d476a43d3f35492d25f0a499d6e3c2c47fb046ff44a3661b045f1bb2e16.scope.
Jan 30 23:50:17 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:50:17 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9c5c0be5a8d77d5508cf99ce5b98f86b5b52031cdce3b7635c3846ca1f7ab5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:17 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9c5c0be5a8d77d5508cf99ce5b98f86b5b52031cdce3b7635c3846ca1f7ab5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:17 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9c5c0be5a8d77d5508cf99ce5b98f86b5b52031cdce3b7635c3846ca1f7ab5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:17 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9c5c0be5a8d77d5508cf99ce5b98f86b5b52031cdce3b7635c3846ca1f7ab5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:17 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9c5c0be5a8d77d5508cf99ce5b98f86b5b52031cdce3b7635c3846ca1f7ab5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:17 np0005603435 podman[256312]: 2026-01-31 04:50:17.304095529 +0000 UTC m=+0.115474371 container init 62716d476a43d3f35492d25f0a499d6e3c2c47fb046ff44a3661b045f1bb2e16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_pascal, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:50:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 6.2 KiB/s wr, 252 op/s
Jan 30 23:50:17 np0005603435 podman[256312]: 2026-01-31 04:50:17.211612925 +0000 UTC m=+0.022991867 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:50:17 np0005603435 podman[256312]: 2026-01-31 04:50:17.318913111 +0000 UTC m=+0.130291973 container start 62716d476a43d3f35492d25f0a499d6e3c2c47fb046ff44a3661b045f1bb2e16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_pascal, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:50:17 np0005603435 podman[256312]: 2026-01-31 04:50:17.322734671 +0000 UTC m=+0.134113503 container attach 62716d476a43d3f35492d25f0a499d6e3c2c47fb046ff44a3661b045f1bb2e16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_pascal, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:50:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:50:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:50:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:50:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:50:17 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:50:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:50:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/824228494' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:50:17 np0005603435 nova_compute[239938]: 2026-01-31 04:50:17.444 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:50:17 np0005603435 nova_compute[239938]: 2026-01-31 04:50:17.634 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:50:17 np0005603435 nova_compute[239938]: 2026-01-31 04:50:17.636 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4586MB free_disk=59.988055176101625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:50:17 np0005603435 nova_compute[239938]: 2026-01-31 04:50:17.637 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:50:17 np0005603435 nova_compute[239938]: 2026-01-31 04:50:17.638 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:50:17 np0005603435 hardcore_pascal[256329]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:50:17 np0005603435 hardcore_pascal[256329]: --> All data devices are unavailable
Jan 30 23:50:17 np0005603435 nova_compute[239938]: 2026-01-31 04:50:17.746 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:50:17 np0005603435 nova_compute[239938]: 2026-01-31 04:50:17.747 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:50:17 np0005603435 systemd[1]: libpod-62716d476a43d3f35492d25f0a499d6e3c2c47fb046ff44a3661b045f1bb2e16.scope: Deactivated successfully.
Jan 30 23:50:17 np0005603435 podman[256312]: 2026-01-31 04:50:17.767793122 +0000 UTC m=+0.579171954 container died 62716d476a43d3f35492d25f0a499d6e3c2c47fb046ff44a3661b045f1bb2e16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:50:17 np0005603435 nova_compute[239938]: 2026-01-31 04:50:17.777 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:50:17 np0005603435 systemd[1]: var-lib-containers-storage-overlay-2f9c5c0be5a8d77d5508cf99ce5b98f86b5b52031cdce3b7635c3846ca1f7ab5-merged.mount: Deactivated successfully.
Jan 30 23:50:17 np0005603435 podman[256312]: 2026-01-31 04:50:17.81281273 +0000 UTC m=+0.624191592 container remove 62716d476a43d3f35492d25f0a499d6e3c2c47fb046ff44a3661b045f1bb2e16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_pascal, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:50:17 np0005603435 systemd[1]: libpod-conmon-62716d476a43d3f35492d25f0a499d6e3c2c47fb046ff44a3661b045f1bb2e16.scope: Deactivated successfully.
Jan 30 23:50:18 np0005603435 podman[256445]: 2026-01-31 04:50:18.234182189 +0000 UTC m=+0.060647321 container create 0796b280b3c70f3e3b51062100380f0b6e4f4ec3078e034814e27f292a505587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_greider, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:50:18 np0005603435 systemd[1]: Started libpod-conmon-0796b280b3c70f3e3b51062100380f0b6e4f4ec3078e034814e27f292a505587.scope.
Jan 30 23:50:18 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:50:18 np0005603435 podman[256445]: 2026-01-31 04:50:18.209201006 +0000 UTC m=+0.035666128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:50:18 np0005603435 podman[256445]: 2026-01-31 04:50:18.310120349 +0000 UTC m=+0.136585531 container init 0796b280b3c70f3e3b51062100380f0b6e4f4ec3078e034814e27f292a505587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:50:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:50:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4215525054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:50:18 np0005603435 podman[256445]: 2026-01-31 04:50:18.318518379 +0000 UTC m=+0.144983501 container start 0796b280b3c70f3e3b51062100380f0b6e4f4ec3078e034814e27f292a505587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 30 23:50:18 np0005603435 podman[256445]: 2026-01-31 04:50:18.323997969 +0000 UTC m=+0.150463091 container attach 0796b280b3c70f3e3b51062100380f0b6e4f4ec3078e034814e27f292a505587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_greider, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 30 23:50:18 np0005603435 gracious_greider[256463]: 167 167
Jan 30 23:50:18 np0005603435 systemd[1]: libpod-0796b280b3c70f3e3b51062100380f0b6e4f4ec3078e034814e27f292a505587.scope: Deactivated successfully.
Jan 30 23:50:18 np0005603435 podman[256445]: 2026-01-31 04:50:18.326009596 +0000 UTC m=+0.152474718 container died 0796b280b3c70f3e3b51062100380f0b6e4f4ec3078e034814e27f292a505587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_greider, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:50:18 np0005603435 nova_compute[239938]: 2026-01-31 04:50:18.351 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:50:18 np0005603435 nova_compute[239938]: 2026-01-31 04:50:18.361 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:50:18 np0005603435 systemd[1]: var-lib-containers-storage-overlay-cf3bb49c3a297c2457f017f1dd18560ca8a79dd0954a5a3309380b27069b9954-merged.mount: Deactivated successfully.
Jan 30 23:50:18 np0005603435 podman[256445]: 2026-01-31 04:50:18.383172043 +0000 UTC m=+0.209637125 container remove 0796b280b3c70f3e3b51062100380f0b6e4f4ec3078e034814e27f292a505587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_greider, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:50:18 np0005603435 nova_compute[239938]: 2026-01-31 04:50:18.383 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:50:18 np0005603435 systemd[1]: libpod-conmon-0796b280b3c70f3e3b51062100380f0b6e4f4ec3078e034814e27f292a505587.scope: Deactivated successfully.
Jan 30 23:50:18 np0005603435 nova_compute[239938]: 2026-01-31 04:50:18.407 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:50:18 np0005603435 nova_compute[239938]: 2026-01-31 04:50:18.407 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:18 np0005603435 podman[256489]: 2026-01-31 04:50:18.535876466 +0000 UTC m=+0.048353858 container create 95d5716f87f25af7cbda319c9b2f32e3dd5698af671e4a625759d697ec8657d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 30 23:50:18 np0005603435 systemd[1]: Started libpod-conmon-95d5716f87f25af7cbda319c9b2f32e3dd5698af671e4a625759d697ec8657d1.scope.
Jan 30 23:50:18 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:50:18 np0005603435 podman[256489]: 2026-01-31 04:50:18.511497588 +0000 UTC m=+0.023975000 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:50:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6da2e2d8163b83bd1cfd438acee4af0e60e8877350e98eafcc1ae7059b1e43d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6da2e2d8163b83bd1cfd438acee4af0e60e8877350e98eafcc1ae7059b1e43d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6da2e2d8163b83bd1cfd438acee4af0e60e8877350e98eafcc1ae7059b1e43d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:18 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6da2e2d8163b83bd1cfd438acee4af0e60e8877350e98eafcc1ae7059b1e43d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:18 np0005603435 podman[256489]: 2026-01-31 04:50:18.630217905 +0000 UTC m=+0.142695297 container init 95d5716f87f25af7cbda319c9b2f32e3dd5698af671e4a625759d697ec8657d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 30 23:50:18 np0005603435 podman[256489]: 2026-01-31 04:50:18.644624507 +0000 UTC m=+0.157101909 container start 95d5716f87f25af7cbda319c9b2f32e3dd5698af671e4a625759d697ec8657d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:50:18 np0005603435 podman[256489]: 2026-01-31 04:50:18.64896763 +0000 UTC m=+0.161445022 container attach 95d5716f87f25af7cbda319c9b2f32e3dd5698af671e4a625759d697ec8657d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bhaskara, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]: {
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:    "0": [
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:        {
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "devices": [
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "/dev/loop3"
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            ],
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "lv_name": "ceph_lv0",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "lv_size": "21470642176",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "name": "ceph_lv0",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "tags": {
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.cluster_name": "ceph",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.crush_device_class": "",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.encrypted": "0",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.objectstore": "bluestore",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.osd_id": "0",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.type": "block",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.vdo": "0",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.with_tpm": "0"
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            },
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "type": "block",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "vg_name": "ceph_vg0"
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:        }
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:    ],
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:    "1": [
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:        {
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "devices": [
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "/dev/loop4"
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            ],
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "lv_name": "ceph_lv1",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "lv_size": "21470642176",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "name": "ceph_lv1",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "tags": {
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.cluster_name": "ceph",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.crush_device_class": "",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.encrypted": "0",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.objectstore": "bluestore",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.osd_id": "1",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.type": "block",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.vdo": "0",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.with_tpm": "0"
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            },
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "type": "block",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "vg_name": "ceph_vg1"
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:        }
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:    ],
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:    "2": [
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:        {
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "devices": [
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "/dev/loop5"
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            ],
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "lv_name": "ceph_lv2",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "lv_size": "21470642176",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "name": "ceph_lv2",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "tags": {
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.cluster_name": "ceph",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.crush_device_class": "",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.encrypted": "0",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.objectstore": "bluestore",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.osd_id": "2",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.type": "block",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.vdo": "0",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:                "ceph.with_tpm": "0"
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            },
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "type": "block",
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:            "vg_name": "ceph_vg2"
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:        }
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]:    ]
Jan 30 23:50:18 np0005603435 sleepy_bhaskara[256505]: }
Jan 30 23:50:18 np0005603435 systemd[1]: libpod-95d5716f87f25af7cbda319c9b2f32e3dd5698af671e4a625759d697ec8657d1.scope: Deactivated successfully.
Jan 30 23:50:18 np0005603435 podman[256489]: 2026-01-31 04:50:18.964412225 +0000 UTC m=+0.476889677 container died 95d5716f87f25af7cbda319c9b2f32e3dd5698af671e4a625759d697ec8657d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:50:19 np0005603435 systemd[1]: var-lib-containers-storage-overlay-e6da2e2d8163b83bd1cfd438acee4af0e60e8877350e98eafcc1ae7059b1e43d-merged.mount: Deactivated successfully.
Jan 30 23:50:19 np0005603435 podman[256489]: 2026-01-31 04:50:19.09738612 +0000 UTC m=+0.609863522 container remove 95d5716f87f25af7cbda319c9b2f32e3dd5698af671e4a625759d697ec8657d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 30 23:50:19 np0005603435 systemd[1]: libpod-conmon-95d5716f87f25af7cbda319c9b2f32e3dd5698af671e4a625759d697ec8657d1.scope: Deactivated successfully.
Jan 30 23:50:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 129 KiB/s rd, 4.6 KiB/s wr, 189 op/s
Jan 30 23:50:19 np0005603435 nova_compute[239938]: 2026-01-31 04:50:19.436 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:19 np0005603435 podman[256590]: 2026-01-31 04:50:19.612273467 +0000 UTC m=+0.059671347 container create e57e54482f2ddac64a27cbd394958e60252f9bde7d6ee435a6b1981f58f4e053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:50:19 np0005603435 systemd[1]: Started libpod-conmon-e57e54482f2ddac64a27cbd394958e60252f9bde7d6ee435a6b1981f58f4e053.scope.
Jan 30 23:50:19 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:50:19 np0005603435 podman[256590]: 2026-01-31 04:50:19.588134695 +0000 UTC m=+0.035532625 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:50:19 np0005603435 podman[256590]: 2026-01-31 04:50:19.695313068 +0000 UTC m=+0.142710998 container init e57e54482f2ddac64a27cbd394958e60252f9bde7d6ee435a6b1981f58f4e053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:50:19 np0005603435 podman[256590]: 2026-01-31 04:50:19.702746534 +0000 UTC m=+0.150144424 container start e57e54482f2ddac64a27cbd394958e60252f9bde7d6ee435a6b1981f58f4e053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 30 23:50:19 np0005603435 podman[256590]: 2026-01-31 04:50:19.707114758 +0000 UTC m=+0.154512658 container attach e57e54482f2ddac64a27cbd394958e60252f9bde7d6ee435a6b1981f58f4e053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noyce, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:50:19 np0005603435 strange_noyce[256606]: 167 167
Jan 30 23:50:19 np0005603435 systemd[1]: libpod-e57e54482f2ddac64a27cbd394958e60252f9bde7d6ee435a6b1981f58f4e053.scope: Deactivated successfully.
Jan 30 23:50:19 np0005603435 podman[256590]: 2026-01-31 04:50:19.710722363 +0000 UTC m=+0.158120243 container died e57e54482f2ddac64a27cbd394958e60252f9bde7d6ee435a6b1981f58f4e053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 30 23:50:19 np0005603435 systemd[1]: var-lib-containers-storage-overlay-1523ddda6fb0e1daa2835b939062d8d09d238762649a08f3c3944d8b88c06b05-merged.mount: Deactivated successfully.
Jan 30 23:50:19 np0005603435 podman[256590]: 2026-01-31 04:50:19.756410038 +0000 UTC m=+0.203807918 container remove e57e54482f2ddac64a27cbd394958e60252f9bde7d6ee435a6b1981f58f4e053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 30 23:50:19 np0005603435 systemd[1]: libpod-conmon-e57e54482f2ddac64a27cbd394958e60252f9bde7d6ee435a6b1981f58f4e053.scope: Deactivated successfully.
Jan 30 23:50:19 np0005603435 podman[256631]: 2026-01-31 04:50:19.9520499 +0000 UTC m=+0.071391665 container create 84930cb84baca7913755173e0d10c70704ad0c0282815ff6477aaa79e8847f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_proskuriakova, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 30 23:50:20 np0005603435 systemd[1]: Started libpod-conmon-84930cb84baca7913755173e0d10c70704ad0c0282815ff6477aaa79e8847f55.scope.
Jan 30 23:50:20 np0005603435 podman[256631]: 2026-01-31 04:50:19.911710573 +0000 UTC m=+0.031052348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:50:20 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:50:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ba4c342293cc282755c5e9539acfd458c0d648c74421f829cdd84699517839/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ba4c342293cc282755c5e9539acfd458c0d648c74421f829cdd84699517839/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ba4c342293cc282755c5e9539acfd458c0d648c74421f829cdd84699517839/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ba4c342293cc282755c5e9539acfd458c0d648c74421f829cdd84699517839/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:20 np0005603435 podman[256631]: 2026-01-31 04:50:20.051011118 +0000 UTC m=+0.170352943 container init 84930cb84baca7913755173e0d10c70704ad0c0282815ff6477aaa79e8847f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_proskuriakova, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 30 23:50:20 np0005603435 podman[256631]: 2026-01-31 04:50:20.06628889 +0000 UTC m=+0.185630665 container start 84930cb84baca7913755173e0d10c70704ad0c0282815ff6477aaa79e8847f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 30 23:50:20 np0005603435 podman[256631]: 2026-01-31 04:50:20.07091682 +0000 UTC m=+0.190258585 container attach 84930cb84baca7913755173e0d10c70704ad0c0282815ff6477aaa79e8847f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_proskuriakova, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:50:20 np0005603435 lvm[256727]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:50:20 np0005603435 lvm[256727]: VG ceph_vg1 finished
Jan 30 23:50:20 np0005603435 lvm[256726]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:50:20 np0005603435 lvm[256726]: VG ceph_vg0 finished
Jan 30 23:50:20 np0005603435 lvm[256729]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:50:20 np0005603435 lvm[256729]: VG ceph_vg2 finished
Jan 30 23:50:20 np0005603435 brave_proskuriakova[256648]: {}
Jan 30 23:50:20 np0005603435 systemd[1]: libpod-84930cb84baca7913755173e0d10c70704ad0c0282815ff6477aaa79e8847f55.scope: Deactivated successfully.
Jan 30 23:50:20 np0005603435 systemd[1]: libpod-84930cb84baca7913755173e0d10c70704ad0c0282815ff6477aaa79e8847f55.scope: Consumed 1.131s CPU time.
Jan 30 23:50:20 np0005603435 podman[256631]: 2026-01-31 04:50:20.796311093 +0000 UTC m=+0.915652858 container died 84930cb84baca7913755173e0d10c70704ad0c0282815ff6477aaa79e8847f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_proskuriakova, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 30 23:50:20 np0005603435 nova_compute[239938]: 2026-01-31 04:50:20.812 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:20 np0005603435 systemd[1]: var-lib-containers-storage-overlay-93ba4c342293cc282755c5e9539acfd458c0d648c74421f829cdd84699517839-merged.mount: Deactivated successfully.
Jan 30 23:50:20 np0005603435 podman[256631]: 2026-01-31 04:50:20.844121207 +0000 UTC m=+0.963462972 container remove 84930cb84baca7913755173e0d10c70704ad0c0282815ff6477aaa79e8847f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_proskuriakova, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:50:20 np0005603435 systemd[1]: libpod-conmon-84930cb84baca7913755173e0d10c70704ad0c0282815ff6477aaa79e8847f55.scope: Deactivated successfully.
Jan 30 23:50:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:50:20 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:50:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:50:20 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:50:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 1.9 KiB/s wr, 114 op/s
Jan 30 23:50:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:50:21 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:50:21 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:50:22 np0005603435 podman[256770]: 2026-01-31 04:50:22.125662085 +0000 UTC m=+0.090904118 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 30 23:50:22 np0005603435 podman[256771]: 2026-01-31 04:50:22.145322412 +0000 UTC m=+0.105199018 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 30 23:50:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:50:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3396494181' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:50:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 2.7 KiB/s wr, 137 op/s
Jan 30 23:50:23 np0005603435 nova_compute[239938]: 2026-01-31 04:50:23.407 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:50:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Jan 30 23:50:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Jan 30 23:50:24 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Jan 30 23:50:24 np0005603435 nova_compute[239938]: 2026-01-31 04:50:24.450 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Jan 30 23:50:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Jan 30 23:50:25 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Jan 30 23:50:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 606 KiB/s rd, 1.9 KiB/s wr, 60 op/s
Jan 30 23:50:25 np0005603435 nova_compute[239938]: 2026-01-31 04:50:25.816 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:50:26.950971) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835026951016, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1672, "num_deletes": 274, "total_data_size": 2278208, "memory_usage": 2316224, "flush_reason": "Manual Compaction"}
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835026968522, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 2204457, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23694, "largest_seqno": 25365, "table_properties": {"data_size": 2196407, "index_size": 4800, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17534, "raw_average_key_size": 20, "raw_value_size": 2179895, "raw_average_value_size": 2567, "num_data_blocks": 212, "num_entries": 849, "num_filter_entries": 849, "num_deletions": 274, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769834935, "oldest_key_time": 1769834935, "file_creation_time": 1769835026, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 17614 microseconds, and 6769 cpu microseconds.
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:50:26.968589) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 2204457 bytes OK
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:50:26.968613) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:50:26.973342) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:50:26.973372) EVENT_LOG_v1 {"time_micros": 1769835026973363, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:50:26.973397) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 2270538, prev total WAL file size 2270538, number of live WAL files 2.
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:50:26.974342) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353130' seq:72057594037927935, type:22 .. '6C6F676D00373638' seq:0, type:0; will stop at (end)
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(2152KB)], [53(8946KB)]
Jan 30 23:50:26 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835026974402, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11365608, "oldest_snapshot_seqno": -1}
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5393 keys, 11265028 bytes, temperature: kUnknown
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835027070829, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 11265028, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11222181, "index_size": 28273, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 133508, "raw_average_key_size": 24, "raw_value_size": 11118392, "raw_average_value_size": 2061, "num_data_blocks": 1173, "num_entries": 5393, "num_filter_entries": 5393, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769835026, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:50:27.071473) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 11265028 bytes
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:50:27.073061) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.6 rd, 116.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 8.7 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(10.3) write-amplify(5.1) OK, records in: 5951, records dropped: 558 output_compression: NoCompression
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:50:27.073100) EVENT_LOG_v1 {"time_micros": 1769835027073084, "job": 28, "event": "compaction_finished", "compaction_time_micros": 96629, "compaction_time_cpu_micros": 33200, "output_level": 6, "num_output_files": 1, "total_output_size": 11265028, "num_input_records": 5951, "num_output_records": 5393, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835027073767, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835027075486, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:50:26.974179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:50:27.075567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:50:27.075572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:50:27.075574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:50:27.075575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:50:27.075577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:50:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.7 KiB/s wr, 81 op/s
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3244139655' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:50:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3244139655' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:50:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.5 KiB/s wr, 46 op/s
Jan 30 23:50:29 np0005603435 nova_compute[239938]: 2026-01-31 04:50:29.452 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:30 np0005603435 nova_compute[239938]: 2026-01-31 04:50:30.820 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:50:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1431004371' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:50:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 61 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.1 MiB/s wr, 75 op/s
Jan 30 23:50:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Jan 30 23:50:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Jan 30 23:50:31 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Jan 30 23:50:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:50:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Jan 30 23:50:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Jan 30 23:50:32 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Jan 30 23:50:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:50:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4079956436' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:50:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:50:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4079956436' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:50:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 88 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.4 MiB/s wr, 141 op/s
Jan 30 23:50:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Jan 30 23:50:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Jan 30 23:50:33 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Jan 30 23:50:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:50:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1992818103' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:50:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:50:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1992818103' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:50:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:50:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/123350778' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:50:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:50:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/123350778' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:50:34 np0005603435 nova_compute[239938]: 2026-01-31 04:50:34.454 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 88 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 3.6 MiB/s wr, 167 op/s
Jan 30 23:50:35 np0005603435 nova_compute[239938]: 2026-01-31 04:50:35.495 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:50:35 np0005603435 nova_compute[239938]: 2026-01-31 04:50:35.496 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:50:35 np0005603435 nova_compute[239938]: 2026-01-31 04:50:35.511 239942 DEBUG nova.compute.manager [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:50:35 np0005603435 nova_compute[239938]: 2026-01-31 04:50:35.784 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:50:35 np0005603435 nova_compute[239938]: 2026-01-31 04:50:35.785 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:50:35 np0005603435 nova_compute[239938]: 2026-01-31 04:50:35.795 239942 DEBUG nova.virt.hardware [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:50:35 np0005603435 nova_compute[239938]: 2026-01-31 04:50:35.796 239942 INFO nova.compute.claims [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:50:35 np0005603435 nova_compute[239938]: 2026-01-31 04:50:35.822 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:35 np0005603435 nova_compute[239938]: 2026-01-31 04:50:35.875 239942 DEBUG nova.scheduler.client.report [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Refreshing inventories for resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 30 23:50:35 np0005603435 nova_compute[239938]: 2026-01-31 04:50:35.897 239942 DEBUG nova.scheduler.client.report [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Updating ProviderTree inventory for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 30 23:50:35 np0005603435 nova_compute[239938]: 2026-01-31 04:50:35.897 239942 DEBUG nova.compute.provider_tree [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Updating inventory in ProviderTree for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 30 23:50:35 np0005603435 nova_compute[239938]: 2026-01-31 04:50:35.910 239942 DEBUG nova.scheduler.client.report [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Refreshing aggregate associations for resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 30 23:50:35 np0005603435 nova_compute[239938]: 2026-01-31 04:50:35.933 239942 DEBUG nova.scheduler.client.report [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Refreshing trait associations for resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc, traits: COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SVM,COMPUTE_NODE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 30 23:50:35 np0005603435 nova_compute[239938]: 2026-01-31 04:50:35.968 239942 DEBUG oslo_concurrency.processutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:50:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:50:36 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3025595965' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.581 239942 DEBUG oslo_concurrency.processutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.586 239942 DEBUG nova.compute.provider_tree [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.603 239942 DEBUG nova.scheduler.client.report [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.627 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.628 239942 DEBUG nova.compute.manager [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:50:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:50:36 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/581936415' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.673 239942 DEBUG nova.compute.manager [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.674 239942 DEBUG nova.network.neutron [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.692 239942 INFO nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.712 239942 DEBUG nova.compute.manager [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.805 239942 DEBUG nova.compute.manager [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.807 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.808 239942 INFO nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Creating image(s)#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.837 239942 DEBUG nova.storage.rbd_utils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] rbd image d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.873 239942 DEBUG nova.storage.rbd_utils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] rbd image d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.907 239942 DEBUG nova.storage.rbd_utils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] rbd image d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.910 239942 DEBUG oslo_concurrency.processutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.925 239942 DEBUG nova.policy [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bb6c7d8ff99f43cb94670fd4096d652a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f926501f874644cf9ffda466c84e710b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:50:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:50:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Jan 30 23:50:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Jan 30 23:50:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:50:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:50:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:50:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:50:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:50:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:50:36 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.961 239942 DEBUG oslo_concurrency.processutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.963 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.964 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.964 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.993 239942 DEBUG nova.storage.rbd_utils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] rbd image d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:50:36 np0005603435 nova_compute[239938]: 2026-01-31 04:50:36.997 239942 DEBUG oslo_concurrency.processutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:50:37 np0005603435 nova_compute[239938]: 2026-01-31 04:50:37.281 239942 DEBUG oslo_concurrency.processutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:50:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 94 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 175 KiB/s rd, 1.2 MiB/s wr, 242 op/s
Jan 30 23:50:37 np0005603435 nova_compute[239938]: 2026-01-31 04:50:37.335 239942 DEBUG nova.storage.rbd_utils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] resizing rbd image d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 30 23:50:37 np0005603435 nova_compute[239938]: 2026-01-31 04:50:37.407 239942 DEBUG nova.objects.instance [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lazy-loading 'migration_context' on Instance uuid d99b6e7d-0d41-4261-8dc8-687109c9a0fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:50:37 np0005603435 nova_compute[239938]: 2026-01-31 04:50:37.422 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 30 23:50:37 np0005603435 nova_compute[239938]: 2026-01-31 04:50:37.422 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Ensure instance console log exists: /var/lib/nova/instances/d99b6e7d-0d41-4261-8dc8-687109c9a0fa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:50:37 np0005603435 nova_compute[239938]: 2026-01-31 04:50:37.423 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:50:37 np0005603435 nova_compute[239938]: 2026-01-31 04:50:37.423 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:50:37 np0005603435 nova_compute[239938]: 2026-01-31 04:50:37.424 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:37 np0005603435 nova_compute[239938]: 2026-01-31 04:50:37.838 239942 DEBUG nova.network.neutron [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Successfully created port: 23c441d0-6579-44b1-a27f-a3856db44b73 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:50:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Jan 30 23:50:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Jan 30 23:50:38 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Jan 30 23:50:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:50:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/913296099' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:50:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:50:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/913296099' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:50:38 np0005603435 nova_compute[239938]: 2026-01-31 04:50:38.867 239942 DEBUG nova.network.neutron [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Successfully updated port: 23c441d0-6579-44b1-a27f-a3856db44b73 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:50:38 np0005603435 nova_compute[239938]: 2026-01-31 04:50:38.903 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:50:38 np0005603435 nova_compute[239938]: 2026-01-31 04:50:38.904 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquired lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:50:38 np0005603435 nova_compute[239938]: 2026-01-31 04:50:38.904 239942 DEBUG nova.network.neutron [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.012 239942 DEBUG nova.compute.manager [req-1672a825-fce9-47e6-b555-7402233c758f req-d3d09860-d949-43f1-84ab-76951f01b201 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received event network-changed-23c441d0-6579-44b1-a27f-a3856db44b73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.013 239942 DEBUG nova.compute.manager [req-1672a825-fce9-47e6-b555-7402233c758f req-d3d09860-d949-43f1-84ab-76951f01b201 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Refreshing instance network info cache due to event network-changed-23c441d0-6579-44b1-a27f-a3856db44b73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.013 239942 DEBUG oslo_concurrency.lockutils [req-1672a825-fce9-47e6-b555-7402233c758f req-d3d09860-d949-43f1-84ab-76951f01b201 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.069 239942 DEBUG nova.network.neutron [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:50:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 94 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 280 KiB/s wr, 185 op/s
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.456 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:50:39 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4263241242' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:50:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:50:39 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4263241242' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.816 239942 DEBUG nova.network.neutron [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Updating instance_info_cache with network_info: [{"id": "23c441d0-6579-44b1-a27f-a3856db44b73", "address": "fa:16:3e:33:65:c1", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23c441d0-65", "ovs_interfaceid": "23c441d0-6579-44b1-a27f-a3856db44b73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.836 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Releasing lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.837 239942 DEBUG nova.compute.manager [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Instance network_info: |[{"id": "23c441d0-6579-44b1-a27f-a3856db44b73", "address": "fa:16:3e:33:65:c1", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23c441d0-65", "ovs_interfaceid": "23c441d0-6579-44b1-a27f-a3856db44b73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.837 239942 DEBUG oslo_concurrency.lockutils [req-1672a825-fce9-47e6-b555-7402233c758f req-d3d09860-d949-43f1-84ab-76951f01b201 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.838 239942 DEBUG nova.network.neutron [req-1672a825-fce9-47e6-b555-7402233c758f req-d3d09860-d949-43f1-84ab-76951f01b201 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Refreshing network info cache for port 23c441d0-6579-44b1-a27f-a3856db44b73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.842 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Start _get_guest_xml network_info=[{"id": "23c441d0-6579-44b1-a27f-a3856db44b73", "address": "fa:16:3e:33:65:c1", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23c441d0-65", "ovs_interfaceid": "23c441d0-6579-44b1-a27f-a3856db44b73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'guest_format': None, 'image_id': 'bf004ad8-fb70-4caa-9170-9f02e22d687d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.847 239942 WARNING nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.858 239942 DEBUG nova.virt.libvirt.host [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.859 239942 DEBUG nova.virt.libvirt.host [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.862 239942 DEBUG nova.virt.libvirt.host [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.863 239942 DEBUG nova.virt.libvirt.host [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.864 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.864 239942 DEBUG nova.virt.hardware [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.865 239942 DEBUG nova.virt.hardware [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.865 239942 DEBUG nova.virt.hardware [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.866 239942 DEBUG nova.virt.hardware [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.866 239942 DEBUG nova.virt.hardware [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.867 239942 DEBUG nova.virt.hardware [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.867 239942 DEBUG nova.virt.hardware [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.867 239942 DEBUG nova.virt.hardware [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.868 239942 DEBUG nova.virt.hardware [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.868 239942 DEBUG nova.virt.hardware [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.869 239942 DEBUG nova.virt.hardware [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:50:39 np0005603435 nova_compute[239938]: 2026-01-31 04:50:39.873 239942 DEBUG oslo_concurrency.processutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:50:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Jan 30 23:50:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Jan 30 23:50:40 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Jan 30 23:50:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:50:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/170054955' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:50:40 np0005603435 nova_compute[239938]: 2026-01-31 04:50:40.427 239942 DEBUG oslo_concurrency.processutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.067 239942 DEBUG nova.storage.rbd_utils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] rbd image d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.072 239942 DEBUG oslo_concurrency.processutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.083 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 109 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 1.1 MiB/s wr, 181 op/s
Jan 30 23:50:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:50:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4062457809' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.632 239942 DEBUG oslo_concurrency.processutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.633 239942 DEBUG nova.virt.libvirt.vif [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:50:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1521065022',display_name='tempest-TestStampPattern-server-1521065022',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1521065022',id=10,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE36VHJ+yy2SXlSY6zQF7e9BvcMYc8SPWqyVX2ZxItgDCfKt1gLcAFRAPVxsIPrChTqlOOAcxm0TrregMrTGHoD8jXmVh+9yf3UY3pMaZlSN/M9091Lc3gRO27izGQve6Q==',key_name='tempest-TestStampPattern-1698214235',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f926501f874644cf9ffda466c84e710b',ramdisk_id='',reservation_id='r-ao1tdelu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-567815244',owner_user_name='tempest-TestStampPattern-567815244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:50:36Z,user_data=None,user_id='bb6c7d8ff99f43cb94670fd4096d652a',uuid=d99b6e7d-0d41-4261-8dc8-687109c9a0fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23c441d0-6579-44b1-a27f-a3856db44b73", "address": "fa:16:3e:33:65:c1", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23c441d0-65", "ovs_interfaceid": "23c441d0-6579-44b1-a27f-a3856db44b73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.633 239942 DEBUG nova.network.os_vif_util [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Converting VIF {"id": "23c441d0-6579-44b1-a27f-a3856db44b73", "address": "fa:16:3e:33:65:c1", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23c441d0-65", "ovs_interfaceid": "23c441d0-6579-44b1-a27f-a3856db44b73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.634 239942 DEBUG nova.network.os_vif_util [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:65:c1,bridge_name='br-int',has_traffic_filtering=True,id=23c441d0-6579-44b1-a27f-a3856db44b73,network=Network(55d16559-9723-4f0a-a23e-90d04ca1bb05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23c441d0-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.635 239942 DEBUG nova.objects.instance [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lazy-loading 'pci_devices' on Instance uuid d99b6e7d-0d41-4261-8dc8-687109c9a0fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.649 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  <uuid>d99b6e7d-0d41-4261-8dc8-687109c9a0fa</uuid>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  <name>instance-0000000a</name>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <nova:name>tempest-TestStampPattern-server-1521065022</nova:name>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:50:39</nova:creationTime>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:        <nova:user uuid="bb6c7d8ff99f43cb94670fd4096d652a">tempest-TestStampPattern-567815244-project-member</nova:user>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:        <nova:project uuid="f926501f874644cf9ffda466c84e710b">tempest-TestStampPattern-567815244</nova:project>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <nova:root type="image" uuid="bf004ad8-fb70-4caa-9170-9f02e22d687d"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:        <nova:port uuid="23c441d0-6579-44b1-a27f-a3856db44b73">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <entry name="serial">d99b6e7d-0d41-4261-8dc8-687109c9a0fa</entry>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <entry name="uuid">d99b6e7d-0d41-4261-8dc8-687109c9a0fa</entry>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk.config">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:33:65:c1"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <target dev="tap23c441d0-65"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/d99b6e7d-0d41-4261-8dc8-687109c9a0fa/console.log" append="off"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:50:41 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:50:41 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:50:41 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:50:41 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.650 239942 DEBUG nova.compute.manager [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Preparing to wait for external event network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.650 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.651 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.651 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.652 239942 DEBUG nova.virt.libvirt.vif [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:50:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1521065022',display_name='tempest-TestStampPattern-server-1521065022',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1521065022',id=10,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE36VHJ+yy2SXlSY6zQF7e9BvcMYc8SPWqyVX2ZxItgDCfKt1gLcAFRAPVxsIPrChTqlOOAcxm0TrregMrTGHoD8jXmVh+9yf3UY3pMaZlSN/M9091Lc3gRO27izGQve6Q==',key_name='tempest-TestStampPattern-1698214235',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f926501f874644cf9ffda466c84e710b',ramdisk_id='',reservation_id='r-ao1tdelu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-567815244',owner_user_name='tempest-TestStampPattern-567815244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:50:36Z,user_data=None,user_id='bb6c7d8ff99f43cb94670fd4096d652a',uuid=d99b6e7d-0d41-4261-8dc8-687109c9a0fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23c441d0-6579-44b1-a27f-a3856db44b73", "address": "fa:16:3e:33:65:c1", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23c441d0-65", "ovs_interfaceid": "23c441d0-6579-44b1-a27f-a3856db44b73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.652 239942 DEBUG nova.network.os_vif_util [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Converting VIF {"id": "23c441d0-6579-44b1-a27f-a3856db44b73", "address": "fa:16:3e:33:65:c1", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23c441d0-65", "ovs_interfaceid": "23c441d0-6579-44b1-a27f-a3856db44b73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.653 239942 DEBUG nova.network.os_vif_util [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:65:c1,bridge_name='br-int',has_traffic_filtering=True,id=23c441d0-6579-44b1-a27f-a3856db44b73,network=Network(55d16559-9723-4f0a-a23e-90d04ca1bb05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23c441d0-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.653 239942 DEBUG os_vif [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:65:c1,bridge_name='br-int',has_traffic_filtering=True,id=23c441d0-6579-44b1-a27f-a3856db44b73,network=Network(55d16559-9723-4f0a-a23e-90d04ca1bb05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23c441d0-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.654 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.654 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.655 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.659 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.659 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23c441d0-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.659 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap23c441d0-65, col_values=(('external_ids', {'iface-id': '23c441d0-6579-44b1-a27f-a3856db44b73', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:65:c1', 'vm-uuid': 'd99b6e7d-0d41-4261-8dc8-687109c9a0fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.661 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:41 np0005603435 NetworkManager[49097]: <info>  [1769835041.6629] manager: (tap23c441d0-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.663 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.669 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.669 239942 INFO os_vif [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:65:c1,bridge_name='br-int',has_traffic_filtering=True,id=23c441d0-6579-44b1-a27f-a3856db44b73,network=Network(55d16559-9723-4f0a-a23e-90d04ca1bb05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23c441d0-65')#033[00m
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.727 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.728 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.729 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] No VIF found with MAC fa:16:3e:33:65:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.730 239942 INFO nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Using config drive
Jan 30 23:50:41 np0005603435 nova_compute[239938]: 2026-01-31 04:50:41.764 239942 DEBUG nova.storage.rbd_utils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] rbd image d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 30 23:50:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:50:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Jan 30 23:50:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Jan 30 23:50:42 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Jan 30 23:50:42 np0005603435 nova_compute[239938]: 2026-01-31 04:50:42.613 239942 INFO nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Creating config drive at /var/lib/nova/instances/d99b6e7d-0d41-4261-8dc8-687109c9a0fa/disk.config
Jan 30 23:50:42 np0005603435 nova_compute[239938]: 2026-01-31 04:50:42.617 239942 DEBUG oslo_concurrency.processutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d99b6e7d-0d41-4261-8dc8-687109c9a0fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpszgx5kg1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:50:42 np0005603435 nova_compute[239938]: 2026-01-31 04:50:42.750 239942 DEBUG oslo_concurrency.processutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d99b6e7d-0d41-4261-8dc8-687109c9a0fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpszgx5kg1" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:50:42 np0005603435 nova_compute[239938]: 2026-01-31 04:50:42.778 239942 DEBUG nova.storage.rbd_utils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] rbd image d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 30 23:50:42 np0005603435 nova_compute[239938]: 2026-01-31 04:50:42.784 239942 DEBUG oslo_concurrency.processutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d99b6e7d-0d41-4261-8dc8-687109c9a0fa/disk.config d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:50:42 np0005603435 nova_compute[239938]: 2026-01-31 04:50:42.909 239942 DEBUG nova.network.neutron [req-1672a825-fce9-47e6-b555-7402233c758f req-d3d09860-d949-43f1-84ab-76951f01b201 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Updated VIF entry in instance network info cache for port 23c441d0-6579-44b1-a27f-a3856db44b73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 30 23:50:42 np0005603435 nova_compute[239938]: 2026-01-31 04:50:42.910 239942 DEBUG nova.network.neutron [req-1672a825-fce9-47e6-b555-7402233c758f req-d3d09860-d949-43f1-84ab-76951f01b201 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Updating instance_info_cache with network_info: [{"id": "23c441d0-6579-44b1-a27f-a3856db44b73", "address": "fa:16:3e:33:65:c1", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23c441d0-65", "ovs_interfaceid": "23c441d0-6579-44b1-a27f-a3856db44b73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 30 23:50:42 np0005603435 nova_compute[239938]: 2026-01-31 04:50:42.933 239942 DEBUG oslo_concurrency.lockutils [req-1672a825-fce9-47e6-b555-7402233c758f req-d3d09860-d949-43f1-84ab-76951f01b201 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 30 23:50:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:50:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2346029764' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:50:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:50:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2346029764' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:50:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 134 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 3.3 MiB/s wr, 171 op/s
Jan 30 23:50:43 np0005603435 nova_compute[239938]: 2026-01-31 04:50:43.559 239942 DEBUG oslo_concurrency.processutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d99b6e7d-0d41-4261-8dc8-687109c9a0fa/disk.config d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.776s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:50:43 np0005603435 nova_compute[239938]: 2026-01-31 04:50:43.560 239942 INFO nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Deleting local config drive /var/lib/nova/instances/d99b6e7d-0d41-4261-8dc8-687109c9a0fa/disk.config because it was imported into RBD.
Jan 30 23:50:43 np0005603435 kernel: tap23c441d0-65: entered promiscuous mode
Jan 30 23:50:43 np0005603435 NetworkManager[49097]: <info>  [1769835043.6046] manager: (tap23c441d0-65): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Jan 30 23:50:43 np0005603435 ovn_controller[145670]: 2026-01-31T04:50:43Z|00097|binding|INFO|Claiming lport 23c441d0-6579-44b1-a27f-a3856db44b73 for this chassis.
Jan 30 23:50:43 np0005603435 ovn_controller[145670]: 2026-01-31T04:50:43Z|00098|binding|INFO|23c441d0-6579-44b1-a27f-a3856db44b73: Claiming fa:16:3e:33:65:c1 10.100.0.5
Jan 30 23:50:43 np0005603435 nova_compute[239938]: 2026-01-31 04:50:43.605 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:50:43 np0005603435 nova_compute[239938]: 2026-01-31 04:50:43.609 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:50:43 np0005603435 nova_compute[239938]: 2026-01-31 04:50:43.612 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.620 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:65:c1 10.100.0.5'], port_security=['fa:16:3e:33:65:c1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd99b6e7d-0d41-4261-8dc8-687109c9a0fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f926501f874644cf9ffda466c84e710b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05382d3e-edd0-4646-aff2-95f9f0df0d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e385d9e-e365-4760-9c59-b6cbbb99eaf1, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=23c441d0-6579-44b1-a27f-a3856db44b73) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.621 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 23c441d0-6579-44b1-a27f-a3856db44b73 in datapath 55d16559-9723-4f0a-a23e-90d04ca1bb05 bound to our chassis
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.622 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55d16559-9723-4f0a-a23e-90d04ca1bb05
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.632 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[766963ae-eece-4445-addc-51f24ae4ddd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.633 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap55d16559-91 in ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.635 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap55d16559-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.635 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4ef0c5-c58c-4665-b0e3-3ca5debd9eb0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.635 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[d922e95b-5be5-459b-b3cb-3797d87a236b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:50:43 np0005603435 systemd-udevd[257141]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:50:43 np0005603435 systemd-machined[208030]: New machine qemu-10-instance-0000000a.
Jan 30 23:50:43 np0005603435 nova_compute[239938]: 2026-01-31 04:50:43.646 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:50:43 np0005603435 NetworkManager[49097]: <info>  [1769835043.6484] device (tap23c441d0-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:50:43 np0005603435 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Jan 30 23:50:43 np0005603435 NetworkManager[49097]: <info>  [1769835043.6490] device (tap23c441d0-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.649 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[0912bbaf-6f56-4940-8c62-214e25e48409]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:50:43 np0005603435 ovn_controller[145670]: 2026-01-31T04:50:43Z|00099|binding|INFO|Setting lport 23c441d0-6579-44b1-a27f-a3856db44b73 ovn-installed in OVS
Jan 30 23:50:43 np0005603435 ovn_controller[145670]: 2026-01-31T04:50:43Z|00100|binding|INFO|Setting lport 23c441d0-6579-44b1-a27f-a3856db44b73 up in Southbound
Jan 30 23:50:43 np0005603435 nova_compute[239938]: 2026-01-31 04:50:43.651 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.661 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[03a9c170-0be0-4a53-8755-e2153dccb64f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.688 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e571bd-60c2-4878-b44b-badff28ec83a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:50:43 np0005603435 NetworkManager[49097]: <info>  [1769835043.7010] manager: (tap55d16559-90): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.700 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[81487e64-5215-407c-9ab7-f720470e558b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:50:43 np0005603435 systemd-udevd[257145]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.724 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6c1879-824d-4a99-b34d-02fb7a6d4eeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.728 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb2b196-7049-46e1-81b6-c8da710e19c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:50:43 np0005603435 NetworkManager[49097]: <info>  [1769835043.7476] device (tap55d16559-90): carrier: link connected
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.751 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[1281f8aa-e603-450c-b696-e931a6c93d23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.768 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[32db4590-1145-452d-9878-5b5a880b1df4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55d16559-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:06:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409814, 'reachable_time': 30203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257174, 'error': None, 'target': 'ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.783 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[273cf94e-eb1d-40fb-820d-9b867a7f9a84]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:64d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409814, 'tstamp': 409814}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257175, 'error': None, 'target': 'ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.800 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[8c931d4b-76ef-4db7-af38-1d7af7ee627c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55d16559-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:06:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409814, 'reachable_time': 30203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257176, 'error': None, 'target': 'ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.830 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b3576776-4aa4-403a-a885-5320976ebcff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.892 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[458b7816-1f1b-4fe6-a601-8674ec9dacea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.893 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55d16559-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.894 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.894 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55d16559-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 30 23:50:43 np0005603435 nova_compute[239938]: 2026-01-31 04:50:43.896 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:50:43 np0005603435 kernel: tap55d16559-90: entered promiscuous mode
Jan 30 23:50:43 np0005603435 NetworkManager[49097]: <info>  [1769835043.8980] manager: (tap55d16559-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Jan 30 23:50:43 np0005603435 nova_compute[239938]: 2026-01-31 04:50:43.902 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.903 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55d16559-90, col_values=(('external_ids', {'iface-id': 'e2b210b0-d66c-49f0-beb5-0ac736a943c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 30 23:50:43 np0005603435 nova_compute[239938]: 2026-01-31 04:50:43.904 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:50:43 np0005603435 ovn_controller[145670]: 2026-01-31T04:50:43Z|00101|binding|INFO|Releasing lport e2b210b0-d66c-49f0-beb5-0ac736a943c4 from this chassis (sb_readonly=0)
Jan 30 23:50:43 np0005603435 nova_compute[239938]: 2026-01-31 04:50:43.918 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.920 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/55d16559-9723-4f0a-a23e-90d04ca1bb05.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/55d16559-9723-4f0a-a23e-90d04ca1bb05.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.921 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[372ce4fa-0635-4686-a0f5-6d9d3bd64f2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.922 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-55d16559-9723-4f0a-a23e-90d04ca1bb05
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/55d16559-9723-4f0a-a23e-90d04ca1bb05.pid.haproxy
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 55d16559-9723-4f0a-a23e-90d04ca1bb05
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:50:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:43.922 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'env', 'PROCESS_TAG=haproxy-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/55d16559-9723-4f0a-a23e-90d04ca1bb05.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.070 239942 DEBUG nova.compute.manager [req-b07d61d1-af52-4d2b-8346-33b9ef8a6560 req-a947e121-7b28-47f7-b2d8-46b3bc8076f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received event network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.071 239942 DEBUG oslo_concurrency.lockutils [req-b07d61d1-af52-4d2b-8346-33b9ef8a6560 req-a947e121-7b28-47f7-b2d8-46b3bc8076f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.071 239942 DEBUG oslo_concurrency.lockutils [req-b07d61d1-af52-4d2b-8346-33b9ef8a6560 req-a947e121-7b28-47f7-b2d8-46b3bc8076f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.071 239942 DEBUG oslo_concurrency.lockutils [req-b07d61d1-af52-4d2b-8346-33b9ef8a6560 req-a947e121-7b28-47f7-b2d8-46b3bc8076f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.071 239942 DEBUG nova.compute.manager [req-b07d61d1-af52-4d2b-8346-33b9ef8a6560 req-a947e121-7b28-47f7-b2d8-46b3bc8076f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Processing event network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.242 239942 DEBUG nova.compute.manager [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.245 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835044.2436874, d99b6e7d-0d41-4261-8dc8-687109c9a0fa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.246 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] VM Started (Lifecycle Event)#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.249 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.254 239942 INFO nova.virt.libvirt.driver [-] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Instance spawned successfully.#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.255 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.271 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.279 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.284 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.285 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.286 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.287 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.288 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.289 239942 DEBUG nova.virt.libvirt.driver [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.301 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.302 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835044.2438285, d99b6e7d-0d41-4261-8dc8-687109c9a0fa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.302 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.322 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.327 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835044.2469878, d99b6e7d-0d41-4261-8dc8-687109c9a0fa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.327 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.345 239942 INFO nova.compute.manager [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Took 7.54 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.345 239942 DEBUG nova.compute.manager [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.353 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.356 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:50:44 np0005603435 podman[257250]: 2026-01-31 04:50:44.275492689 +0000 UTC m=+0.029298566 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.381 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.426 239942 INFO nova.compute.manager [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Took 8.86 seconds to build instance.#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.451 239942 DEBUG oslo_concurrency.lockutils [None req-743824e5-9693-4461-8763-5982685ed09b bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:44 np0005603435 nova_compute[239938]: 2026-01-31 04:50:44.458 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Jan 30 23:50:45 np0005603435 podman[257250]: 2026-01-31 04:50:45.095697101 +0000 UTC m=+0.849502918 container create 3b8f9563e8bd9129b7843073ccf6a5313fa6755cb45525d046b18e5e432d857a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 30 23:50:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 134 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 642 KiB/s rd, 2.7 MiB/s wr, 168 op/s
Jan 30 23:50:45 np0005603435 systemd[1]: Started libpod-conmon-3b8f9563e8bd9129b7843073ccf6a5313fa6755cb45525d046b18e5e432d857a.scope.
Jan 30 23:50:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Jan 30 23:50:45 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:50:45 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Jan 30 23:50:45 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1074f65d3f52a38b156a3f7633b5bdcb82ed492f07d9ebe71c731c6826f9f27d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:50:45 np0005603435 podman[257250]: 2026-01-31 04:50:45.582465041 +0000 UTC m=+1.336270838 container init 3b8f9563e8bd9129b7843073ccf6a5313fa6755cb45525d046b18e5e432d857a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 30 23:50:45 np0005603435 podman[257250]: 2026-01-31 04:50:45.590769268 +0000 UTC m=+1.344575045 container start 3b8f9563e8bd9129b7843073ccf6a5313fa6755cb45525d046b18e5e432d857a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:50:45 np0005603435 neutron-haproxy-ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05[257265]: [NOTICE]   (257270) : New worker (257272) forked
Jan 30 23:50:45 np0005603435 neutron-haproxy-ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05[257265]: [NOTICE]   (257270) : Loading success.
Jan 30 23:50:46 np0005603435 nova_compute[239938]: 2026-01-31 04:50:46.168 239942 DEBUG nova.compute.manager [req-32e1deb0-12ff-4d0e-89b1-b42aeb0a9aaf req-c1204a3a-f6f7-42b7-9215-b16aaffdbd22 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received event network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:50:46 np0005603435 nova_compute[239938]: 2026-01-31 04:50:46.170 239942 DEBUG oslo_concurrency.lockutils [req-32e1deb0-12ff-4d0e-89b1-b42aeb0a9aaf req-c1204a3a-f6f7-42b7-9215-b16aaffdbd22 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:50:46 np0005603435 nova_compute[239938]: 2026-01-31 04:50:46.170 239942 DEBUG oslo_concurrency.lockutils [req-32e1deb0-12ff-4d0e-89b1-b42aeb0a9aaf req-c1204a3a-f6f7-42b7-9215-b16aaffdbd22 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:50:46 np0005603435 nova_compute[239938]: 2026-01-31 04:50:46.171 239942 DEBUG oslo_concurrency.lockutils [req-32e1deb0-12ff-4d0e-89b1-b42aeb0a9aaf req-c1204a3a-f6f7-42b7-9215-b16aaffdbd22 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:46 np0005603435 nova_compute[239938]: 2026-01-31 04:50:46.172 239942 DEBUG nova.compute.manager [req-32e1deb0-12ff-4d0e-89b1-b42aeb0a9aaf req-c1204a3a-f6f7-42b7-9215-b16aaffdbd22 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] No waiting events found dispatching network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:50:46 np0005603435 nova_compute[239938]: 2026-01-31 04:50:46.172 239942 WARNING nova.compute.manager [req-32e1deb0-12ff-4d0e-89b1-b42aeb0a9aaf req-c1204a3a-f6f7-42b7-9215-b16aaffdbd22 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received unexpected event network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 for instance with vm_state active and task_state None.#033[00m
Jan 30 23:50:46 np0005603435 nova_compute[239938]: 2026-01-31 04:50:46.662 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:50:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Jan 30 23:50:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Jan 30 23:50:47 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Jan 30 23:50:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 134 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.5 MiB/s wr, 300 op/s
Jan 30 23:50:48 np0005603435 NetworkManager[49097]: <info>  [1769835048.1351] manager: (patch-provnet-60fd0649-1231-4daa-859b-756d523d6d78-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Jan 30 23:50:48 np0005603435 NetworkManager[49097]: <info>  [1769835048.1359] manager: (patch-br-int-to-provnet-60fd0649-1231-4daa-859b-756d523d6d78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Jan 30 23:50:48 np0005603435 nova_compute[239938]: 2026-01-31 04:50:48.134 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:48 np0005603435 nova_compute[239938]: 2026-01-31 04:50:48.188 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:48 np0005603435 ovn_controller[145670]: 2026-01-31T04:50:48Z|00102|binding|INFO|Releasing lport e2b210b0-d66c-49f0-beb5-0ac736a943c4 from this chassis (sb_readonly=0)
Jan 30 23:50:48 np0005603435 nova_compute[239938]: 2026-01-31 04:50:48.205 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:48 np0005603435 nova_compute[239938]: 2026-01-31 04:50:48.741 239942 DEBUG nova.compute.manager [req-efac831a-840f-4377-8beb-01509e66e245 req-a035c9f6-c735-4a35-be15-dba8b4eec338 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received event network-changed-23c441d0-6579-44b1-a27f-a3856db44b73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:50:48 np0005603435 nova_compute[239938]: 2026-01-31 04:50:48.742 239942 DEBUG nova.compute.manager [req-efac831a-840f-4377-8beb-01509e66e245 req-a035c9f6-c735-4a35-be15-dba8b4eec338 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Refreshing instance network info cache due to event network-changed-23c441d0-6579-44b1-a27f-a3856db44b73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:50:48 np0005603435 nova_compute[239938]: 2026-01-31 04:50:48.742 239942 DEBUG oslo_concurrency.lockutils [req-efac831a-840f-4377-8beb-01509e66e245 req-a035c9f6-c735-4a35-be15-dba8b4eec338 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:50:48 np0005603435 nova_compute[239938]: 2026-01-31 04:50:48.743 239942 DEBUG oslo_concurrency.lockutils [req-efac831a-840f-4377-8beb-01509e66e245 req-a035c9f6-c735-4a35-be15-dba8b4eec338 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:50:48 np0005603435 nova_compute[239938]: 2026-01-31 04:50:48.743 239942 DEBUG nova.network.neutron [req-efac831a-840f-4377-8beb-01509e66e245 req-a035c9f6-c735-4a35-be15-dba8b4eec338 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Refreshing network info cache for port 23c441d0-6579-44b1-a27f-a3856db44b73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:50:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 134 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.3 MiB/s wr, 211 op/s
Jan 30 23:50:49 np0005603435 nova_compute[239938]: 2026-01-31 04:50:49.460 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:49 np0005603435 nova_compute[239938]: 2026-01-31 04:50:49.834 239942 DEBUG nova.network.neutron [req-efac831a-840f-4377-8beb-01509e66e245 req-a035c9f6-c735-4a35-be15-dba8b4eec338 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Updated VIF entry in instance network info cache for port 23c441d0-6579-44b1-a27f-a3856db44b73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:50:49 np0005603435 nova_compute[239938]: 2026-01-31 04:50:49.834 239942 DEBUG nova.network.neutron [req-efac831a-840f-4377-8beb-01509e66e245 req-a035c9f6-c735-4a35-be15-dba8b4eec338 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Updating instance_info_cache with network_info: [{"id": "23c441d0-6579-44b1-a27f-a3856db44b73", "address": "fa:16:3e:33:65:c1", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23c441d0-65", "ovs_interfaceid": "23c441d0-6579-44b1-a27f-a3856db44b73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:50:49 np0005603435 nova_compute[239938]: 2026-01-31 04:50:49.859 239942 DEBUG oslo_concurrency.lockutils [req-efac831a-840f-4377-8beb-01509e66e245 req-a035c9f6-c735-4a35-be15-dba8b4eec338 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:50:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Jan 30 23:50:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Jan 30 23:50:50 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Jan 30 23:50:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 134 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 26 KiB/s wr, 159 op/s
Jan 30 23:50:51 np0005603435 nova_compute[239938]: 2026-01-31 04:50:51.665 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:50:53 np0005603435 podman[257282]: 2026-01-31 04:50:53.093179976 +0000 UTC m=+0.057401843 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 30 23:50:53 np0005603435 podman[257283]: 2026-01-31 04:50:53.169641711 +0000 UTC m=+0.130666762 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:50:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 134 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 21 KiB/s wr, 158 op/s
Jan 30 23:50:54 np0005603435 nova_compute[239938]: 2026-01-31 04:50:54.494 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Jan 30 23:50:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Jan 30 23:50:55 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Jan 30 23:50:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 145 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 971 KiB/s wr, 81 op/s
Jan 30 23:50:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:55.916 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:50:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:55.917 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:50:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:50:55.917 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:50:56 np0005603435 ovn_controller[145670]: 2026-01-31T04:50:56Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:65:c1 10.100.0.5
Jan 30 23:50:56 np0005603435 ovn_controller[145670]: 2026-01-31T04:50:56Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:65:c1 10.100.0.5
Jan 30 23:50:56 np0005603435 nova_compute[239938]: 2026-01-31 04:50:56.668 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:50:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Jan 30 23:50:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 206 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.7 MiB/s wr, 204 op/s
Jan 30 23:50:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Jan 30 23:50:57 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Jan 30 23:50:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Jan 30 23:50:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Jan 30 23:50:59 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Jan 30 23:50:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 206 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 552 KiB/s rd, 7.6 MiB/s wr, 201 op/s
Jan 30 23:50:59 np0005603435 nova_compute[239938]: 2026-01-31 04:50:59.525 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:50:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:50:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1393672684' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:50:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:50:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1393672684' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Jan 30 23:51:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Jan 30 23:51:00 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Jan 30 23:51:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:51:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2248104014' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:51:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:51:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2248104014' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 203 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 482 KiB/s rd, 6.5 MiB/s wr, 200 op/s
Jan 30 23:51:01 np0005603435 nova_compute[239938]: 2026-01-31 04:51:01.670 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Jan 30 23:51:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Jan 30 23:51:02 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Jan 30 23:51:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:51:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Jan 30 23:51:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Jan 30 23:51:02 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Jan 30 23:51:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 167 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 261 KiB/s wr, 176 op/s
Jan 30 23:51:03 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:03.564 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:51:03 np0005603435 nova_compute[239938]: 2026-01-31 04:51:03.564 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:03 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:03.566 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:51:04 np0005603435 nova_compute[239938]: 2026-01-31 04:51:04.528 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.120 239942 DEBUG oslo_concurrency.lockutils [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.120 239942 DEBUG oslo_concurrency.lockutils [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.161 239942 DEBUG nova.objects.instance [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lazy-loading 'flavor' on Instance uuid d99b6e7d-0d41-4261-8dc8-687109c9a0fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.208 239942 DEBUG oslo_concurrency.lockutils [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:51:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 167 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 240 KiB/s rd, 210 KiB/s wr, 126 op/s
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.502 239942 DEBUG oslo_concurrency.lockutils [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.503 239942 DEBUG oslo_concurrency.lockutils [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.504 239942 INFO nova.compute.manager [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Attaching volume 9191a76a-3f56-43d4-8eca-64e27ffc00c7 to /dev/vdb#033[00m
Jan 30 23:51:05 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:05.568 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.659 239942 DEBUG os_brick.utils [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.661 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.674 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.674 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[1693e6b5-8774-427b-a28a-f7c77b569175]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.676 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.684 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.684 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[d56592a8-f447-43ad-8e90-8731745fa9e7]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.686 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.696 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.696 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[019827c4-74af-4bd8-b82f-ff6a2ba84a84]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.698 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[fba50163-2d5d-46fc-b1c7-ce66cf45cadc]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.699 239942 DEBUG oslo_concurrency.processutils [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.718 239942 DEBUG oslo_concurrency.processutils [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.721 239942 DEBUG os_brick.initiator.connectors.lightos [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.721 239942 DEBUG os_brick.initiator.connectors.lightos [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.722 239942 DEBUG os_brick.initiator.connectors.lightos [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.722 239942 DEBUG os_brick.utils [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:51:05 np0005603435 nova_compute[239938]: 2026-01-31 04:51:05.723 239942 DEBUG nova.virt.block_device [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Updating existing volume attachment record: 62c0e810-8fe4-46d1-9093-172acd3723cf _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:51:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:51:06
Jan 30 23:51:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:51:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:51:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['images', 'vms', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.log']
Jan 30 23:51:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:51:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:51:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4167694291' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:51:06 np0005603435 nova_compute[239938]: 2026-01-31 04:51:06.614 239942 DEBUG nova.objects.instance [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lazy-loading 'flavor' on Instance uuid d99b6e7d-0d41-4261-8dc8-687109c9a0fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:51:06 np0005603435 nova_compute[239938]: 2026-01-31 04:51:06.642 239942 DEBUG nova.virt.libvirt.driver [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Attempting to attach volume 9191a76a-3f56-43d4-8eca-64e27ffc00c7 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 30 23:51:06 np0005603435 nova_compute[239938]: 2026-01-31 04:51:06.645 239942 DEBUG nova.virt.libvirt.guest [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] attach device xml: <disk type="network" device="disk">
Jan 30 23:51:06 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:51:06 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-9191a76a-3f56-43d4-8eca-64e27ffc00c7">
Jan 30 23:51:06 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:51:06 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:51:06 np0005603435 nova_compute[239938]:  <auth username="openstack">
Jan 30 23:51:06 np0005603435 nova_compute[239938]:    <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:51:06 np0005603435 nova_compute[239938]:  </auth>
Jan 30 23:51:06 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:51:06 np0005603435 nova_compute[239938]:  <serial>9191a76a-3f56-43d4-8eca-64e27ffc00c7</serial>
Jan 30 23:51:06 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:51:06 np0005603435 nova_compute[239938]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 30 23:51:06 np0005603435 nova_compute[239938]: 2026-01-31 04:51:06.672 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Jan 30 23:51:06 np0005603435 nova_compute[239938]: 2026-01-31 04:51:06.939 239942 DEBUG nova.virt.libvirt.driver [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:51:06 np0005603435 nova_compute[239938]: 2026-01-31 04:51:06.940 239942 DEBUG nova.virt.libvirt.driver [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:51:06 np0005603435 nova_compute[239938]: 2026-01-31 04:51:06.940 239942 DEBUG nova.virt.libvirt.driver [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:51:06 np0005603435 nova_compute[239938]: 2026-01-31 04:51:06.941 239942 DEBUG nova.virt.libvirt.driver [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] No VIF found with MAC fa:16:3e:33:65:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:51:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:51:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:51:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:51:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:51:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:51:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:51:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Jan 30 23:51:07 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Jan 30 23:51:07 np0005603435 nova_compute[239938]: 2026-01-31 04:51:07.207 239942 DEBUG oslo_concurrency.lockutils [None req-8a38836f-bbcf-4c9d-8978-46c86589ae4a bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:51:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:51:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:51:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:51:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:51:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:51:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 167 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 39 KiB/s wr, 128 op/s
Jan 30 23:51:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:51:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:51:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:51:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:51:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:51:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:51:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Jan 30 23:51:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Jan 30 23:51:07 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Jan 30 23:51:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 167 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 23 KiB/s wr, 35 op/s
Jan 30 23:51:09 np0005603435 nova_compute[239938]: 2026-01-31 04:51:09.529 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:51:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3248493859' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:51:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:51:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3248493859' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:10 np0005603435 nova_compute[239938]: 2026-01-31 04:51:10.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:51:10 np0005603435 nova_compute[239938]: 2026-01-31 04:51:10.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 30 23:51:10 np0005603435 nova_compute[239938]: 2026-01-31 04:51:10.892 239942 DEBUG oslo_concurrency.lockutils [None req-5e82047e-8d7d-443a-8a2d-af546fd1c6f8 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:51:10 np0005603435 nova_compute[239938]: 2026-01-31 04:51:10.893 239942 DEBUG oslo_concurrency.lockutils [None req-5e82047e-8d7d-443a-8a2d-af546fd1c6f8 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:51:10 np0005603435 nova_compute[239938]: 2026-01-31 04:51:10.937 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 30 23:51:11 np0005603435 nova_compute[239938]: 2026-01-31 04:51:11.053 239942 INFO nova.compute.manager [None req-5e82047e-8d7d-443a-8a2d-af546fd1c6f8 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Detaching volume 9191a76a-3f56-43d4-8eca-64e27ffc00c7#033[00m
Jan 30 23:51:11 np0005603435 nova_compute[239938]: 2026-01-31 04:51:11.226 239942 INFO nova.virt.block_device [None req-5e82047e-8d7d-443a-8a2d-af546fd1c6f8 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Attempting to driver detach volume 9191a76a-3f56-43d4-8eca-64e27ffc00c7 from mountpoint /dev/vdb#033[00m
Jan 30 23:51:11 np0005603435 nova_compute[239938]: 2026-01-31 04:51:11.237 239942 DEBUG nova.virt.libvirt.driver [None req-5e82047e-8d7d-443a-8a2d-af546fd1c6f8 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Attempting to detach device vdb from instance d99b6e7d-0d41-4261-8dc8-687109c9a0fa from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 30 23:51:11 np0005603435 nova_compute[239938]: 2026-01-31 04:51:11.237 239942 DEBUG nova.virt.libvirt.guest [None req-5e82047e-8d7d-443a-8a2d-af546fd1c6f8 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:51:11 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:51:11 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-9191a76a-3f56-43d4-8eca-64e27ffc00c7">
Jan 30 23:51:11 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:51:11 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:51:11 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:51:11 np0005603435 nova_compute[239938]:  <serial>9191a76a-3f56-43d4-8eca-64e27ffc00c7</serial>
Jan 30 23:51:11 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:51:11 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:51:11 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:51:11 np0005603435 nova_compute[239938]: 2026-01-31 04:51:11.244 239942 INFO nova.virt.libvirt.driver [None req-5e82047e-8d7d-443a-8a2d-af546fd1c6f8 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Successfully detached device vdb from instance d99b6e7d-0d41-4261-8dc8-687109c9a0fa from the persistent domain config.#033[00m
Jan 30 23:51:11 np0005603435 nova_compute[239938]: 2026-01-31 04:51:11.244 239942 DEBUG nova.virt.libvirt.driver [None req-5e82047e-8d7d-443a-8a2d-af546fd1c6f8 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance d99b6e7d-0d41-4261-8dc8-687109c9a0fa from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 30 23:51:11 np0005603435 nova_compute[239938]: 2026-01-31 04:51:11.245 239942 DEBUG nova.virt.libvirt.guest [None req-5e82047e-8d7d-443a-8a2d-af546fd1c6f8 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:51:11 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:51:11 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-9191a76a-3f56-43d4-8eca-64e27ffc00c7">
Jan 30 23:51:11 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:51:11 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:51:11 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:51:11 np0005603435 nova_compute[239938]:  <serial>9191a76a-3f56-43d4-8eca-64e27ffc00c7</serial>
Jan 30 23:51:11 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:51:11 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:51:11 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:51:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 167 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 50 KiB/s wr, 36 op/s
Jan 30 23:51:11 np0005603435 nova_compute[239938]: 2026-01-31 04:51:11.345 239942 DEBUG nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Received event <DeviceRemovedEvent: 1769835071.3450518, d99b6e7d-0d41-4261-8dc8-687109c9a0fa => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 30 23:51:11 np0005603435 nova_compute[239938]: 2026-01-31 04:51:11.346 239942 DEBUG nova.virt.libvirt.driver [None req-5e82047e-8d7d-443a-8a2d-af546fd1c6f8 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance d99b6e7d-0d41-4261-8dc8-687109c9a0fa _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 30 23:51:11 np0005603435 nova_compute[239938]: 2026-01-31 04:51:11.349 239942 INFO nova.virt.libvirt.driver [None req-5e82047e-8d7d-443a-8a2d-af546fd1c6f8 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Successfully detached device vdb from instance d99b6e7d-0d41-4261-8dc8-687109c9a0fa from the live domain config.#033[00m
Jan 30 23:51:11 np0005603435 nova_compute[239938]: 2026-01-31 04:51:11.674 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:11 np0005603435 nova_compute[239938]: 2026-01-31 04:51:11.691 239942 DEBUG nova.objects.instance [None req-5e82047e-8d7d-443a-8a2d-af546fd1c6f8 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lazy-loading 'flavor' on Instance uuid d99b6e7d-0d41-4261-8dc8-687109c9a0fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:51:12 np0005603435 nova_compute[239938]: 2026-01-31 04:51:12.089 239942 DEBUG oslo_concurrency.lockutils [None req-5e82047e-8d7d-443a-8a2d-af546fd1c6f8 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:51:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:51:12 np0005603435 nova_compute[239938]: 2026-01-31 04:51:12.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:51:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 169 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 291 KiB/s wr, 64 op/s
Jan 30 23:51:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Jan 30 23:51:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Jan 30 23:51:13 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Jan 30 23:51:13 np0005603435 nova_compute[239938]: 2026-01-31 04:51:13.903 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:51:14 np0005603435 nova_compute[239938]: 2026-01-31 04:51:14.476 239942 DEBUG nova.compute.manager [None req-ad086362-dae2-46cc-91eb-bd15521e245e bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:51:14 np0005603435 nova_compute[239938]: 2026-01-31 04:51:14.517 239942 INFO nova.compute.manager [None req-ad086362-dae2-46cc-91eb-bd15521e245e bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] instance snapshotting#033[00m
Jan 30 23:51:14 np0005603435 nova_compute[239938]: 2026-01-31 04:51:14.535 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:14 np0005603435 nova_compute[239938]: 2026-01-31 04:51:14.724 239942 INFO nova.virt.libvirt.driver [None req-ad086362-dae2-46cc-91eb-bd15521e245e bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Beginning live snapshot process#033[00m
Jan 30 23:51:14 np0005603435 nova_compute[239938]: 2026-01-31 04:51:14.871 239942 DEBUG nova.virt.libvirt.imagebackend [None req-ad086362-dae2-46cc-91eb-bd15521e245e bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] No parent info for bf004ad8-fb70-4caa-9170-9f02e22d687d; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 30 23:51:14 np0005603435 nova_compute[239938]: 2026-01-31 04:51:14.882 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:51:14 np0005603435 nova_compute[239938]: 2026-01-31 04:51:14.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:51:15 np0005603435 nova_compute[239938]: 2026-01-31 04:51:15.072 239942 DEBUG nova.storage.rbd_utils [None req-ad086362-dae2-46cc-91eb-bd15521e245e bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] creating snapshot(c044673435194a9580a48a26c66f8fbe) on rbd image(d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 30 23:51:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 169 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 292 KiB/s wr, 38 op/s
Jan 30 23:51:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Jan 30 23:51:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Jan 30 23:51:15 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Jan 30 23:51:15 np0005603435 nova_compute[239938]: 2026-01-31 04:51:15.849 239942 DEBUG nova.storage.rbd_utils [None req-ad086362-dae2-46cc-91eb-bd15521e245e bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] cloning vms/d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk@c044673435194a9580a48a26c66f8fbe to images/cab354b9-f2c8-46a6-95e1-70b4ce5bf9ef clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 30 23:51:15 np0005603435 nova_compute[239938]: 2026-01-31 04:51:15.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:51:15 np0005603435 nova_compute[239938]: 2026-01-31 04:51:15.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:51:15 np0005603435 nova_compute[239938]: 2026-01-31 04:51:15.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:51:16 np0005603435 nova_compute[239938]: 2026-01-31 04:51:16.163 239942 DEBUG nova.storage.rbd_utils [None req-ad086362-dae2-46cc-91eb-bd15521e245e bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] flattening images/cab354b9-f2c8-46a6-95e1-70b4ce5bf9ef flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 30 23:51:16 np0005603435 nova_compute[239938]: 2026-01-31 04:51:16.666 239942 DEBUG nova.storage.rbd_utils [None req-ad086362-dae2-46cc-91eb-bd15521e245e bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] removing snapshot(c044673435194a9580a48a26c66f8fbe) on rbd image(d99b6e7d-0d41-4261-8dc8-687109c9a0fa_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 30 23:51:16 np0005603435 nova_compute[239938]: 2026-01-31 04:51:16.717 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Jan 30 23:51:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Jan 30 23:51:16 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Jan 30 23:51:16 np0005603435 nova_compute[239938]: 2026-01-31 04:51:16.824 239942 DEBUG nova.storage.rbd_utils [None req-ad086362-dae2-46cc-91eb-bd15521e245e bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] creating snapshot(snap) on rbd image(cab354b9-f2c8-46a6-95e1-70b4ce5bf9ef) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 30 23:51:16 np0005603435 nova_compute[239938]: 2026-01-31 04:51:16.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:51:16 np0005603435 nova_compute[239938]: 2026-01-31 04:51:16.889 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:51:16 np0005603435 nova_compute[239938]: 2026-01-31 04:51:16.889 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:51:16 np0005603435 nova_compute[239938]: 2026-01-31 04:51:16.909 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:51:16 np0005603435 nova_compute[239938]: 2026-01-31 04:51:16.909 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquired lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:51:16 np0005603435 nova_compute[239938]: 2026-01-31 04:51:16.909 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 30 23:51:16 np0005603435 nova_compute[239938]: 2026-01-31 04:51:16.910 239942 DEBUG nova.objects.instance [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d99b6e7d-0d41-4261-8dc8-687109c9a0fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:51:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:51:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1715051814' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:51:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:51:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1715051814' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007647736786538489 of space, bias 1.0, pg target 0.22943210359615468 quantized to 32 (current 32)
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003847265923528565 of space, bias 1.0, pg target 0.11541797770585695 quantized to 32 (current 32)
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 7.302063846755805e-07 of space, bias 1.0, pg target 0.00021906191540267418 quantized to 32 (current 32)
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0007880212366872058 of space, bias 1.0, pg target 0.23640637100616174 quantized to 32 (current 32)
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.151625868537785e-07 of space, bias 4.0, pg target 0.0008581951042245341 quantized to 16 (current 16)
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:51:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 194 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.3 MiB/s wr, 148 op/s
Jan 30 23:51:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:51:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Jan 30 23:51:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Jan 30 23:51:17 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Jan 30 23:51:18 np0005603435 ovn_controller[145670]: 2026-01-31T04:51:18Z|00103|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Jan 30 23:51:18 np0005603435 nova_compute[239938]: 2026-01-31 04:51:18.141 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Updating instance_info_cache with network_info: [{"id": "23c441d0-6579-44b1-a27f-a3856db44b73", "address": "fa:16:3e:33:65:c1", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23c441d0-65", "ovs_interfaceid": "23c441d0-6579-44b1-a27f-a3856db44b73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:51:18 np0005603435 nova_compute[239938]: 2026-01-31 04:51:18.163 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Releasing lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:51:18 np0005603435 nova_compute[239938]: 2026-01-31 04:51:18.163 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 30 23:51:18 np0005603435 nova_compute[239938]: 2026-01-31 04:51:18.164 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:51:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:51:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3266630418' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:51:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:51:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3266630418' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:18 np0005603435 nova_compute[239938]: 2026-01-31 04:51:18.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:51:18 np0005603435 nova_compute[239938]: 2026-01-31 04:51:18.908 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:51:18 np0005603435 nova_compute[239938]: 2026-01-31 04:51:18.908 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:51:18 np0005603435 nova_compute[239938]: 2026-01-31 04:51:18.909 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:51:18 np0005603435 nova_compute[239938]: 2026-01-31 04:51:18.909 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:51:18 np0005603435 nova_compute[239938]: 2026-01-31 04:51:18.909 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:51:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:51:19 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2813868909' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:51:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:51:19 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2813868909' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 194 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.1 MiB/s wr, 114 op/s
Jan 30 23:51:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:51:19 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2004728696' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:51:19 np0005603435 nova_compute[239938]: 2026-01-31 04:51:19.482 239942 INFO nova.virt.libvirt.driver [None req-ad086362-dae2-46cc-91eb-bd15521e245e bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Snapshot image upload complete#033[00m
Jan 30 23:51:19 np0005603435 nova_compute[239938]: 2026-01-31 04:51:19.483 239942 INFO nova.compute.manager [None req-ad086362-dae2-46cc-91eb-bd15521e245e bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Took 4.96 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 30 23:51:19 np0005603435 nova_compute[239938]: 2026-01-31 04:51:19.494 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:51:19 np0005603435 nova_compute[239938]: 2026-01-31 04:51:19.535 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:19 np0005603435 nova_compute[239938]: 2026-01-31 04:51:19.573 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:51:19 np0005603435 nova_compute[239938]: 2026-01-31 04:51:19.574 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:51:19 np0005603435 nova_compute[239938]: 2026-01-31 04:51:19.738 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:51:19 np0005603435 nova_compute[239938]: 2026-01-31 04:51:19.739 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4311MB free_disk=59.94240085966885GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:51:19 np0005603435 nova_compute[239938]: 2026-01-31 04:51:19.739 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:51:19 np0005603435 nova_compute[239938]: 2026-01-31 04:51:19.739 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:51:19 np0005603435 nova_compute[239938]: 2026-01-31 04:51:19.848 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance d99b6e7d-0d41-4261-8dc8-687109c9a0fa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:51:19 np0005603435 nova_compute[239938]: 2026-01-31 04:51:19.848 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:51:19 np0005603435 nova_compute[239938]: 2026-01-31 04:51:19.849 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:51:19 np0005603435 nova_compute[239938]: 2026-01-31 04:51:19.895 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:51:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:51:20 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508914366' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:51:20 np0005603435 nova_compute[239938]: 2026-01-31 04:51:20.437 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:51:20 np0005603435 nova_compute[239938]: 2026-01-31 04:51:20.445 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:51:20 np0005603435 nova_compute[239938]: 2026-01-31 04:51:20.461 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:51:20 np0005603435 nova_compute[239938]: 2026-01-31 04:51:20.491 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:51:20 np0005603435 nova_compute[239938]: 2026-01-31 04:51:20.491 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:51:20 np0005603435 nova_compute[239938]: 2026-01-31 04:51:20.492 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:51:20 np0005603435 nova_compute[239938]: 2026-01-31 04:51:20.493 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 30 23:51:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 235 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 7.4 MiB/s rd, 6.4 MiB/s wr, 176 op/s
Jan 30 23:51:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 30 23:51:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 30 23:51:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:51:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:51:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:51:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:51:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:51:21 np0005603435 nova_compute[239938]: 2026-01-31 04:51:21.719 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:51:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:51:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:51:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:51:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:51:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:51:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:51:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Jan 30 23:51:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Jan 30 23:51:22 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 30 23:51:22 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:51:22 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:51:22 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:51:22 np0005603435 podman[257687]: 2026-01-31 04:51:22.166904116 +0000 UTC m=+0.097853778 container create 96129ee102994dc0ac914aa05fb1b6660ed780f1e2a633a32ff5980d4380b592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_kare, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:51:22 np0005603435 podman[257687]: 2026-01-31 04:51:22.09756754 +0000 UTC m=+0.028517222 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:51:22 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Jan 30 23:51:22 np0005603435 systemd[1]: Started libpod-conmon-96129ee102994dc0ac914aa05fb1b6660ed780f1e2a633a32ff5980d4380b592.scope.
Jan 30 23:51:22 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:51:22 np0005603435 podman[257687]: 2026-01-31 04:51:22.304503251 +0000 UTC m=+0.235452993 container init 96129ee102994dc0ac914aa05fb1b6660ed780f1e2a633a32ff5980d4380b592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_kare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 30 23:51:22 np0005603435 podman[257687]: 2026-01-31 04:51:22.310581361 +0000 UTC m=+0.241531033 container start 96129ee102994dc0ac914aa05fb1b6660ed780f1e2a633a32ff5980d4380b592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:51:22 np0005603435 podman[257687]: 2026-01-31 04:51:22.314540338 +0000 UTC m=+0.245490050 container attach 96129ee102994dc0ac914aa05fb1b6660ed780f1e2a633a32ff5980d4380b592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:51:22 np0005603435 happy_kare[257704]: 167 167
Jan 30 23:51:22 np0005603435 systemd[1]: libpod-96129ee102994dc0ac914aa05fb1b6660ed780f1e2a633a32ff5980d4380b592.scope: Deactivated successfully.
Jan 30 23:51:22 np0005603435 podman[257687]: 2026-01-31 04:51:22.317743327 +0000 UTC m=+0.248693009 container died 96129ee102994dc0ac914aa05fb1b6660ed780f1e2a633a32ff5980d4380b592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_kare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:51:22 np0005603435 systemd[1]: var-lib-containers-storage-overlay-4f34673906f131a80d80b33dbf41076d307277e21ddf7d8a2aff37402eef027c-merged.mount: Deactivated successfully.
Jan 30 23:51:22 np0005603435 podman[257687]: 2026-01-31 04:51:22.371674624 +0000 UTC m=+0.302624306 container remove 96129ee102994dc0ac914aa05fb1b6660ed780f1e2a633a32ff5980d4380b592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_kare, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 30 23:51:22 np0005603435 systemd[1]: libpod-conmon-96129ee102994dc0ac914aa05fb1b6660ed780f1e2a633a32ff5980d4380b592.scope: Deactivated successfully.
Jan 30 23:51:22 np0005603435 podman[257726]: 2026-01-31 04:51:22.502879322 +0000 UTC m=+0.037184886 container create 57070318a03ec0233ca40586be7005a7cb307c6f9ac8618b4ed7e81095579a40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lederberg, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:51:22 np0005603435 systemd[1]: Started libpod-conmon-57070318a03ec0233ca40586be7005a7cb307c6f9ac8618b4ed7e81095579a40.scope.
Jan 30 23:51:22 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:51:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fde53c1f07a35590a560b521804ada2640e5369163e2922c963b85a975750/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:51:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fde53c1f07a35590a560b521804ada2640e5369163e2922c963b85a975750/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:51:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fde53c1f07a35590a560b521804ada2640e5369163e2922c963b85a975750/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:51:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fde53c1f07a35590a560b521804ada2640e5369163e2922c963b85a975750/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:51:22 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fde53c1f07a35590a560b521804ada2640e5369163e2922c963b85a975750/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:51:22 np0005603435 nova_compute[239938]: 2026-01-31 04:51:22.561 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "175b46aa-ae57-41db-b77d-c8cdb978701f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:51:22 np0005603435 nova_compute[239938]: 2026-01-31 04:51:22.563 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:51:22 np0005603435 podman[257726]: 2026-01-31 04:51:22.485067124 +0000 UTC m=+0.019372708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:51:22 np0005603435 podman[257726]: 2026-01-31 04:51:22.580993684 +0000 UTC m=+0.115299268 container init 57070318a03ec0233ca40586be7005a7cb307c6f9ac8618b4ed7e81095579a40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lederberg, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 30 23:51:22 np0005603435 nova_compute[239938]: 2026-01-31 04:51:22.581 239942 DEBUG nova.compute.manager [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:51:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:51:22 np0005603435 podman[257726]: 2026-01-31 04:51:22.594458595 +0000 UTC m=+0.128764139 container start 57070318a03ec0233ca40586be7005a7cb307c6f9ac8618b4ed7e81095579a40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lederberg, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 30 23:51:22 np0005603435 podman[257726]: 2026-01-31 04:51:22.598647788 +0000 UTC m=+0.132953392 container attach 57070318a03ec0233ca40586be7005a7cb307c6f9ac8618b4ed7e81095579a40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:51:22 np0005603435 nova_compute[239938]: 2026-01-31 04:51:22.642 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:51:22 np0005603435 nova_compute[239938]: 2026-01-31 04:51:22.643 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:51:22 np0005603435 nova_compute[239938]: 2026-01-31 04:51:22.650 239942 DEBUG nova.virt.hardware [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:51:22 np0005603435 nova_compute[239938]: 2026-01-31 04:51:22.651 239942 INFO nova.compute.claims [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:51:22 np0005603435 nova_compute[239938]: 2026-01-31 04:51:22.752 239942 DEBUG oslo_concurrency.processutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:51:23 np0005603435 quirky_lederberg[257743]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:51:23 np0005603435 quirky_lederberg[257743]: --> All data devices are unavailable
Jan 30 23:51:23 np0005603435 systemd[1]: libpod-57070318a03ec0233ca40586be7005a7cb307c6f9ac8618b4ed7e81095579a40.scope: Deactivated successfully.
Jan 30 23:51:23 np0005603435 podman[257726]: 2026-01-31 04:51:23.051211421 +0000 UTC m=+0.585516965 container died 57070318a03ec0233ca40586be7005a7cb307c6f9ac8618b4ed7e81095579a40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lederberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 30 23:51:23 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0c0fde53c1f07a35590a560b521804ada2640e5369163e2922c963b85a975750-merged.mount: Deactivated successfully.
Jan 30 23:51:23 np0005603435 podman[257726]: 2026-01-31 04:51:23.091457991 +0000 UTC m=+0.625763535 container remove 57070318a03ec0233ca40586be7005a7cb307c6f9ac8618b4ed7e81095579a40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 30 23:51:23 np0005603435 systemd[1]: libpod-conmon-57070318a03ec0233ca40586be7005a7cb307c6f9ac8618b4ed7e81095579a40.scope: Deactivated successfully.
Jan 30 23:51:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Jan 30 23:51:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Jan 30 23:51:23 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Jan 30 23:51:23 np0005603435 podman[257795]: 2026-01-31 04:51:23.241876712 +0000 UTC m=+0.085390742 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:51:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:51:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023387404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.285 239942 DEBUG oslo_concurrency.processutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:51:23 np0005603435 podman[257838]: 2026-01-31 04:51:23.292863176 +0000 UTC m=+0.067725697 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.292 239942 DEBUG nova.compute.provider_tree [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.322 239942 DEBUG nova.scheduler.client.report [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:51:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.8 MiB/s wr, 221 op/s
Jan 30 23:51:23 np0005603435 podman[257904]: 2026-01-31 04:51:23.5478622 +0000 UTC m=+0.048148546 container create bd97203eb2ea979ccc1ebf5c1efb7741179c19bcc0097876dbc9acf3ce96c497 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_tesla, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.560 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.561 239942 DEBUG nova.compute.manager [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:51:23 np0005603435 systemd[1]: Started libpod-conmon-bd97203eb2ea979ccc1ebf5c1efb7741179c19bcc0097876dbc9acf3ce96c497.scope.
Jan 30 23:51:23 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:51:23 np0005603435 podman[257904]: 2026-01-31 04:51:23.604475743 +0000 UTC m=+0.104762059 container init bd97203eb2ea979ccc1ebf5c1efb7741179c19bcc0097876dbc9acf3ce96c497 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_tesla, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:51:23 np0005603435 podman[257904]: 2026-01-31 04:51:23.611060244 +0000 UTC m=+0.111346580 container start bd97203eb2ea979ccc1ebf5c1efb7741179c19bcc0097876dbc9acf3ce96c497 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_tesla, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 30 23:51:23 np0005603435 podman[257904]: 2026-01-31 04:51:23.614804937 +0000 UTC m=+0.115091253 container attach bd97203eb2ea979ccc1ebf5c1efb7741179c19bcc0097876dbc9acf3ce96c497 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_tesla, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:51:23 np0005603435 vigorous_tesla[257920]: 167 167
Jan 30 23:51:23 np0005603435 systemd[1]: libpod-bd97203eb2ea979ccc1ebf5c1efb7741179c19bcc0097876dbc9acf3ce96c497.scope: Deactivated successfully.
Jan 30 23:51:23 np0005603435 podman[257904]: 2026-01-31 04:51:23.616840627 +0000 UTC m=+0.117126943 container died bd97203eb2ea979ccc1ebf5c1efb7741179c19bcc0097876dbc9acf3ce96c497 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 30 23:51:23 np0005603435 podman[257904]: 2026-01-31 04:51:23.528899763 +0000 UTC m=+0.029186089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:51:23 np0005603435 systemd[1]: var-lib-containers-storage-overlay-9e644f683f83d21cefc61025f196045decd58a98f5cb05d2ebfdfd0c815041e2-merged.mount: Deactivated successfully.
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.655 239942 DEBUG nova.compute.manager [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.656 239942 DEBUG nova.network.neutron [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:51:23 np0005603435 podman[257904]: 2026-01-31 04:51:23.667650097 +0000 UTC m=+0.167936433 container remove bd97203eb2ea979ccc1ebf5c1efb7741179c19bcc0097876dbc9acf3ce96c497 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_tesla, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.679 239942 INFO nova.virt.libvirt.driver [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:51:23 np0005603435 systemd[1]: libpod-conmon-bd97203eb2ea979ccc1ebf5c1efb7741179c19bcc0097876dbc9acf3ce96c497.scope: Deactivated successfully.
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.702 239942 DEBUG nova.compute.manager [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.816 239942 DEBUG nova.compute.manager [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.819 239942 DEBUG nova.virt.libvirt.driver [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.820 239942 INFO nova.virt.libvirt.driver [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Creating image(s)#033[00m
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.846 239942 DEBUG nova.storage.rbd_utils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] rbd image 175b46aa-ae57-41db-b77d-c8cdb978701f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:51:23 np0005603435 podman[257942]: 2026-01-31 04:51:23.854001291 +0000 UTC m=+0.062554660 container create 7245d7e1fc20614d856afc955b2ded2fd057df24456c82875f8224f5e25c5a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jepsen, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.877 239942 DEBUG nova.storage.rbd_utils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] rbd image 175b46aa-ae57-41db-b77d-c8cdb978701f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:51:23 np0005603435 systemd[1]: Started libpod-conmon-7245d7e1fc20614d856afc955b2ded2fd057df24456c82875f8224f5e25c5a24.scope.
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.911 239942 DEBUG nova.storage.rbd_utils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] rbd image 175b46aa-ae57-41db-b77d-c8cdb978701f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:51:23 np0005603435 podman[257942]: 2026-01-31 04:51:23.826359031 +0000 UTC m=+0.034912440 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.918 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "781bd6915ca6751a99242662a4a6a298c3738a9f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.920 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "781bd6915ca6751a99242662a4a6a298c3738a9f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:51:23 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:51:23 np0005603435 nova_compute[239938]: 2026-01-31 04:51:23.927 239942 DEBUG nova.policy [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bb6c7d8ff99f43cb94670fd4096d652a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f926501f874644cf9ffda466c84e710b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:51:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5047f4ed5e20589eaebf02a25fe9387da2883d9117cb55027357a928d490e353/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:51:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5047f4ed5e20589eaebf02a25fe9387da2883d9117cb55027357a928d490e353/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:51:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5047f4ed5e20589eaebf02a25fe9387da2883d9117cb55027357a928d490e353/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:51:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5047f4ed5e20589eaebf02a25fe9387da2883d9117cb55027357a928d490e353/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:51:23 np0005603435 podman[257942]: 2026-01-31 04:51:23.960184314 +0000 UTC m=+0.168737693 container init 7245d7e1fc20614d856afc955b2ded2fd057df24456c82875f8224f5e25c5a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jepsen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:51:23 np0005603435 podman[257942]: 2026-01-31 04:51:23.9669496 +0000 UTC m=+0.175502939 container start 7245d7e1fc20614d856afc955b2ded2fd057df24456c82875f8224f5e25c5a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 30 23:51:23 np0005603435 podman[257942]: 2026-01-31 04:51:23.970771144 +0000 UTC m=+0.179324503 container attach 7245d7e1fc20614d856afc955b2ded2fd057df24456c82875f8224f5e25c5a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:51:24 np0005603435 nova_compute[239938]: 2026-01-31 04:51:24.149 239942 DEBUG nova.virt.libvirt.imagebackend [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Image locations are: [{'url': 'rbd://95d2f419-0dd0-56f2-a094-353f8c7597ed/images/cab354b9-f2c8-46a6-95e1-70b4ce5bf9ef/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://95d2f419-0dd0-56f2-a094-353f8c7597ed/images/cab354b9-f2c8-46a6-95e1-70b4ce5bf9ef/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 30 23:51:24 np0005603435 nova_compute[239938]: 2026-01-31 04:51:24.211 239942 DEBUG nova.virt.libvirt.imagebackend [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Selected location: {'url': 'rbd://95d2f419-0dd0-56f2-a094-353f8c7597ed/images/cab354b9-f2c8-46a6-95e1-70b4ce5bf9ef/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 30 23:51:24 np0005603435 nova_compute[239938]: 2026-01-31 04:51:24.212 239942 DEBUG nova.storage.rbd_utils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] cloning images/cab354b9-f2c8-46a6-95e1-70b4ce5bf9ef@snap to None/175b46aa-ae57-41db-b77d-c8cdb978701f_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]: {
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:    "0": [
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:        {
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "devices": [
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "/dev/loop3"
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            ],
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "lv_name": "ceph_lv0",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "lv_size": "21470642176",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "name": "ceph_lv0",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "tags": {
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.cluster_name": "ceph",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.crush_device_class": "",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.encrypted": "0",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.objectstore": "bluestore",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.osd_id": "0",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.type": "block",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.vdo": "0",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.with_tpm": "0"
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            },
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "type": "block",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "vg_name": "ceph_vg0"
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:        }
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:    ],
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:    "1": [
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:        {
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "devices": [
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "/dev/loop4"
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            ],
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "lv_name": "ceph_lv1",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "lv_size": "21470642176",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "name": "ceph_lv1",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "tags": {
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.cluster_name": "ceph",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.crush_device_class": "",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.encrypted": "0",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.objectstore": "bluestore",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.osd_id": "1",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.type": "block",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.vdo": "0",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.with_tpm": "0"
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            },
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "type": "block",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "vg_name": "ceph_vg1"
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:        }
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:    ],
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:    "2": [
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:        {
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "devices": [
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "/dev/loop5"
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            ],
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "lv_name": "ceph_lv2",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "lv_size": "21470642176",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "name": "ceph_lv2",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "tags": {
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.cluster_name": "ceph",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.crush_device_class": "",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.encrypted": "0",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.objectstore": "bluestore",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.osd_id": "2",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.type": "block",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.vdo": "0",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:                "ceph.with_tpm": "0"
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            },
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "type": "block",
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:            "vg_name": "ceph_vg2"
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:        }
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]:    ]
Jan 30 23:51:24 np0005603435 musing_jepsen[258009]: }
Jan 30 23:51:24 np0005603435 systemd[1]: libpod-7245d7e1fc20614d856afc955b2ded2fd057df24456c82875f8224f5e25c5a24.scope: Deactivated successfully.
Jan 30 23:51:24 np0005603435 podman[257942]: 2026-01-31 04:51:24.264778937 +0000 UTC m=+0.473332316 container died 7245d7e1fc20614d856afc955b2ded2fd057df24456c82875f8224f5e25c5a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 30 23:51:24 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5047f4ed5e20589eaebf02a25fe9387da2883d9117cb55027357a928d490e353-merged.mount: Deactivated successfully.
Jan 30 23:51:24 np0005603435 nova_compute[239938]: 2026-01-31 04:51:24.306 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "781bd6915ca6751a99242662a4a6a298c3738a9f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.386s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:51:24 np0005603435 podman[257942]: 2026-01-31 04:51:24.316418548 +0000 UTC m=+0.524971917 container remove 7245d7e1fc20614d856afc955b2ded2fd057df24456c82875f8224f5e25c5a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 30 23:51:24 np0005603435 systemd[1]: libpod-conmon-7245d7e1fc20614d856afc955b2ded2fd057df24456c82875f8224f5e25c5a24.scope: Deactivated successfully.
Jan 30 23:51:24 np0005603435 nova_compute[239938]: 2026-01-31 04:51:24.449 239942 DEBUG nova.objects.instance [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lazy-loading 'migration_context' on Instance uuid 175b46aa-ae57-41db-b77d-c8cdb978701f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:51:24 np0005603435 nova_compute[239938]: 2026-01-31 04:51:24.484 239942 DEBUG nova.virt.libvirt.driver [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 30 23:51:24 np0005603435 nova_compute[239938]: 2026-01-31 04:51:24.485 239942 DEBUG nova.virt.libvirt.driver [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Ensure instance console log exists: /var/lib/nova/instances/175b46aa-ae57-41db-b77d-c8cdb978701f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:51:24 np0005603435 nova_compute[239938]: 2026-01-31 04:51:24.486 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:51:24 np0005603435 nova_compute[239938]: 2026-01-31 04:51:24.486 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:51:24 np0005603435 nova_compute[239938]: 2026-01-31 04:51:24.486 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:51:24 np0005603435 nova_compute[239938]: 2026-01-31 04:51:24.503 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:51:24 np0005603435 nova_compute[239938]: 2026-01-31 04:51:24.537 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:24 np0005603435 nova_compute[239938]: 2026-01-31 04:51:24.650 239942 DEBUG nova.network.neutron [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Successfully created port: c2537bd0-5e4f-4c22-95b4-751b80b76a81 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:51:24 np0005603435 podman[258217]: 2026-01-31 04:51:24.782133635 +0000 UTC m=+0.032345606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:51:24 np0005603435 podman[258217]: 2026-01-31 04:51:24.941513457 +0000 UTC m=+0.191725428 container create 523075e41fabcf609c8ae9aa6a529b5921280d957788bba1d79fabdc968012cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:51:24 np0005603435 systemd[1]: Started libpod-conmon-523075e41fabcf609c8ae9aa6a529b5921280d957788bba1d79fabdc968012cb.scope.
Jan 30 23:51:25 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:51:25 np0005603435 podman[258217]: 2026-01-31 04:51:25.113450566 +0000 UTC m=+0.363662537 container init 523075e41fabcf609c8ae9aa6a529b5921280d957788bba1d79fabdc968012cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_greider, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:51:25 np0005603435 podman[258217]: 2026-01-31 04:51:25.120651674 +0000 UTC m=+0.370863615 container start 523075e41fabcf609c8ae9aa6a529b5921280d957788bba1d79fabdc968012cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_greider, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 30 23:51:25 np0005603435 intelligent_greider[258233]: 167 167
Jan 30 23:51:25 np0005603435 systemd[1]: libpod-523075e41fabcf609c8ae9aa6a529b5921280d957788bba1d79fabdc968012cb.scope: Deactivated successfully.
Jan 30 23:51:25 np0005603435 podman[258217]: 2026-01-31 04:51:25.169643579 +0000 UTC m=+0.419855540 container attach 523075e41fabcf609c8ae9aa6a529b5921280d957788bba1d79fabdc968012cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 30 23:51:25 np0005603435 podman[258217]: 2026-01-31 04:51:25.170314375 +0000 UTC m=+0.420526346 container died 523075e41fabcf609c8ae9aa6a529b5921280d957788bba1d79fabdc968012cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 30 23:51:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Jan 30 23:51:25 np0005603435 nova_compute[239938]: 2026-01-31 04:51:25.259 239942 DEBUG nova.network.neutron [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Successfully updated port: c2537bd0-5e4f-4c22-95b4-751b80b76a81 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:51:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Jan 30 23:51:25 np0005603435 nova_compute[239938]: 2026-01-31 04:51:25.276 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "refresh_cache-175b46aa-ae57-41db-b77d-c8cdb978701f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:51:25 np0005603435 nova_compute[239938]: 2026-01-31 04:51:25.276 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquired lock "refresh_cache-175b46aa-ae57-41db-b77d-c8cdb978701f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:51:25 np0005603435 nova_compute[239938]: 2026-01-31 04:51:25.277 239942 DEBUG nova.network.neutron [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:51:25 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Jan 30 23:51:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.8 MiB/s wr, 268 op/s
Jan 30 23:51:25 np0005603435 systemd[1]: var-lib-containers-storage-overlay-80528eea20737bf5ecbefe82c63f362814779588679ddd7f95ce048213fd9d4f-merged.mount: Deactivated successfully.
Jan 30 23:51:25 np0005603435 podman[258217]: 2026-01-31 04:51:25.365669132 +0000 UTC m=+0.615881093 container remove 523075e41fabcf609c8ae9aa6a529b5921280d957788bba1d79fabdc968012cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_greider, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:51:25 np0005603435 systemd[1]: libpod-conmon-523075e41fabcf609c8ae9aa6a529b5921280d957788bba1d79fabdc968012cb.scope: Deactivated successfully.
Jan 30 23:51:25 np0005603435 nova_compute[239938]: 2026-01-31 04:51:25.385 239942 DEBUG nova.compute.manager [req-d703bdf8-a2be-4dec-b1ad-8dea71a9a7be req-ec01f98f-d3e7-4afc-a9e5-9c494620af7a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Received event network-changed-c2537bd0-5e4f-4c22-95b4-751b80b76a81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:51:25 np0005603435 nova_compute[239938]: 2026-01-31 04:51:25.387 239942 DEBUG nova.compute.manager [req-d703bdf8-a2be-4dec-b1ad-8dea71a9a7be req-ec01f98f-d3e7-4afc-a9e5-9c494620af7a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Refreshing instance network info cache due to event network-changed-c2537bd0-5e4f-4c22-95b4-751b80b76a81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:51:25 np0005603435 nova_compute[239938]: 2026-01-31 04:51:25.387 239942 DEBUG oslo_concurrency.lockutils [req-d703bdf8-a2be-4dec-b1ad-8dea71a9a7be req-ec01f98f-d3e7-4afc-a9e5-9c494620af7a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-175b46aa-ae57-41db-b77d-c8cdb978701f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:51:25 np0005603435 nova_compute[239938]: 2026-01-31 04:51:25.451 239942 DEBUG nova.network.neutron [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:51:25 np0005603435 podman[258259]: 2026-01-31 04:51:25.549079044 +0000 UTC m=+0.036599481 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:51:25 np0005603435 podman[258259]: 2026-01-31 04:51:25.707583974 +0000 UTC m=+0.195104341 container create ae90a80e3ac5f475a3ab3502748aefba6fb90dae32d3914d4362eead7109ed42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_burnell, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:51:25 np0005603435 systemd[1]: Started libpod-conmon-ae90a80e3ac5f475a3ab3502748aefba6fb90dae32d3914d4362eead7109ed42.scope.
Jan 30 23:51:25 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:51:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fc6c290b3d8f61d0c8737fbf732b548f69a5ebc7aa29bcd2ba4fcacd198c76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:51:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fc6c290b3d8f61d0c8737fbf732b548f69a5ebc7aa29bcd2ba4fcacd198c76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:51:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fc6c290b3d8f61d0c8737fbf732b548f69a5ebc7aa29bcd2ba4fcacd198c76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:51:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fc6c290b3d8f61d0c8737fbf732b548f69a5ebc7aa29bcd2ba4fcacd198c76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:51:25 np0005603435 podman[258259]: 2026-01-31 04:51:25.880797485 +0000 UTC m=+0.368317952 container init ae90a80e3ac5f475a3ab3502748aefba6fb90dae32d3914d4362eead7109ed42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_burnell, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:51:25 np0005603435 podman[258259]: 2026-01-31 04:51:25.886931236 +0000 UTC m=+0.374451633 container start ae90a80e3ac5f475a3ab3502748aefba6fb90dae32d3914d4362eead7109ed42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_burnell, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:51:26 np0005603435 podman[258259]: 2026-01-31 04:51:26.077965696 +0000 UTC m=+0.565486103 container attach ae90a80e3ac5f475a3ab3502748aefba6fb90dae32d3914d4362eead7109ed42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_burnell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.166 239942 DEBUG nova.network.neutron [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Updating instance_info_cache with network_info: [{"id": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "address": "fa:16:3e:b5:99:7d", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2537bd0-5e", "ovs_interfaceid": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.188 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Releasing lock "refresh_cache-175b46aa-ae57-41db-b77d-c8cdb978701f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.188 239942 DEBUG nova.compute.manager [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Instance network_info: |[{"id": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "address": "fa:16:3e:b5:99:7d", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2537bd0-5e", "ovs_interfaceid": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.188 239942 DEBUG oslo_concurrency.lockutils [req-d703bdf8-a2be-4dec-b1ad-8dea71a9a7be req-ec01f98f-d3e7-4afc-a9e5-9c494620af7a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-175b46aa-ae57-41db-b77d-c8cdb978701f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.188 239942 DEBUG nova.network.neutron [req-d703bdf8-a2be-4dec-b1ad-8dea71a9a7be req-ec01f98f-d3e7-4afc-a9e5-9c494620af7a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Refreshing network info cache for port c2537bd0-5e4f-4c22-95b4-751b80b76a81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.190 239942 DEBUG nova.virt.libvirt.driver [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Start _get_guest_xml network_info=[{"id": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "address": "fa:16:3e:b5:99:7d", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2537bd0-5e", "ovs_interfaceid": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T04:51:14Z,direct_url=<?>,disk_format='raw',id=cab354b9-f2c8-46a6-95e1-70b4ce5bf9ef,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1432737515',owner='f926501f874644cf9ffda466c84e710b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T04:51:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'guest_format': None, 'image_id': 'cab354b9-f2c8-46a6-95e1-70b4ce5bf9ef'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.195 239942 WARNING nova.virt.libvirt.driver [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.205 239942 DEBUG nova.virt.libvirt.host [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.206 239942 DEBUG nova.virt.libvirt.host [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.209 239942 DEBUG nova.virt.libvirt.host [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.209 239942 DEBUG nova.virt.libvirt.host [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.210 239942 DEBUG nova.virt.libvirt.driver [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.210 239942 DEBUG nova.virt.hardware [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T04:51:14Z,direct_url=<?>,disk_format='raw',id=cab354b9-f2c8-46a6-95e1-70b4ce5bf9ef,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1432737515',owner='f926501f874644cf9ffda466c84e710b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T04:51:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.210 239942 DEBUG nova.virt.hardware [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.210 239942 DEBUG nova.virt.hardware [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.211 239942 DEBUG nova.virt.hardware [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.211 239942 DEBUG nova.virt.hardware [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.211 239942 DEBUG nova.virt.hardware [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.211 239942 DEBUG nova.virt.hardware [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.211 239942 DEBUG nova.virt.hardware [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.211 239942 DEBUG nova.virt.hardware [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.211 239942 DEBUG nova.virt.hardware [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.212 239942 DEBUG nova.virt.hardware [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.214 239942 DEBUG oslo_concurrency.processutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:51:26 np0005603435 lvm[258376]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:51:26 np0005603435 lvm[258377]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:51:26 np0005603435 lvm[258376]: VG ceph_vg2 finished
Jan 30 23:51:26 np0005603435 lvm[258377]: VG ceph_vg1 finished
Jan 30 23:51:26 np0005603435 lvm[258375]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:51:26 np0005603435 lvm[258375]: VG ceph_vg0 finished
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:51:26.571202) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835086571269, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1060, "num_deletes": 256, "total_data_size": 1318108, "memory_usage": 1343840, "flush_reason": "Manual Compaction"}
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835086587477, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1301476, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25366, "largest_seqno": 26425, "table_properties": {"data_size": 1296020, "index_size": 2852, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12322, "raw_average_key_size": 20, "raw_value_size": 1284956, "raw_average_value_size": 2166, "num_data_blocks": 127, "num_entries": 593, "num_filter_entries": 593, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769835027, "oldest_key_time": 1769835027, "file_creation_time": 1769835086, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 16299 microseconds, and 2773 cpu microseconds.
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:51:26.587522) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1301476 bytes OK
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:51:26.587543) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:51:26.601633) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:51:26.601661) EVENT_LOG_v1 {"time_micros": 1769835086601651, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:51:26.601684) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1312916, prev total WAL file size 1312916, number of live WAL files 2.
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:51:26.602360) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1270KB)], [56(10MB)]
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835086602601, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12566504, "oldest_snapshot_seqno": -1}
Jan 30 23:51:26 np0005603435 pedantic_burnell[258276]: {}
Jan 30 23:51:26 np0005603435 systemd[1]: libpod-ae90a80e3ac5f475a3ab3502748aefba6fb90dae32d3914d4362eead7109ed42.scope: Deactivated successfully.
Jan 30 23:51:26 np0005603435 podman[258259]: 2026-01-31 04:51:26.691106029 +0000 UTC m=+1.178626406 container died ae90a80e3ac5f475a3ab3502748aefba6fb90dae32d3914d4362eead7109ed42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_burnell, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:51:26 np0005603435 systemd[1]: libpod-ae90a80e3ac5f475a3ab3502748aefba6fb90dae32d3914d4362eead7109ed42.scope: Consumed 1.046s CPU time.
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.722 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2811688338' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.790 239942 DEBUG oslo_concurrency.processutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5461 keys, 10868812 bytes, temperature: kUnknown
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835086792327, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 10868812, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10825759, "index_size": 28287, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 135980, "raw_average_key_size": 24, "raw_value_size": 10720940, "raw_average_value_size": 1963, "num_data_blocks": 1165, "num_entries": 5461, "num_filter_entries": 5461, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769835086, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.821 239942 DEBUG nova.storage.rbd_utils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] rbd image 175b46aa-ae57-41db-b77d-c8cdb978701f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:51:26 np0005603435 nova_compute[239938]: 2026-01-31 04:51:26.825 239942 DEBUG oslo_concurrency.processutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:51:26.792708) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10868812 bytes
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:51:26.857176) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 66.2 rd, 57.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 10.7 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(18.0) write-amplify(8.4) OK, records in: 5986, records dropped: 525 output_compression: NoCompression
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:51:26.857219) EVENT_LOG_v1 {"time_micros": 1769835086857200, "job": 30, "event": "compaction_finished", "compaction_time_micros": 189941, "compaction_time_cpu_micros": 26231, "output_level": 6, "num_output_files": 1, "total_output_size": 10868812, "num_input_records": 5986, "num_output_records": 5461, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835086857572, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835086859008, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:51:26.602083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:51:26.859120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:51:26.859133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:51:26.859136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:51:26.859139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:51:26 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:51:26.859142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:51:26 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d3fc6c290b3d8f61d0c8737fbf732b548f69a5ebc7aa29bcd2ba4fcacd198c76-merged.mount: Deactivated successfully.
Jan 30 23:51:27 np0005603435 podman[258259]: 2026-01-31 04:51:27.216961927 +0000 UTC m=+1.704482334 container remove ae90a80e3ac5f475a3ab3502748aefba6fb90dae32d3914d4362eead7109ed42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_burnell, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:51:27 np0005603435 systemd[1]: libpod-conmon-ae90a80e3ac5f475a3ab3502748aefba6fb90dae32d3914d4362eead7109ed42.scope: Deactivated successfully.
Jan 30 23:51:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:51:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 709 KiB/s rd, 1.4 MiB/s wr, 315 op/s
Jan 30 23:51:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:51:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:51:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3924596534' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:51:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.355 239942 DEBUG oslo_concurrency.processutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.358 239942 DEBUG nova.virt.libvirt.vif [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:51:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-38530597',display_name='tempest-TestStampPattern-server-38530597',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-38530597',id=11,image_ref='cab354b9-f2c8-46a6-95e1-70b4ce5bf9ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE36VHJ+yy2SXlSY6zQF7e9BvcMYc8SPWqyVX2ZxItgDCfKt1gLcAFRAPVxsIPrChTqlOOAcxm0TrregMrTGHoD8jXmVh+9yf3UY3pMaZlSN/M9091Lc3gRO27izGQve6Q==',key_name='tempest-TestStampPattern-1698214235',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f926501f874644cf9ffda466c84e710b',ramdisk_id='',reservation_id='r-45qb2od0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='d99b6e7d-0d41-4261-8dc8-687109c9a0fa',image_min_disk='1',image_min_ram='0',image_owner_id='f926501f874644cf9ffda466c84e710b',image_owner_project_name='tempest-TestStampPattern-567815244',image_owner_user_name='tempest-TestStampPattern-567815244-project-member',image_user_id='bb6c7d8ff99f43cb94670fd4096d652a',network_allocated='True',owner_project_name='tempest-TestStampPattern-567815244',owner_user_name='tempest-TestStampPattern-567815244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:51:23Z,user_data=None,user_id='bb6c7d8ff99f43cb94670fd4096d652a',uuid=175b46aa-ae57-41db-b77d-c8cdb978701f,vcpu_mode
l=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "address": "fa:16:3e:b5:99:7d", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2537bd0-5e", "ovs_interfaceid": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.358 239942 DEBUG nova.network.os_vif_util [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Converting VIF {"id": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "address": "fa:16:3e:b5:99:7d", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2537bd0-5e", "ovs_interfaceid": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.360 239942 DEBUG nova.network.os_vif_util [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:99:7d,bridge_name='br-int',has_traffic_filtering=True,id=c2537bd0-5e4f-4c22-95b4-751b80b76a81,network=Network(55d16559-9723-4f0a-a23e-90d04ca1bb05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2537bd0-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.361 239942 DEBUG nova.objects.instance [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lazy-loading 'pci_devices' on Instance uuid 175b46aa-ae57-41db-b77d-c8cdb978701f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.376 239942 DEBUG nova.virt.libvirt.driver [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  <uuid>175b46aa-ae57-41db-b77d-c8cdb978701f</uuid>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  <name>instance-0000000b</name>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <nova:name>tempest-TestStampPattern-server-38530597</nova:name>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:51:26</nova:creationTime>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:        <nova:user uuid="bb6c7d8ff99f43cb94670fd4096d652a">tempest-TestStampPattern-567815244-project-member</nova:user>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:        <nova:project uuid="f926501f874644cf9ffda466c84e710b">tempest-TestStampPattern-567815244</nova:project>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <nova:root type="image" uuid="cab354b9-f2c8-46a6-95e1-70b4ce5bf9ef"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:        <nova:port uuid="c2537bd0-5e4f-4c22-95b4-751b80b76a81">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <entry name="serial">175b46aa-ae57-41db-b77d-c8cdb978701f</entry>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <entry name="uuid">175b46aa-ae57-41db-b77d-c8cdb978701f</entry>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/175b46aa-ae57-41db-b77d-c8cdb978701f_disk">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/175b46aa-ae57-41db-b77d-c8cdb978701f_disk.config">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:b5:99:7d"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <target dev="tapc2537bd0-5e"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/175b46aa-ae57-41db-b77d-c8cdb978701f/console.log" append="off"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <input type="keyboard" bus="usb"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:51:27 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:51:27 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:51:27 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:51:27 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.378 239942 DEBUG nova.compute.manager [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Preparing to wait for external event network-vif-plugged-c2537bd0-5e4f-4c22-95b4-751b80b76a81 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.378 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.379 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.379 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.380 239942 DEBUG nova.virt.libvirt.vif [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:51:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-38530597',display_name='tempest-TestStampPattern-server-38530597',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-38530597',id=11,image_ref='cab354b9-f2c8-46a6-95e1-70b4ce5bf9ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE36VHJ+yy2SXlSY6zQF7e9BvcMYc8SPWqyVX2ZxItgDCfKt1gLcAFRAPVxsIPrChTqlOOAcxm0TrregMrTGHoD8jXmVh+9yf3UY3pMaZlSN/M9091Lc3gRO27izGQve6Q==',key_name='tempest-TestStampPattern-1698214235',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f926501f874644cf9ffda466c84e710b',ramdisk_id='',reservation_id='r-45qb2od0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='d99b6e7d-0d41-4261-8dc8-687109c9a0fa',image_min_disk='1',image_min_ram='0',image_owner_id='f926501f874644cf9ffda466c84e710b',image_owner_project_name='tempest-TestStampPattern-567815244',image_owner_user_name='tempest-TestStampPattern-567815244-project-member',image_user_id='bb6c7d8ff99f43cb94670fd4096d652a',network_allocated='True',owner_project_name='tempest-TestStampPattern-567815244',owner_user_name='tempest-TestStampPattern-567815244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:51:23Z,user_data=None,user_id='bb6c7d8ff99f43cb94670fd4096d652a',uuid=175b46aa-ae57-41db-b77d-c8cdb978701f
,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "address": "fa:16:3e:b5:99:7d", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2537bd0-5e", "ovs_interfaceid": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.381 239942 DEBUG nova.network.os_vif_util [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Converting VIF {"id": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "address": "fa:16:3e:b5:99:7d", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2537bd0-5e", "ovs_interfaceid": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.382 239942 DEBUG nova.network.os_vif_util [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:99:7d,bridge_name='br-int',has_traffic_filtering=True,id=c2537bd0-5e4f-4c22-95b4-751b80b76a81,network=Network(55d16559-9723-4f0a-a23e-90d04ca1bb05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2537bd0-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.382 239942 DEBUG os_vif [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:99:7d,bridge_name='br-int',has_traffic_filtering=True,id=c2537bd0-5e4f-4c22-95b4-751b80b76a81,network=Network(55d16559-9723-4f0a-a23e-90d04ca1bb05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2537bd0-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.383 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.384 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.385 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.389 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.389 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2537bd0-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.390 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc2537bd0-5e, col_values=(('external_ids', {'iface-id': 'c2537bd0-5e4f-4c22-95b4-751b80b76a81', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:99:7d', 'vm-uuid': '175b46aa-ae57-41db-b77d-c8cdb978701f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.392 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:27 np0005603435 NetworkManager[49097]: <info>  [1769835087.3936] manager: (tapc2537bd0-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.400 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.402 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.403 239942 INFO os_vif [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:99:7d,bridge_name='br-int',has_traffic_filtering=True,id=c2537bd0-5e4f-4c22-95b4-751b80b76a81,network=Network(55d16559-9723-4f0a-a23e-90d04ca1bb05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2537bd0-5e')#033[00m
Jan 30 23:51:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.463 239942 DEBUG nova.virt.libvirt.driver [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.464 239942 DEBUG nova.virt.libvirt.driver [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.464 239942 DEBUG nova.virt.libvirt.driver [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] No VIF found with MAC fa:16:3e:b5:99:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.465 239942 INFO nova.virt.libvirt.driver [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Using config drive#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.490 239942 DEBUG nova.storage.rbd_utils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] rbd image 175b46aa-ae57-41db-b77d-c8cdb978701f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:51:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:51:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Jan 30 23:51:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:51:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3920946015' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:51:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:51:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3920946015' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.797 239942 DEBUG nova.network.neutron [req-d703bdf8-a2be-4dec-b1ad-8dea71a9a7be req-ec01f98f-d3e7-4afc-a9e5-9c494620af7a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Updated VIF entry in instance network info cache for port c2537bd0-5e4f-4c22-95b4-751b80b76a81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.797 239942 DEBUG nova.network.neutron [req-d703bdf8-a2be-4dec-b1ad-8dea71a9a7be req-ec01f98f-d3e7-4afc-a9e5-9c494620af7a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Updating instance_info_cache with network_info: [{"id": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "address": "fa:16:3e:b5:99:7d", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2537bd0-5e", "ovs_interfaceid": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:51:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.812 239942 DEBUG oslo_concurrency.lockutils [req-d703bdf8-a2be-4dec-b1ad-8dea71a9a7be req-ec01f98f-d3e7-4afc-a9e5-9c494620af7a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-175b46aa-ae57-41db-b77d-c8cdb978701f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.897 239942 INFO nova.virt.libvirt.driver [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Creating config drive at /var/lib/nova/instances/175b46aa-ae57-41db-b77d-c8cdb978701f/disk.config#033[00m
Jan 30 23:51:27 np0005603435 nova_compute[239938]: 2026-01-31 04:51:27.903 239942 DEBUG oslo_concurrency.processutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/175b46aa-ae57-41db-b77d-c8cdb978701f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpid201g11 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:51:27 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:51:27 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:51:28 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Jan 30 23:51:28 np0005603435 nova_compute[239938]: 2026-01-31 04:51:28.031 239942 DEBUG oslo_concurrency.processutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/175b46aa-ae57-41db-b77d-c8cdb978701f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpid201g11" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:51:28 np0005603435 nova_compute[239938]: 2026-01-31 04:51:28.155 239942 DEBUG nova.storage.rbd_utils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] rbd image 175b46aa-ae57-41db-b77d-c8cdb978701f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:51:28 np0005603435 nova_compute[239938]: 2026-01-31 04:51:28.161 239942 DEBUG oslo_concurrency.processutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/175b46aa-ae57-41db-b77d-c8cdb978701f/disk.config 175b46aa-ae57-41db-b77d-c8cdb978701f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:51:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:51:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1952617159' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:51:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:51:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1952617159' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 6.5 KiB/s wr, 160 op/s
Jan 30 23:51:29 np0005603435 nova_compute[239938]: 2026-01-31 04:51:29.430 239942 DEBUG oslo_concurrency.processutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/175b46aa-ae57-41db-b77d-c8cdb978701f/disk.config 175b46aa-ae57-41db-b77d-c8cdb978701f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:51:29 np0005603435 nova_compute[239938]: 2026-01-31 04:51:29.432 239942 INFO nova.virt.libvirt.driver [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Deleting local config drive /var/lib/nova/instances/175b46aa-ae57-41db-b77d-c8cdb978701f/disk.config because it was imported into RBD.#033[00m
Jan 30 23:51:29 np0005603435 kernel: tapc2537bd0-5e: entered promiscuous mode
Jan 30 23:51:29 np0005603435 NetworkManager[49097]: <info>  [1769835089.4899] manager: (tapc2537bd0-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Jan 30 23:51:29 np0005603435 ovn_controller[145670]: 2026-01-31T04:51:29Z|00104|binding|INFO|Claiming lport c2537bd0-5e4f-4c22-95b4-751b80b76a81 for this chassis.
Jan 30 23:51:29 np0005603435 ovn_controller[145670]: 2026-01-31T04:51:29Z|00105|binding|INFO|c2537bd0-5e4f-4c22-95b4-751b80b76a81: Claiming fa:16:3e:b5:99:7d 10.100.0.6
Jan 30 23:51:29 np0005603435 systemd-udevd[258373]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:51:29 np0005603435 nova_compute[239938]: 2026-01-31 04:51:29.489 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:29 np0005603435 NetworkManager[49097]: <info>  [1769835089.5059] device (tapc2537bd0-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:51:29 np0005603435 NetworkManager[49097]: <info>  [1769835089.5068] device (tapc2537bd0-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:51:29 np0005603435 ovn_controller[145670]: 2026-01-31T04:51:29Z|00106|binding|INFO|Setting lport c2537bd0-5e4f-4c22-95b4-751b80b76a81 ovn-installed in OVS
Jan 30 23:51:29 np0005603435 nova_compute[239938]: 2026-01-31 04:51:29.508 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:29 np0005603435 nova_compute[239938]: 2026-01-31 04:51:29.513 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:29 np0005603435 nova_compute[239938]: 2026-01-31 04:51:29.539 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:29 np0005603435 ovn_controller[145670]: 2026-01-31T04:51:29Z|00107|binding|INFO|Setting lport c2537bd0-5e4f-4c22-95b4-751b80b76a81 up in Southbound
Jan 30 23:51:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:29.574 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:99:7d 10.100.0.6'], port_security=['fa:16:3e:b5:99:7d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '175b46aa-ae57-41db-b77d-c8cdb978701f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f926501f874644cf9ffda466c84e710b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05382d3e-edd0-4646-aff2-95f9f0df0d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e385d9e-e365-4760-9c59-b6cbbb99eaf1, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=c2537bd0-5e4f-4c22-95b4-751b80b76a81) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:51:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:29.576 156017 INFO neutron.agent.ovn.metadata.agent [-] Port c2537bd0-5e4f-4c22-95b4-751b80b76a81 in datapath 55d16559-9723-4f0a-a23e-90d04ca1bb05 bound to our chassis#033[00m
Jan 30 23:51:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:29.579 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55d16559-9723-4f0a-a23e-90d04ca1bb05#033[00m
Jan 30 23:51:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:29.597 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[93dfe081-7e9b-449f-a3db-a42501cafda0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:51:29 np0005603435 systemd-machined[208030]: New machine qemu-11-instance-0000000b.
Jan 30 23:51:29 np0005603435 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Jan 30 23:51:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:29.634 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[ca256351-5c78-493a-a6de-f10ca61b9d79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:51:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:29.638 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[8a12d097-1a87-455b-b482-57268e4f5f7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:51:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:29.673 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[a1ab0323-0e51-418f-a0ff-4ed2e6175057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:51:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:29.696 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[cb2ab719-3672-4dd2-8da9-19306e414ff9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55d16559-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:06:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409814, 'reachable_time': 30203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258545, 'error': None, 'target': 'ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:51:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:29.711 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[587d8608-5ac0-49a7-acb4-a533e481932f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55d16559-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409826, 'tstamp': 409826}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258547, 'error': None, 'target': 'ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55d16559-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409828, 'tstamp': 409828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258547, 'error': None, 'target': 'ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:51:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:29.713 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55d16559-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:51:29 np0005603435 nova_compute[239938]: 2026-01-31 04:51:29.714 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:29 np0005603435 nova_compute[239938]: 2026-01-31 04:51:29.716 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:29.716 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55d16559-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:51:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:29.716 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:51:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:29.717 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55d16559-90, col_values=(('external_ids', {'iface-id': 'e2b210b0-d66c-49f0-beb5-0ac736a943c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:51:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:29.717 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:51:29 np0005603435 nova_compute[239938]: 2026-01-31 04:51:29.837 239942 DEBUG nova.compute.manager [req-09ddd3b5-4d4e-41ed-a210-0bfbbffe4c09 req-bb784a61-0218-4a9d-93fc-fb70ae9de862 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Received event network-vif-plugged-c2537bd0-5e4f-4c22-95b4-751b80b76a81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:51:29 np0005603435 nova_compute[239938]: 2026-01-31 04:51:29.838 239942 DEBUG oslo_concurrency.lockutils [req-09ddd3b5-4d4e-41ed-a210-0bfbbffe4c09 req-bb784a61-0218-4a9d-93fc-fb70ae9de862 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:51:29 np0005603435 nova_compute[239938]: 2026-01-31 04:51:29.838 239942 DEBUG oslo_concurrency.lockutils [req-09ddd3b5-4d4e-41ed-a210-0bfbbffe4c09 req-bb784a61-0218-4a9d-93fc-fb70ae9de862 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:51:29 np0005603435 nova_compute[239938]: 2026-01-31 04:51:29.839 239942 DEBUG oslo_concurrency.lockutils [req-09ddd3b5-4d4e-41ed-a210-0bfbbffe4c09 req-bb784a61-0218-4a9d-93fc-fb70ae9de862 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:51:29 np0005603435 nova_compute[239938]: 2026-01-31 04:51:29.839 239942 DEBUG nova.compute.manager [req-09ddd3b5-4d4e-41ed-a210-0bfbbffe4c09 req-bb784a61-0218-4a9d-93fc-fb70ae9de862 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Processing event network-vif-plugged-c2537bd0-5e4f-4c22-95b4-751b80b76a81 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.429 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835090.429114, 175b46aa-ae57-41db-b77d-c8cdb978701f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.430 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] VM Started (Lifecycle Event)#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.433 239942 DEBUG nova.compute.manager [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.436 239942 DEBUG nova.virt.libvirt.driver [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.439 239942 INFO nova.virt.libvirt.driver [-] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Instance spawned successfully.#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.440 239942 INFO nova.compute.manager [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Took 6.62 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.440 239942 DEBUG nova.compute.manager [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.474 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.478 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.529 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.530 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835090.4292703, 175b46aa-ae57-41db-b77d-c8cdb978701f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.530 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.544 239942 INFO nova.compute.manager [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Took 7.92 seconds to build instance.#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.575 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.579 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835090.4350672, 175b46aa-ae57-41db-b77d-c8cdb978701f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.579 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.586 239942 DEBUG oslo_concurrency.lockutils [None req-ed959480-8e0f-4eba-b4ba-9bee20734399 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.644 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:51:30 np0005603435 nova_compute[239938]: 2026-01-31 04:51:30.649 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:51:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 248 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 23 KiB/s wr, 140 op/s
Jan 30 23:51:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Jan 30 23:51:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Jan 30 23:51:31 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Jan 30 23:51:31 np0005603435 nova_compute[239938]: 2026-01-31 04:51:31.917 239942 DEBUG nova.compute.manager [req-91438a7a-1426-48b7-996a-85f474792fa4 req-128c51f5-9e4b-48f5-b181-09876a0a58aa c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Received event network-vif-plugged-c2537bd0-5e4f-4c22-95b4-751b80b76a81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:51:31 np0005603435 nova_compute[239938]: 2026-01-31 04:51:31.918 239942 DEBUG oslo_concurrency.lockutils [req-91438a7a-1426-48b7-996a-85f474792fa4 req-128c51f5-9e4b-48f5-b181-09876a0a58aa c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:51:31 np0005603435 nova_compute[239938]: 2026-01-31 04:51:31.918 239942 DEBUG oslo_concurrency.lockutils [req-91438a7a-1426-48b7-996a-85f474792fa4 req-128c51f5-9e4b-48f5-b181-09876a0a58aa c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:51:31 np0005603435 nova_compute[239938]: 2026-01-31 04:51:31.919 239942 DEBUG oslo_concurrency.lockutils [req-91438a7a-1426-48b7-996a-85f474792fa4 req-128c51f5-9e4b-48f5-b181-09876a0a58aa c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:51:31 np0005603435 nova_compute[239938]: 2026-01-31 04:51:31.919 239942 DEBUG nova.compute.manager [req-91438a7a-1426-48b7-996a-85f474792fa4 req-128c51f5-9e4b-48f5-b181-09876a0a58aa c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] No waiting events found dispatching network-vif-plugged-c2537bd0-5e4f-4c22-95b4-751b80b76a81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:51:31 np0005603435 nova_compute[239938]: 2026-01-31 04:51:31.919 239942 WARNING nova.compute.manager [req-91438a7a-1426-48b7-996a-85f474792fa4 req-128c51f5-9e4b-48f5-b181-09876a0a58aa c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Received unexpected event network-vif-plugged-c2537bd0-5e4f-4c22-95b4-751b80b76a81 for instance with vm_state active and task_state None.#033[00m
Jan 30 23:51:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:51:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/585876390' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:51:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:51:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/585876390' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:32 np0005603435 nova_compute[239938]: 2026-01-31 04:51:32.394 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:51:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Jan 30 23:51:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Jan 30 23:51:32 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Jan 30 23:51:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:51:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/596179036' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:51:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:51:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/596179036' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 248 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 27 KiB/s wr, 106 op/s
Jan 30 23:51:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:51:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/718481273' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:51:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:51:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/718481273' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Jan 30 23:51:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Jan 30 23:51:34 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Jan 30 23:51:34 np0005603435 nova_compute[239938]: 2026-01-31 04:51:34.543 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:34 np0005603435 nova_compute[239938]: 2026-01-31 04:51:34.598 239942 DEBUG nova.compute.manager [req-a3073384-aae8-410a-9f30-c53f60ae3d56 req-5e23cbe5-d657-4a83-b860-f979da5acdfa c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Received event network-changed-c2537bd0-5e4f-4c22-95b4-751b80b76a81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:51:34 np0005603435 nova_compute[239938]: 2026-01-31 04:51:34.599 239942 DEBUG nova.compute.manager [req-a3073384-aae8-410a-9f30-c53f60ae3d56 req-5e23cbe5-d657-4a83-b860-f979da5acdfa c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Refreshing instance network info cache due to event network-changed-c2537bd0-5e4f-4c22-95b4-751b80b76a81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:51:34 np0005603435 nova_compute[239938]: 2026-01-31 04:51:34.599 239942 DEBUG oslo_concurrency.lockutils [req-a3073384-aae8-410a-9f30-c53f60ae3d56 req-5e23cbe5-d657-4a83-b860-f979da5acdfa c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-175b46aa-ae57-41db-b77d-c8cdb978701f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:51:34 np0005603435 nova_compute[239938]: 2026-01-31 04:51:34.599 239942 DEBUG oslo_concurrency.lockutils [req-a3073384-aae8-410a-9f30-c53f60ae3d56 req-5e23cbe5-d657-4a83-b860-f979da5acdfa c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-175b46aa-ae57-41db-b77d-c8cdb978701f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:51:34 np0005603435 nova_compute[239938]: 2026-01-31 04:51:34.599 239942 DEBUG nova.network.neutron [req-a3073384-aae8-410a-9f30-c53f60ae3d56 req-5e23cbe5-d657-4a83-b860-f979da5acdfa c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Refreshing network info cache for port c2537bd0-5e4f-4c22-95b4-751b80b76a81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:51:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 248 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 29 KiB/s wr, 201 op/s
Jan 30 23:51:35 np0005603435 nova_compute[239938]: 2026-01-31 04:51:35.705 239942 DEBUG nova.network.neutron [req-a3073384-aae8-410a-9f30-c53f60ae3d56 req-5e23cbe5-d657-4a83-b860-f979da5acdfa c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Updated VIF entry in instance network info cache for port c2537bd0-5e4f-4c22-95b4-751b80b76a81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:51:35 np0005603435 nova_compute[239938]: 2026-01-31 04:51:35.705 239942 DEBUG nova.network.neutron [req-a3073384-aae8-410a-9f30-c53f60ae3d56 req-5e23cbe5-d657-4a83-b860-f979da5acdfa c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Updating instance_info_cache with network_info: [{"id": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "address": "fa:16:3e:b5:99:7d", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2537bd0-5e", "ovs_interfaceid": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:51:35 np0005603435 nova_compute[239938]: 2026-01-31 04:51:35.722 239942 DEBUG oslo_concurrency.lockutils [req-a3073384-aae8-410a-9f30-c53f60ae3d56 req-5e23cbe5-d657-4a83-b860-f979da5acdfa c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-175b46aa-ae57-41db-b77d-c8cdb978701f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:51:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:51:36 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2898569914' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:51:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:51:36 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2898569914' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:51:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:51:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:51:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:51:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:51:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:51:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 248 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 6.6 MiB/s rd, 7.3 KiB/s wr, 268 op/s
Jan 30 23:51:37 np0005603435 nova_compute[239938]: 2026-01-31 04:51:37.395 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:51:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Jan 30 23:51:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Jan 30 23:51:37 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Jan 30 23:51:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 248 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.0 KiB/s wr, 170 op/s
Jan 30 23:51:39 np0005603435 nova_compute[239938]: 2026-01-31 04:51:39.545 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Jan 30 23:51:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Jan 30 23:51:40 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Jan 30 23:51:41 np0005603435 ovn_controller[145670]: 2026-01-31T04:51:41Z|00016|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.6
Jan 30 23:51:41 np0005603435 ovn_controller[145670]: 2026-01-31T04:51:41Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b5:99:7d 10.100.0.6
Jan 30 23:51:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 248 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.5 KiB/s wr, 184 op/s
Jan 30 23:51:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Jan 30 23:51:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Jan 30 23:51:42 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Jan 30 23:51:42 np0005603435 nova_compute[239938]: 2026-01-31 04:51:42.397 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:51:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:51:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/663280583' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:51:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:51:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/663280583' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 282 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 126 op/s
Jan 30 23:51:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:51:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1257902616' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:51:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:51:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1257902616' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:44 np0005603435 nova_compute[239938]: 2026-01-31 04:51:44.548 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:51:45Z|00018|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.6
Jan 30 23:51:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:51:45Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b5:99:7d 10.100.0.6
Jan 30 23:51:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 309 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Jan 30 23:51:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:51:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1086682987' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:51:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:51:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1086682987' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Jan 30 23:51:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Jan 30 23:51:46 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Jan 30 23:51:46 np0005603435 ovn_controller[145670]: 2026-01-31T04:51:46Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:99:7d 10.100.0.6
Jan 30 23:51:46 np0005603435 ovn_controller[145670]: 2026-01-31T04:51:46Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:99:7d 10.100.0.6
Jan 30 23:51:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Jan 30 23:51:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Jan 30 23:51:47 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Jan 30 23:51:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 288 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.6 MiB/s wr, 271 op/s
Jan 30 23:51:47 np0005603435 nova_compute[239938]: 2026-01-31 04:51:47.400 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:51:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:51:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1453790913' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:51:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:51:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1453790913' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 288 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.2 MiB/s wr, 166 op/s
Jan 30 23:51:49 np0005603435 nova_compute[239938]: 2026-01-31 04:51:49.551 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 266 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 171 op/s
Jan 30 23:51:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Jan 30 23:51:52 np0005603435 nova_compute[239938]: 2026-01-31 04:51:52.403 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Jan 30 23:51:52 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Jan 30 23:51:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:51:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Jan 30 23:51:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Jan 30 23:51:52 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Jan 30 23:51:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 266 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 29 KiB/s wr, 74 op/s
Jan 30 23:51:54 np0005603435 podman[258593]: 2026-01-31 04:51:54.137479037 +0000 UTC m=+0.094062841 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 30 23:51:54 np0005603435 podman[258594]: 2026-01-31 04:51:54.147086122 +0000 UTC m=+0.103379589 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 30 23:51:54 np0005603435 nova_compute[239938]: 2026-01-31 04:51:54.554 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Jan 30 23:51:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Jan 30 23:51:55 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Jan 30 23:51:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 266 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 11 KiB/s wr, 85 op/s
Jan 30 23:51:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:55.917 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:51:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:55.918 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:51:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:51:55.919 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:51:57 np0005603435 nova_compute[239938]: 2026-01-31 04:51:57.299 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 266 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 14 KiB/s wr, 44 op/s
Jan 30 23:51:57 np0005603435 nova_compute[239938]: 2026-01-31 04:51:57.405 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:51:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:51:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Jan 30 23:51:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Jan 30 23:51:57 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Jan 30 23:51:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:51:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2951573568' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:51:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:51:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2951573568' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:51:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 266 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.3 KiB/s wr, 28 op/s
Jan 30 23:51:59 np0005603435 nova_compute[239938]: 2026-01-31 04:51:59.609 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 266 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 5.2 KiB/s wr, 38 op/s
Jan 30 23:52:02 np0005603435 nova_compute[239938]: 2026-01-31 04:52:02.408 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:02 np0005603435 nova_compute[239938]: 2026-01-31 04:52:02.474 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:52:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Jan 30 23:52:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Jan 30 23:52:02 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Jan 30 23:52:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 266 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 6.1 KiB/s wr, 28 op/s
Jan 30 23:52:04 np0005603435 nova_compute[239938]: 2026-01-31 04:52:04.611 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 266 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.0 KiB/s wr, 23 op/s
Jan 30 23:52:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Jan 30 23:52:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Jan 30 23:52:05 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Jan 30 23:52:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:52:06
Jan 30 23:52:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:52:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:52:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', '.mgr', 'vms']
Jan 30 23:52:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:52:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Jan 30 23:52:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Jan 30 23:52:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Jan 30 23:52:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:52:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/222762200' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:52:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:52:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:52:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:52:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:52:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:52:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.264 239942 DEBUG oslo_concurrency.lockutils [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "175b46aa-ae57-41db-b77d-c8cdb978701f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.264 239942 DEBUG oslo_concurrency.lockutils [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.278 239942 DEBUG nova.objects.instance [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lazy-loading 'flavor' on Instance uuid 175b46aa-ae57-41db-b77d-c8cdb978701f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.333 239942 DEBUG oslo_concurrency.lockutils [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 266 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 7.2 KiB/s wr, 86 op/s
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.411 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:52:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:52:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:52:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:52:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.532 239942 DEBUG oslo_concurrency.lockutils [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "175b46aa-ae57-41db-b77d-c8cdb978701f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.533 239942 DEBUG oslo_concurrency.lockutils [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.534 239942 INFO nova.compute.manager [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Attaching volume d24efa65-b716-40b5-a10c-908d6f95ba15 to /dev/vdb
Jan 30 23:52:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:52:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:52:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.781 239942 DEBUG os_brick.utils [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.783 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.796 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.796 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[8e181876-606a-4fcf-b0bc-948f66cada03]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.798 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.807 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.807 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[82d18a2d-8023-456f-913d-f752d74e1b97]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.809 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.819 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.820 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[0970660d-a064-47da-8197-2b82862ef418]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.822 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[203dd78f-b9dc-4192-9bd3-68361916cc78]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.823 239942 DEBUG oslo_concurrency.processutils [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.850 239942 DEBUG oslo_concurrency.processutils [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.853 239942 DEBUG os_brick.initiator.connectors.lightos [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.853 239942 DEBUG os_brick.initiator.connectors.lightos [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.853 239942 DEBUG os_brick.initiator.connectors.lightos [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.854 239942 DEBUG os_brick.utils [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 30 23:52:07 np0005603435 nova_compute[239938]: 2026-01-31 04:52:07.854 239942 DEBUG nova.virt.block_device [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Updating existing volume attachment record: 0e050bd7-4373-44b7-94a8-845760f960c2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 30 23:52:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:52:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:52:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:52:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:52:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1251856204' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:52:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:52:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1251856204' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:52:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:52:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2623203986' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:52:09 np0005603435 nova_compute[239938]: 2026-01-31 04:52:09.097 239942 DEBUG nova.objects.instance [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lazy-loading 'flavor' on Instance uuid 175b46aa-ae57-41db-b77d-c8cdb978701f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 30 23:52:09 np0005603435 nova_compute[239938]: 2026-01-31 04:52:09.125 239942 DEBUG nova.virt.libvirt.driver [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Attempting to attach volume d24efa65-b716-40b5-a10c-908d6f95ba15 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 30 23:52:09 np0005603435 nova_compute[239938]: 2026-01-31 04:52:09.128 239942 DEBUG nova.virt.libvirt.guest [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] attach device xml: <disk type="network" device="disk">
Jan 30 23:52:09 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:52:09 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-d24efa65-b716-40b5-a10c-908d6f95ba15">
Jan 30 23:52:09 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:52:09 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:52:09 np0005603435 nova_compute[239938]:  <auth username="openstack">
Jan 30 23:52:09 np0005603435 nova_compute[239938]:    <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:52:09 np0005603435 nova_compute[239938]:  </auth>
Jan 30 23:52:09 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:52:09 np0005603435 nova_compute[239938]:  <serial>d24efa65-b716-40b5-a10c-908d6f95ba15</serial>
Jan 30 23:52:09 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:52:09 np0005603435 nova_compute[239938]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 30 23:52:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 266 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 4.3 KiB/s wr, 70 op/s
Jan 30 23:52:09 np0005603435 nova_compute[239938]: 2026-01-31 04:52:09.414 239942 DEBUG nova.virt.libvirt.driver [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 30 23:52:09 np0005603435 nova_compute[239938]: 2026-01-31 04:52:09.415 239942 DEBUG nova.virt.libvirt.driver [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 30 23:52:09 np0005603435 nova_compute[239938]: 2026-01-31 04:52:09.415 239942 DEBUG nova.virt.libvirt.driver [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 30 23:52:09 np0005603435 nova_compute[239938]: 2026-01-31 04:52:09.416 239942 DEBUG nova.virt.libvirt.driver [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] No VIF found with MAC fa:16:3e:b5:99:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 30 23:52:09 np0005603435 nova_compute[239938]: 2026-01-31 04:52:09.618 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:52:09 np0005603435 nova_compute[239938]: 2026-01-31 04:52:09.627 239942 DEBUG oslo_concurrency.lockutils [None req-08a1ea68-7d27-40d9-b6d9-7216bb5b5229 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:52:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Jan 30 23:52:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Jan 30 23:52:10 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Jan 30 23:52:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 358 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 15 MiB/s wr, 81 op/s
Jan 30 23:52:11 np0005603435 nova_compute[239938]: 2026-01-31 04:52:11.735 239942 DEBUG oslo_concurrency.lockutils [None req-f8b1d51f-0908-4072-9832-9aff24cae2df bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "175b46aa-ae57-41db-b77d-c8cdb978701f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:52:11 np0005603435 nova_compute[239938]: 2026-01-31 04:52:11.735 239942 DEBUG oslo_concurrency.lockutils [None req-f8b1d51f-0908-4072-9832-9aff24cae2df bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:52:11 np0005603435 nova_compute[239938]: 2026-01-31 04:52:11.766 239942 INFO nova.compute.manager [None req-f8b1d51f-0908-4072-9832-9aff24cae2df bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Detaching volume d24efa65-b716-40b5-a10c-908d6f95ba15
Jan 30 23:52:11 np0005603435 nova_compute[239938]: 2026-01-31 04:52:11.886 239942 INFO nova.virt.block_device [None req-f8b1d51f-0908-4072-9832-9aff24cae2df bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Attempting to driver detach volume d24efa65-b716-40b5-a10c-908d6f95ba15 from mountpoint /dev/vdb
Jan 30 23:52:11 np0005603435 nova_compute[239938]: 2026-01-31 04:52:11.898 239942 DEBUG nova.virt.libvirt.driver [None req-f8b1d51f-0908-4072-9832-9aff24cae2df bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Attempting to detach device vdb from instance 175b46aa-ae57-41db-b77d-c8cdb978701f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 30 23:52:11 np0005603435 nova_compute[239938]: 2026-01-31 04:52:11.899 239942 DEBUG nova.virt.libvirt.guest [None req-f8b1d51f-0908-4072-9832-9aff24cae2df bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:52:11 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:52:11 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-d24efa65-b716-40b5-a10c-908d6f95ba15">
Jan 30 23:52:11 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:52:11 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:52:11 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:52:11 np0005603435 nova_compute[239938]:  <serial>d24efa65-b716-40b5-a10c-908d6f95ba15</serial>
Jan 30 23:52:11 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:52:11 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:52:11 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 30 23:52:11 np0005603435 nova_compute[239938]: 2026-01-31 04:52:11.923 239942 INFO nova.virt.libvirt.driver [None req-f8b1d51f-0908-4072-9832-9aff24cae2df bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Successfully detached device vdb from instance 175b46aa-ae57-41db-b77d-c8cdb978701f from the persistent domain config.
Jan 30 23:52:11 np0005603435 nova_compute[239938]: 2026-01-31 04:52:11.924 239942 DEBUG nova.virt.libvirt.driver [None req-f8b1d51f-0908-4072-9832-9aff24cae2df bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 175b46aa-ae57-41db-b77d-c8cdb978701f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 30 23:52:11 np0005603435 nova_compute[239938]: 2026-01-31 04:52:11.925 239942 DEBUG nova.virt.libvirt.guest [None req-f8b1d51f-0908-4072-9832-9aff24cae2df bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:52:11 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:52:11 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-d24efa65-b716-40b5-a10c-908d6f95ba15">
Jan 30 23:52:11 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:52:11 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:52:11 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:52:11 np0005603435 nova_compute[239938]:  <serial>d24efa65-b716-40b5-a10c-908d6f95ba15</serial>
Jan 30 23:52:11 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:52:11 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:52:11 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 30 23:52:12 np0005603435 nova_compute[239938]: 2026-01-31 04:52:12.046 239942 DEBUG nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Received event <DeviceRemovedEvent: 1769835132.0460985, 175b46aa-ae57-41db-b77d-c8cdb978701f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 30 23:52:12 np0005603435 nova_compute[239938]: 2026-01-31 04:52:12.047 239942 DEBUG nova.virt.libvirt.driver [None req-f8b1d51f-0908-4072-9832-9aff24cae2df bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 175b46aa-ae57-41db-b77d-c8cdb978701f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 30 23:52:12 np0005603435 nova_compute[239938]: 2026-01-31 04:52:12.050 239942 INFO nova.virt.libvirt.driver [None req-f8b1d51f-0908-4072-9832-9aff24cae2df bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Successfully detached device vdb from instance 175b46aa-ae57-41db-b77d-c8cdb978701f from the live domain config.
Jan 30 23:52:12 np0005603435 nova_compute[239938]: 2026-01-31 04:52:12.222 239942 DEBUG nova.objects.instance [None req-f8b1d51f-0908-4072-9832-9aff24cae2df bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lazy-loading 'flavor' on Instance uuid 175b46aa-ae57-41db-b77d-c8cdb978701f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 30 23:52:12 np0005603435 nova_compute[239938]: 2026-01-31 04:52:12.277 239942 DEBUG oslo_concurrency.lockutils [None req-f8b1d51f-0908-4072-9832-9aff24cae2df bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:52:12 np0005603435 nova_compute[239938]: 2026-01-31 04:52:12.414 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:52:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Jan 30 23:52:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Jan 30 23:52:12 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Jan 30 23:52:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:52:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 695 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 763 KiB/s rd, 64 MiB/s wr, 287 op/s
Jan 30 23:52:13 np0005603435 nova_compute[239938]: 2026-01-31 04:52:13.578 239942 DEBUG nova.compute.manager [req-0968072f-f477-45e6-9b80-0f97585e54b4 req-21ac47e6-6352-4bb6-aa8d-abea4abe5eee c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Received event network-changed-c2537bd0-5e4f-4c22-95b4-751b80b76a81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 30 23:52:13 np0005603435 nova_compute[239938]: 2026-01-31 04:52:13.578 239942 DEBUG nova.compute.manager [req-0968072f-f477-45e6-9b80-0f97585e54b4 req-21ac47e6-6352-4bb6-aa8d-abea4abe5eee c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Refreshing instance network info cache due to event network-changed-c2537bd0-5e4f-4c22-95b4-751b80b76a81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 30 23:52:13 np0005603435 nova_compute[239938]: 2026-01-31 04:52:13.579 239942 DEBUG oslo_concurrency.lockutils [req-0968072f-f477-45e6-9b80-0f97585e54b4 req-21ac47e6-6352-4bb6-aa8d-abea4abe5eee c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-175b46aa-ae57-41db-b77d-c8cdb978701f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 30 23:52:13 np0005603435 nova_compute[239938]: 2026-01-31 04:52:13.579 239942 DEBUG oslo_concurrency.lockutils [req-0968072f-f477-45e6-9b80-0f97585e54b4 req-21ac47e6-6352-4bb6-aa8d-abea4abe5eee c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-175b46aa-ae57-41db-b77d-c8cdb978701f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 30 23:52:13 np0005603435 nova_compute[239938]: 2026-01-31 04:52:13.579 239942 DEBUG nova.network.neutron [req-0968072f-f477-45e6-9b80-0f97585e54b4 req-21ac47e6-6352-4bb6-aa8d-abea4abe5eee c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Refreshing network info cache for port c2537bd0-5e4f-4c22-95b4-751b80b76a81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 30 23:52:13 np0005603435 nova_compute[239938]: 2026-01-31 04:52:13.640 239942 DEBUG oslo_concurrency.lockutils [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "175b46aa-ae57-41db-b77d-c8cdb978701f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:52:13 np0005603435 nova_compute[239938]: 2026-01-31 04:52:13.640 239942 DEBUG oslo_concurrency.lockutils [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:52:13 np0005603435 nova_compute[239938]: 2026-01-31 04:52:13.641 239942 DEBUG oslo_concurrency.lockutils [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:52:13 np0005603435 nova_compute[239938]: 2026-01-31 04:52:13.641 239942 DEBUG oslo_concurrency.lockutils [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:52:13 np0005603435 nova_compute[239938]: 2026-01-31 04:52:13.641 239942 DEBUG oslo_concurrency.lockutils [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:52:13 np0005603435 nova_compute[239938]: 2026-01-31 04:52:13.642 239942 INFO nova.compute.manager [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Terminating instance
Jan 30 23:52:13 np0005603435 nova_compute[239938]: 2026-01-31 04:52:13.643 239942 DEBUG nova.compute.manager [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 30 23:52:14 np0005603435 kernel: tapc2537bd0-5e (unregistering): left promiscuous mode
Jan 30 23:52:14 np0005603435 NetworkManager[49097]: <info>  [1769835134.1330] device (tapc2537bd0-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:52:14 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:14Z|00108|binding|INFO|Releasing lport c2537bd0-5e4f-4c22-95b4-751b80b76a81 from this chassis (sb_readonly=0)
Jan 30 23:52:14 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:14Z|00109|binding|INFO|Setting lport c2537bd0-5e4f-4c22-95b4-751b80b76a81 down in Southbound
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.145 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:52:14 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:14Z|00110|binding|INFO|Removing iface tapc2537bd0-5e ovn-installed in OVS
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.149 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.154 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:52:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:14.155 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:99:7d 10.100.0.6'], port_security=['fa:16:3e:b5:99:7d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '175b46aa-ae57-41db-b77d-c8cdb978701f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f926501f874644cf9ffda466c84e710b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05382d3e-edd0-4646-aff2-95f9f0df0d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e385d9e-e365-4760-9c59-b6cbbb99eaf1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=c2537bd0-5e4f-4c22-95b4-751b80b76a81) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 30 23:52:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:14.158 156017 INFO neutron.agent.ovn.metadata.agent [-] Port c2537bd0-5e4f-4c22-95b4-751b80b76a81 in datapath 55d16559-9723-4f0a-a23e-90d04ca1bb05 unbound from our chassis
Jan 30 23:52:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:14.163 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55d16559-9723-4f0a-a23e-90d04ca1bb05
Jan 30 23:52:14 np0005603435 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Jan 30 23:52:14 np0005603435 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 13.458s CPU time.
Jan 30 23:52:14 np0005603435 systemd-machined[208030]: Machine qemu-11-instance-0000000b terminated.
Jan 30 23:52:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:14.184 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[7cdd61cf-0731-4770-abac-270421fc6a31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:14.216 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[0b0358e7-56c3-4cc4-9a7e-f8836d1be5ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:14.218 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[39a94efd-f72c-457b-98e3-4ade9c521113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:14.246 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[b5506108-b3c1-41a8-bd6a-393a746d10d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:14.267 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[a725576d-c971-4289-a76c-45f02a88443d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55d16559-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:06:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409814, 'reachable_time': 30203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258684, 'error': None, 'target': 'ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.275 239942 INFO nova.virt.libvirt.driver [-] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Instance destroyed successfully.#033[00m
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.276 239942 DEBUG nova.objects.instance [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lazy-loading 'resources' on Instance uuid 175b46aa-ae57-41db-b77d-c8cdb978701f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:52:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:14.285 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[ca75ebb9-75c9-4672-837d-aea21044624f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55d16559-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409826, 'tstamp': 409826}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258693, 'error': None, 'target': 'ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55d16559-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409828, 'tstamp': 409828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258693, 'error': None, 'target': 'ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:14.289 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55d16559-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.291 239942 DEBUG nova.virt.libvirt.vif [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:51:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-38530597',display_name='tempest-TestStampPattern-server-38530597',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-38530597',id=11,image_ref='cab354b9-f2c8-46a6-95e1-70b4ce5bf9ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE36VHJ+yy2SXlSY6zQF7e9BvcMYc8SPWqyVX2ZxItgDCfKt1gLcAFRAPVxsIPrChTqlOOAcxm0TrregMrTGHoD8jXmVh+9yf3UY3pMaZlSN/M9091Lc3gRO27izGQve6Q==',key_name='tempest-TestStampPattern-1698214235',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:51:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f926501f874644cf9ffda466c84e710b',ramdisk_id='',reservation_id='r-45qb2od0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='d99b6e7d-0d41-4261-8dc8-687109c9a0fa',image_min_disk='1',image_min_ram='0',image_owner_id='f926501f874644cf9ffda466c84e710b',image_owner_project_name='tempest-TestStampPattern-567815244',image_owner_user_name='tempest-TestStampPattern-567815244-project-member',image_user_id='bb6c7d8ff99f43cb94670fd4096d652a',owner_project_name='tempest-TestStampPattern-567815244',owner_user_name='tempest-TestStampPattern-567815244-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:51:30Z,user_data=None,user_id='bb6c7d8ff99f43cb94670fd4096d652a',uuid=175b46aa-ae57-41db-b77d-c8cdb978701f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='ac
tive') vif={"id": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "address": "fa:16:3e:b5:99:7d", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2537bd0-5e", "ovs_interfaceid": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.291 239942 DEBUG nova.network.os_vif_util [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Converting VIF {"id": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "address": "fa:16:3e:b5:99:7d", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2537bd0-5e", "ovs_interfaceid": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.293 239942 DEBUG nova.network.os_vif_util [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b5:99:7d,bridge_name='br-int',has_traffic_filtering=True,id=c2537bd0-5e4f-4c22-95b4-751b80b76a81,network=Network(55d16559-9723-4f0a-a23e-90d04ca1bb05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2537bd0-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.293 239942 DEBUG os_vif [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:99:7d,bridge_name='br-int',has_traffic_filtering=True,id=c2537bd0-5e4f-4c22-95b4-751b80b76a81,network=Network(55d16559-9723-4f0a-a23e-90d04ca1bb05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2537bd0-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.296 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.297 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2537bd0-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:52:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:14.296 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55d16559-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:52:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:14.296 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:52:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:14.297 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55d16559-90, col_values=(('external_ids', {'iface-id': 'e2b210b0-d66c-49f0-beb5-0ac736a943c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:52:14 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:14.298 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.298 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.300 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.303 239942 INFO os_vif [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:99:7d,bridge_name='br-int',has_traffic_filtering=True,id=c2537bd0-5e4f-4c22-95b4-751b80b76a81,network=Network(55d16559-9723-4f0a-a23e-90d04ca1bb05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2537bd0-5e')#033[00m
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.649 239942 DEBUG nova.network.neutron [req-0968072f-f477-45e6-9b80-0f97585e54b4 req-21ac47e6-6352-4bb6-aa8d-abea4abe5eee c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Updated VIF entry in instance network info cache for port c2537bd0-5e4f-4c22-95b4-751b80b76a81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.649 239942 DEBUG nova.network.neutron [req-0968072f-f477-45e6-9b80-0f97585e54b4 req-21ac47e6-6352-4bb6-aa8d-abea4abe5eee c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Updating instance_info_cache with network_info: [{"id": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "address": "fa:16:3e:b5:99:7d", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2537bd0-5e", "ovs_interfaceid": "c2537bd0-5e4f-4c22-95b4-751b80b76a81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.666 239942 DEBUG oslo_concurrency.lockutils [req-0968072f-f477-45e6-9b80-0f97585e54b4 req-21ac47e6-6352-4bb6-aa8d-abea4abe5eee c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-175b46aa-ae57-41db-b77d-c8cdb978701f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:52:14 np0005603435 nova_compute[239938]: 2026-01-31 04:52:14.681 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.070 239942 INFO nova.virt.libvirt.driver [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Deleting instance files /var/lib/nova/instances/175b46aa-ae57-41db-b77d-c8cdb978701f_del#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.071 239942 INFO nova.virt.libvirt.driver [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Deletion of /var/lib/nova/instances/175b46aa-ae57-41db-b77d-c8cdb978701f_del complete#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.193 239942 INFO nova.compute.manager [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Took 1.55 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.194 239942 DEBUG oslo.service.loopingcall [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.195 239942 DEBUG nova.compute.manager [-] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.195 239942 DEBUG nova.network.neutron [-] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:52:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:52:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/928251055' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:52:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:52:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/928251055' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:52:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 830 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 995 KiB/s rd, 71 MiB/s wr, 244 op/s
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.684 239942 DEBUG nova.compute.manager [req-0d29b400-36ac-4bf8-93ac-7049219ff2f8 req-11b5230c-8de3-43ec-89ea-7303b25216ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Received event network-vif-unplugged-c2537bd0-5e4f-4c22-95b4-751b80b76a81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.684 239942 DEBUG oslo_concurrency.lockutils [req-0d29b400-36ac-4bf8-93ac-7049219ff2f8 req-11b5230c-8de3-43ec-89ea-7303b25216ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.685 239942 DEBUG oslo_concurrency.lockutils [req-0d29b400-36ac-4bf8-93ac-7049219ff2f8 req-11b5230c-8de3-43ec-89ea-7303b25216ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.685 239942 DEBUG oslo_concurrency.lockutils [req-0d29b400-36ac-4bf8-93ac-7049219ff2f8 req-11b5230c-8de3-43ec-89ea-7303b25216ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.685 239942 DEBUG nova.compute.manager [req-0d29b400-36ac-4bf8-93ac-7049219ff2f8 req-11b5230c-8de3-43ec-89ea-7303b25216ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] No waiting events found dispatching network-vif-unplugged-c2537bd0-5e4f-4c22-95b4-751b80b76a81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.686 239942 DEBUG nova.compute.manager [req-0d29b400-36ac-4bf8-93ac-7049219ff2f8 req-11b5230c-8de3-43ec-89ea-7303b25216ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Received event network-vif-unplugged-c2537bd0-5e4f-4c22-95b4-751b80b76a81 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.686 239942 DEBUG nova.compute.manager [req-0d29b400-36ac-4bf8-93ac-7049219ff2f8 req-11b5230c-8de3-43ec-89ea-7303b25216ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Received event network-vif-plugged-c2537bd0-5e4f-4c22-95b4-751b80b76a81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.687 239942 DEBUG oslo_concurrency.lockutils [req-0d29b400-36ac-4bf8-93ac-7049219ff2f8 req-11b5230c-8de3-43ec-89ea-7303b25216ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.687 239942 DEBUG oslo_concurrency.lockutils [req-0d29b400-36ac-4bf8-93ac-7049219ff2f8 req-11b5230c-8de3-43ec-89ea-7303b25216ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.688 239942 DEBUG oslo_concurrency.lockutils [req-0d29b400-36ac-4bf8-93ac-7049219ff2f8 req-11b5230c-8de3-43ec-89ea-7303b25216ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.689 239942 DEBUG nova.compute.manager [req-0d29b400-36ac-4bf8-93ac-7049219ff2f8 req-11b5230c-8de3-43ec-89ea-7303b25216ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] No waiting events found dispatching network-vif-plugged-c2537bd0-5e4f-4c22-95b4-751b80b76a81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.689 239942 WARNING nova.compute.manager [req-0d29b400-36ac-4bf8-93ac-7049219ff2f8 req-11b5230c-8de3-43ec-89ea-7303b25216ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Received unexpected event network-vif-plugged-c2537bd0-5e4f-4c22-95b4-751b80b76a81 for instance with vm_state active and task_state deleting.#033[00m
Jan 30 23:52:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:15.773 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:52:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:15.775 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.786 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.795 239942 DEBUG nova.network.neutron [-] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.813 239942 INFO nova.compute.manager [-] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Took 0.62 seconds to deallocate network for instance.#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.859 239942 DEBUG nova.compute.manager [req-16f8ad7b-8fa1-4b15-aab6-9c8de1d612f0 req-ec250cef-a14b-4f51-b79a-493ae18bce7b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Received event network-vif-deleted-c2537bd0-5e4f-4c22-95b4-751b80b76a81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.861 239942 DEBUG oslo_concurrency.lockutils [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.862 239942 DEBUG oslo_concurrency.lockutils [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.883 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.883 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.914 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.915 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:52:15 np0005603435 nova_compute[239938]: 2026-01-31 04:52:15.915 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:52:16 np0005603435 nova_compute[239938]: 2026-01-31 04:52:16.158 239942 DEBUG oslo_concurrency.processutils [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:52:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:52:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1922732186' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:52:16 np0005603435 nova_compute[239938]: 2026-01-31 04:52:16.766 239942 DEBUG oslo_concurrency.processutils [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:52:16 np0005603435 nova_compute[239938]: 2026-01-31 04:52:16.772 239942 DEBUG nova.compute.provider_tree [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:52:16 np0005603435 nova_compute[239938]: 2026-01-31 04:52:16.792 239942 DEBUG nova.scheduler.client.report [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:52:16 np0005603435 nova_compute[239938]: 2026-01-31 04:52:16.812 239942 DEBUG oslo_concurrency.lockutils [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:16 np0005603435 nova_compute[239938]: 2026-01-31 04:52:16.853 239942 INFO nova.scheduler.client.report [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Deleted allocations for instance 175b46aa-ae57-41db-b77d-c8cdb978701f#033[00m
Jan 30 23:52:16 np0005603435 nova_compute[239938]: 2026-01-31 04:52:16.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:52:16 np0005603435 nova_compute[239938]: 2026-01-31 04:52:16.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:52:16 np0005603435 nova_compute[239938]: 2026-01-31 04:52:16.949 239942 DEBUG oslo_concurrency.lockutils [None req-4cc78721-9a52-4cfe-a183-218822cd1d31 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "175b46aa-ae57-41db-b77d-c8cdb978701f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008531803652249238 of space, bias 1.0, pg target 0.25595410956747716 quantized to 32 (current 32)
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0004182396405778872 of space, bias 1.0, pg target 0.12547189217336616 quantized to 32 (current 32)
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.00937749372761627 of space, bias 1.0, pg target 2.813248118284881 quantized to 32 (current 32)
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014249231461832173 of space, bias 1.0, pg target 0.42462709756259875 quantized to 32 (current 32)
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.129735512574172e-07 of space, bias 4.0, pg target 0.0008498644730988413 quantized to 16 (current 16)
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011370018558312169 quantized to 32 (current 32)
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012507020414143388 quantized to 32 (current 32)
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015160024744416227 quantized to 32 (current 32)
Jan 30 23:52:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 115 MiB/s wr, 411 op/s
Jan 30 23:52:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:52:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Jan 30 23:52:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Jan 30 23:52:17 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Jan 30 23:52:18 np0005603435 nova_compute[239938]: 2026-01-31 04:52:18.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:52:18 np0005603435 nova_compute[239938]: 2026-01-31 04:52:18.889 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:52:18 np0005603435 nova_compute[239938]: 2026-01-31 04:52:18.889 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:52:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:52:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1969551704' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:52:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:52:19 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1969551704' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:52:19 np0005603435 nova_compute[239938]: 2026-01-31 04:52:19.254 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:52:19 np0005603435 nova_compute[239938]: 2026-01-31 04:52:19.255 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquired lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:52:19 np0005603435 nova_compute[239938]: 2026-01-31 04:52:19.255 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 30 23:52:19 np0005603435 nova_compute[239938]: 2026-01-31 04:52:19.256 239942 DEBUG nova.objects.instance [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d99b6e7d-0d41-4261-8dc8-687109c9a0fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:52:19 np0005603435 nova_compute[239938]: 2026-01-31 04:52:19.302 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 104 MiB/s wr, 407 op/s
Jan 30 23:52:19 np0005603435 nova_compute[239938]: 2026-01-31 04:52:19.685 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Jan 30 23:52:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Jan 30 23:52:20 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Jan 30 23:52:20 np0005603435 nova_compute[239938]: 2026-01-31 04:52:20.401 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Updating instance_info_cache with network_info: [{"id": "23c441d0-6579-44b1-a27f-a3856db44b73", "address": "fa:16:3e:33:65:c1", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23c441d0-65", "ovs_interfaceid": "23c441d0-6579-44b1-a27f-a3856db44b73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:52:20 np0005603435 nova_compute[239938]: 2026-01-31 04:52:20.416 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Releasing lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:52:20 np0005603435 nova_compute[239938]: 2026-01-31 04:52:20.416 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 30 23:52:20 np0005603435 nova_compute[239938]: 2026-01-31 04:52:20.417 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:52:20 np0005603435 nova_compute[239938]: 2026-01-31 04:52:20.418 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:52:20 np0005603435 nova_compute[239938]: 2026-01-31 04:52:20.442 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:20 np0005603435 nova_compute[239938]: 2026-01-31 04:52:20.442 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:20 np0005603435 nova_compute[239938]: 2026-01-31 04:52:20.443 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:20 np0005603435 nova_compute[239938]: 2026-01-31 04:52:20.443 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:52:20 np0005603435 nova_compute[239938]: 2026-01-31 04:52:20.444 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:52:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:52:20 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/779881096' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:52:21 np0005603435 nova_compute[239938]: 2026-01-31 04:52:21.006 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:52:21 np0005603435 nova_compute[239938]: 2026-01-31 04:52:21.089 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:52:21 np0005603435 nova_compute[239938]: 2026-01-31 04:52:21.090 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:52:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Jan 30 23:52:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Jan 30 23:52:21 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Jan 30 23:52:21 np0005603435 nova_compute[239938]: 2026-01-31 04:52:21.287 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:52:21 np0005603435 nova_compute[239938]: 2026-01-31 04:52:21.287 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4249MB free_disk=59.94237460568547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:52:21 np0005603435 nova_compute[239938]: 2026-01-31 04:52:21.288 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:21 np0005603435 nova_compute[239938]: 2026-01-31 04:52:21.288 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 2 active+clean+snaptrim, 18 active+clean+snaptrim_wait, 285 active+clean; 934 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 72 MiB/s wr, 245 op/s
Jan 30 23:52:21 np0005603435 nova_compute[239938]: 2026-01-31 04:52:21.412 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance d99b6e7d-0d41-4261-8dc8-687109c9a0fa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:52:21 np0005603435 nova_compute[239938]: 2026-01-31 04:52:21.412 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:52:21 np0005603435 nova_compute[239938]: 2026-01-31 04:52:21.412 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:52:21 np0005603435 nova_compute[239938]: 2026-01-31 04:52:21.445 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:52:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:52:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1112235058' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:52:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:52:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1112235058' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:52:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:21.777 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:52:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:52:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4260333798' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:52:21 np0005603435 nova_compute[239938]: 2026-01-31 04:52:21.951 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:52:21 np0005603435 nova_compute[239938]: 2026-01-31 04:52:21.958 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:52:21 np0005603435 nova_compute[239938]: 2026-01-31 04:52:21.975 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:52:22 np0005603435 nova_compute[239938]: 2026-01-31 04:52:22.002 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:52:22 np0005603435 nova_compute[239938]: 2026-01-31 04:52:22.002 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:52:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Jan 30 23:52:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Jan 30 23:52:22 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Jan 30 23:52:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 2 active+clean+snaptrim, 18 active+clean+snaptrim_wait, 285 active+clean; 210 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 209 KiB/s rd, 20 MiB/s wr, 336 op/s
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.304 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.472 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.669 239942 DEBUG nova.compute.manager [req-0b315cfe-b3fd-41f2-9829-26a6e7d1880a req-8499124b-dc24-485c-b377-4969291ee048 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received event network-changed-23c441d0-6579-44b1-a27f-a3856db44b73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.670 239942 DEBUG nova.compute.manager [req-0b315cfe-b3fd-41f2-9829-26a6e7d1880a req-8499124b-dc24-485c-b377-4969291ee048 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Refreshing instance network info cache due to event network-changed-23c441d0-6579-44b1-a27f-a3856db44b73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.670 239942 DEBUG oslo_concurrency.lockutils [req-0b315cfe-b3fd-41f2-9829-26a6e7d1880a req-8499124b-dc24-485c-b377-4969291ee048 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.670 239942 DEBUG oslo_concurrency.lockutils [req-0b315cfe-b3fd-41f2-9829-26a6e7d1880a req-8499124b-dc24-485c-b377-4969291ee048 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.671 239942 DEBUG nova.network.neutron [req-0b315cfe-b3fd-41f2-9829-26a6e7d1880a req-8499124b-dc24-485c-b377-4969291ee048 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Refreshing network info cache for port 23c441d0-6579-44b1-a27f-a3856db44b73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.725 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.736 239942 DEBUG oslo_concurrency.lockutils [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.737 239942 DEBUG oslo_concurrency.lockutils [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.737 239942 DEBUG oslo_concurrency.lockutils [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.738 239942 DEBUG oslo_concurrency.lockutils [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.738 239942 DEBUG oslo_concurrency.lockutils [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.740 239942 INFO nova.compute.manager [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Terminating instance#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.742 239942 DEBUG nova.compute.manager [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:52:24 np0005603435 kernel: tap23c441d0-65 (unregistering): left promiscuous mode
Jan 30 23:52:24 np0005603435 NetworkManager[49097]: <info>  [1769835144.8118] device (tap23c441d0-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:52:24 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:24Z|00111|binding|INFO|Releasing lport 23c441d0-6579-44b1-a27f-a3856db44b73 from this chassis (sb_readonly=0)
Jan 30 23:52:24 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:24Z|00112|binding|INFO|Setting lport 23c441d0-6579-44b1-a27f-a3856db44b73 down in Southbound
Jan 30 23:52:24 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:24Z|00113|binding|INFO|Removing iface tap23c441d0-65 ovn-installed in OVS
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.816 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:24 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:24.827 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:65:c1 10.100.0.5'], port_security=['fa:16:3e:33:65:c1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd99b6e7d-0d41-4261-8dc8-687109c9a0fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f926501f874644cf9ffda466c84e710b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05382d3e-edd0-4646-aff2-95f9f0df0d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e385d9e-e365-4760-9c59-b6cbbb99eaf1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=23c441d0-6579-44b1-a27f-a3856db44b73) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:52:24 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:24.829 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 23c441d0-6579-44b1-a27f-a3856db44b73 in datapath 55d16559-9723-4f0a-a23e-90d04ca1bb05 unbound from our chassis#033[00m
Jan 30 23:52:24 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:24.831 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55d16559-9723-4f0a-a23e-90d04ca1bb05, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:52:24 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:24.832 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[3b819280-45b0-4f46-8fdf-a61aa8b8e34a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:24 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:24.833 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05 namespace which is not needed anymore#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.833 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:24 np0005603435 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Jan 30 23:52:24 np0005603435 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 15.907s CPU time.
Jan 30 23:52:24 np0005603435 systemd-machined[208030]: Machine qemu-10-instance-0000000a terminated.
Jan 30 23:52:24 np0005603435 podman[258786]: 2026-01-31 04:52:24.928691423 +0000 UTC m=+0.076297244 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:52:24 np0005603435 kernel: tap23c441d0-65: entered promiscuous mode
Jan 30 23:52:24 np0005603435 NetworkManager[49097]: <info>  [1769835144.9579] manager: (tap23c441d0-65): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.959 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:24 np0005603435 kernel: tap23c441d0-65 (unregistering): left promiscuous mode
Jan 30 23:52:24 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:24Z|00114|binding|INFO|Claiming lport 23c441d0-6579-44b1-a27f-a3856db44b73 for this chassis.
Jan 30 23:52:24 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:24Z|00115|binding|INFO|23c441d0-6579-44b1-a27f-a3856db44b73: Claiming fa:16:3e:33:65:c1 10.100.0.5
Jan 30 23:52:24 np0005603435 systemd-udevd[258815]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:52:24 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:24.967 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:65:c1 10.100.0.5'], port_security=['fa:16:3e:33:65:c1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd99b6e7d-0d41-4261-8dc8-687109c9a0fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f926501f874644cf9ffda466c84e710b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05382d3e-edd0-4646-aff2-95f9f0df0d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e385d9e-e365-4760-9c59-b6cbbb99eaf1, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=23c441d0-6579-44b1-a27f-a3856db44b73) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:52:24 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:24Z|00116|binding|INFO|Setting lport 23c441d0-6579-44b1-a27f-a3856db44b73 ovn-installed in OVS
Jan 30 23:52:24 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:24Z|00117|binding|INFO|Setting lport 23c441d0-6579-44b1-a27f-a3856db44b73 up in Southbound
Jan 30 23:52:24 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:24Z|00118|binding|INFO|Releasing lport 23c441d0-6579-44b1-a27f-a3856db44b73 from this chassis (sb_readonly=1)
Jan 30 23:52:24 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:24Z|00119|if_status|INFO|Not setting lport 23c441d0-6579-44b1-a27f-a3856db44b73 down as sb is readonly
Jan 30 23:52:24 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:24Z|00120|binding|INFO|Removing iface tap23c441d0-65 ovn-installed in OVS
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.972 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:24 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:24Z|00121|binding|INFO|Releasing lport 23c441d0-6579-44b1-a27f-a3856db44b73 from this chassis (sb_readonly=0)
Jan 30 23:52:24 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:24Z|00122|binding|INFO|Setting lport 23c441d0-6579-44b1-a27f-a3856db44b73 down in Southbound
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.982 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.984 239942 INFO nova.virt.libvirt.driver [-] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Instance destroyed successfully.#033[00m
Jan 30 23:52:24 np0005603435 nova_compute[239938]: 2026-01-31 04:52:24.984 239942 DEBUG nova.objects.instance [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lazy-loading 'resources' on Instance uuid d99b6e7d-0d41-4261-8dc8-687109c9a0fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:52:24 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:24.984 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:65:c1 10.100.0.5'], port_security=['fa:16:3e:33:65:c1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd99b6e7d-0d41-4261-8dc8-687109c9a0fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f926501f874644cf9ffda466c84e710b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05382d3e-edd0-4646-aff2-95f9f0df0d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e385d9e-e365-4760-9c59-b6cbbb99eaf1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=23c441d0-6579-44b1-a27f-a3856db44b73) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:52:24 np0005603435 podman[258789]: 2026-01-31 04:52:24.991984257 +0000 UTC m=+0.134343809 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.004 239942 DEBUG nova.virt.libvirt.vif [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:50:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1521065022',display_name='tempest-TestStampPattern-server-1521065022',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1521065022',id=10,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE36VHJ+yy2SXlSY6zQF7e9BvcMYc8SPWqyVX2ZxItgDCfKt1gLcAFRAPVxsIPrChTqlOOAcxm0TrregMrTGHoD8jXmVh+9yf3UY3pMaZlSN/M9091Lc3gRO27izGQve6Q==',key_name='tempest-TestStampPattern-1698214235',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:50:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f926501f874644cf9ffda466c84e710b',ramdisk_id='',reservation_id='r-ao1tdelu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-567815244',owner_user_name='tempest-TestStampPattern-567815244-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:51:19Z,user_data=None,user_id='bb6c7d8ff99f43cb94670fd4096d652a',uuid=d99b6e7d-0d41-4261-8dc8-687109c9a0fa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "23c441d0-6579-44b1-a27f-a3856db44b73", "address": "fa:16:3e:33:65:c1", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23c441d0-65", "ovs_interfaceid": "23c441d0-6579-44b1-a27f-a3856db44b73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.004 239942 DEBUG nova.network.os_vif_util [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Converting VIF {"id": "23c441d0-6579-44b1-a27f-a3856db44b73", "address": "fa:16:3e:33:65:c1", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23c441d0-65", "ovs_interfaceid": "23c441d0-6579-44b1-a27f-a3856db44b73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.005 239942 DEBUG nova.network.os_vif_util [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:65:c1,bridge_name='br-int',has_traffic_filtering=True,id=23c441d0-6579-44b1-a27f-a3856db44b73,network=Network(55d16559-9723-4f0a-a23e-90d04ca1bb05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23c441d0-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.006 239942 DEBUG os_vif [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:65:c1,bridge_name='br-int',has_traffic_filtering=True,id=23c441d0-6579-44b1-a27f-a3856db44b73,network=Network(55d16559-9723-4f0a-a23e-90d04ca1bb05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23c441d0-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.009 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.010 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23c441d0-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.012 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:25 np0005603435 neutron-haproxy-ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05[257265]: [NOTICE]   (257270) : haproxy version is 2.8.14-c23fe91
Jan 30 23:52:25 np0005603435 neutron-haproxy-ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05[257265]: [NOTICE]   (257270) : path to executable is /usr/sbin/haproxy
Jan 30 23:52:25 np0005603435 neutron-haproxy-ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05[257265]: [WARNING]  (257270) : Exiting Master process...
Jan 30 23:52:25 np0005603435 neutron-haproxy-ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05[257265]: [WARNING]  (257270) : Exiting Master process...
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.014 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:52:25 np0005603435 neutron-haproxy-ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05[257265]: [ALERT]    (257270) : Current worker (257272) exited with code 143 (Terminated)
Jan 30 23:52:25 np0005603435 neutron-haproxy-ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05[257265]: [WARNING]  (257270) : All workers exited. Exiting... (0)
Jan 30 23:52:25 np0005603435 systemd[1]: libpod-3b8f9563e8bd9129b7843073ccf6a5313fa6755cb45525d046b18e5e432d857a.scope: Deactivated successfully.
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.017 239942 INFO os_vif [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:65:c1,bridge_name='br-int',has_traffic_filtering=True,id=23c441d0-6579-44b1-a27f-a3856db44b73,network=Network(55d16559-9723-4f0a-a23e-90d04ca1bb05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23c441d0-65')#033[00m
Jan 30 23:52:25 np0005603435 podman[258847]: 2026-01-31 04:52:25.024142296 +0000 UTC m=+0.072317516 container died 3b8f9563e8bd9129b7843073ccf6a5313fa6755cb45525d046b18e5e432d857a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:52:25 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3b8f9563e8bd9129b7843073ccf6a5313fa6755cb45525d046b18e5e432d857a-userdata-shm.mount: Deactivated successfully.
Jan 30 23:52:25 np0005603435 systemd[1]: var-lib-containers-storage-overlay-1074f65d3f52a38b156a3f7633b5bdcb82ed492f07d9ebe71c731c6826f9f27d-merged.mount: Deactivated successfully.
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.095 239942 DEBUG nova.compute.manager [req-d0b4feb9-ce9f-4589-a6ce-eaf438c6095e req-1c3fb9cb-c7ac-4ec4-9a23-dcd14f1a9c07 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received event network-vif-unplugged-23c441d0-6579-44b1-a27f-a3856db44b73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.096 239942 DEBUG oslo_concurrency.lockutils [req-d0b4feb9-ce9f-4589-a6ce-eaf438c6095e req-1c3fb9cb-c7ac-4ec4-9a23-dcd14f1a9c07 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.097 239942 DEBUG oslo_concurrency.lockutils [req-d0b4feb9-ce9f-4589-a6ce-eaf438c6095e req-1c3fb9cb-c7ac-4ec4-9a23-dcd14f1a9c07 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.097 239942 DEBUG oslo_concurrency.lockutils [req-d0b4feb9-ce9f-4589-a6ce-eaf438c6095e req-1c3fb9cb-c7ac-4ec4-9a23-dcd14f1a9c07 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.098 239942 DEBUG nova.compute.manager [req-d0b4feb9-ce9f-4589-a6ce-eaf438c6095e req-1c3fb9cb-c7ac-4ec4-9a23-dcd14f1a9c07 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] No waiting events found dispatching network-vif-unplugged-23c441d0-6579-44b1-a27f-a3856db44b73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.098 239942 DEBUG nova.compute.manager [req-d0b4feb9-ce9f-4589-a6ce-eaf438c6095e req-1c3fb9cb-c7ac-4ec4-9a23-dcd14f1a9c07 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received event network-vif-unplugged-23c441d0-6579-44b1-a27f-a3856db44b73 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:52:25 np0005603435 podman[258847]: 2026-01-31 04:52:25.177286926 +0000 UTC m=+0.225462186 container cleanup 3b8f9563e8bd9129b7843073ccf6a5313fa6755cb45525d046b18e5e432d857a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 30 23:52:25 np0005603435 systemd[1]: libpod-conmon-3b8f9563e8bd9129b7843073ccf6a5313fa6755cb45525d046b18e5e432d857a.scope: Deactivated successfully.
Jan 30 23:52:25 np0005603435 podman[258905]: 2026-01-31 04:52:25.266611249 +0000 UTC m=+0.062136716 container remove 3b8f9563e8bd9129b7843073ccf6a5313fa6755cb45525d046b18e5e432d857a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:52:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:25.270 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c7ca8b-0834-4181-b741-094b0ff98ef0]: (4, ('Sat Jan 31 04:52:24 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05 (3b8f9563e8bd9129b7843073ccf6a5313fa6755cb45525d046b18e5e432d857a)\n3b8f9563e8bd9129b7843073ccf6a5313fa6755cb45525d046b18e5e432d857a\nSat Jan 31 04:52:25 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05 (3b8f9563e8bd9129b7843073ccf6a5313fa6755cb45525d046b18e5e432d857a)\n3b8f9563e8bd9129b7843073ccf6a5313fa6755cb45525d046b18e5e432d857a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:25.272 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb35a63-f4f8-4b3d-986d-daf2815f2d76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:25.273 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55d16559-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.274 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:25 np0005603435 kernel: tap55d16559-90: left promiscuous mode
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.275 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:25.278 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7f08d3-ffd5-4d1c-9341-ca07128e123a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.282 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:25.289 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[78a3a59e-735b-438d-8eaf-774eb787d3dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:25.291 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b88c97-1266-4ef8-9406-36cc48ba163f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:25.303 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[11066061-4b2a-4529-b3f3-d777776b8754]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409808, 'reachable_time': 19388, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258921, 'error': None, 'target': 'ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:25.307 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-55d16559-9723-4f0a-a23e-90d04ca1bb05 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:52:25 np0005603435 systemd[1]: run-netns-ovnmeta\x2d55d16559\x2d9723\x2d4f0a\x2da23e\x2d90d04ca1bb05.mount: Deactivated successfully.
Jan 30 23:52:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:25.307 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7e008a-6a90-4618-a2a2-d67acb77e065]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:25.308 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 23c441d0-6579-44b1-a27f-a3856db44b73 in datapath 55d16559-9723-4f0a-a23e-90d04ca1bb05 unbound from our chassis#033[00m
Jan 30 23:52:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:25.309 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55d16559-9723-4f0a-a23e-90d04ca1bb05, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:52:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:25.310 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f8138479-5628-4efb-86a2-f96425544b6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:25.310 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 23c441d0-6579-44b1-a27f-a3856db44b73 in datapath 55d16559-9723-4f0a-a23e-90d04ca1bb05 unbound from our chassis#033[00m
Jan 30 23:52:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:25.311 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55d16559-9723-4f0a-a23e-90d04ca1bb05, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:52:25 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:25.312 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[983d8fbc-f149-4066-a198-b42e7258cf66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 160 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 952 KiB/s rd, 18 MiB/s wr, 318 op/s
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.415 239942 INFO nova.virt.libvirt.driver [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Deleting instance files /var/lib/nova/instances/d99b6e7d-0d41-4261-8dc8-687109c9a0fa_del#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.416 239942 INFO nova.virt.libvirt.driver [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Deletion of /var/lib/nova/instances/d99b6e7d-0d41-4261-8dc8-687109c9a0fa_del complete#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.495 239942 INFO nova.compute.manager [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.497 239942 DEBUG oslo.service.loopingcall [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.498 239942 DEBUG nova.compute.manager [-] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:52:25 np0005603435 nova_compute[239938]: 2026-01-31 04:52:25.498 239942 DEBUG nova.network.neutron [-] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.165 239942 DEBUG nova.network.neutron [-] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.187 239942 INFO nova.compute.manager [-] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Took 0.69 seconds to deallocate network for instance.#033[00m
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.242 239942 DEBUG oslo_concurrency.lockutils [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.242 239942 DEBUG oslo_concurrency.lockutils [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.304 239942 DEBUG oslo_concurrency.processutils [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.426 239942 DEBUG nova.network.neutron [req-0b315cfe-b3fd-41f2-9829-26a6e7d1880a req-8499124b-dc24-485c-b377-4969291ee048 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Updated VIF entry in instance network info cache for port 23c441d0-6579-44b1-a27f-a3856db44b73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.427 239942 DEBUG nova.network.neutron [req-0b315cfe-b3fd-41f2-9829-26a6e7d1880a req-8499124b-dc24-485c-b377-4969291ee048 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Updating instance_info_cache with network_info: [{"id": "23c441d0-6579-44b1-a27f-a3856db44b73", "address": "fa:16:3e:33:65:c1", "network": {"id": "55d16559-9723-4f0a-a23e-90d04ca1bb05", "bridge": "br-int", "label": "tempest-TestStampPattern-1726773278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f926501f874644cf9ffda466c84e710b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23c441d0-65", "ovs_interfaceid": "23c441d0-6579-44b1-a27f-a3856db44b73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.575 239942 DEBUG oslo_concurrency.lockutils [req-0b315cfe-b3fd-41f2-9829-26a6e7d1880a req-8499124b-dc24-485c-b377-4969291ee048 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-d99b6e7d-0d41-4261-8dc8-687109c9a0fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.819 239942 DEBUG nova.compute.manager [req-5d478472-c37c-45d8-8048-6d723abd6072 req-1703ed69-9586-4e3a-bf4e-9e6cf25eb38a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received event network-vif-deleted-23c441d0-6579-44b1-a27f-a3856db44b73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.820 239942 INFO nova.compute.manager [req-5d478472-c37c-45d8-8048-6d723abd6072 req-1703ed69-9586-4e3a-bf4e-9e6cf25eb38a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Neutron deleted interface 23c441d0-6579-44b1-a27f-a3856db44b73; detaching it from the instance and deleting it from the info cache#033[00m
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.820 239942 DEBUG nova.network.neutron [req-5d478472-c37c-45d8-8048-6d723abd6072 req-1703ed69-9586-4e3a-bf4e-9e6cf25eb38a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.842 239942 DEBUG nova.compute.manager [req-5d478472-c37c-45d8-8048-6d723abd6072 req-1703ed69-9586-4e3a-bf4e-9e6cf25eb38a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Detach interface failed, port_id=23c441d0-6579-44b1-a27f-a3856db44b73, reason: Instance d99b6e7d-0d41-4261-8dc8-687109c9a0fa could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 30 23:52:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:52:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2273644744' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.874 239942 DEBUG oslo_concurrency.processutils [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.881 239942 DEBUG nova.compute.provider_tree [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.900 239942 DEBUG nova.scheduler.client.report [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.926 239942 DEBUG oslo_concurrency.lockutils [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:26 np0005603435 nova_compute[239938]: 2026-01-31 04:52:26.964 239942 INFO nova.scheduler.client.report [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Deleted allocations for instance d99b6e7d-0d41-4261-8dc8-687109c9a0fa#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.029 239942 DEBUG oslo_concurrency.lockutils [None req-082b6b84-0e36-4930-8ff4-08bd7719f795 bb6c7d8ff99f43cb94670fd4096d652a f926501f874644cf9ffda466c84e710b - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.158 239942 DEBUG nova.compute.manager [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received event network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.159 239942 DEBUG oslo_concurrency.lockutils [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.159 239942 DEBUG oslo_concurrency.lockutils [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.159 239942 DEBUG oslo_concurrency.lockutils [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.159 239942 DEBUG nova.compute.manager [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] No waiting events found dispatching network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.160 239942 WARNING nova.compute.manager [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received unexpected event network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.160 239942 DEBUG nova.compute.manager [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received event network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.160 239942 DEBUG oslo_concurrency.lockutils [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.160 239942 DEBUG oslo_concurrency.lockutils [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.160 239942 DEBUG oslo_concurrency.lockutils [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.160 239942 DEBUG nova.compute.manager [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] No waiting events found dispatching network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.161 239942 WARNING nova.compute.manager [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received unexpected event network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.161 239942 DEBUG nova.compute.manager [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received event network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.161 239942 DEBUG oslo_concurrency.lockutils [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.161 239942 DEBUG oslo_concurrency.lockutils [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.161 239942 DEBUG oslo_concurrency.lockutils [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.162 239942 DEBUG nova.compute.manager [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] No waiting events found dispatching network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.162 239942 WARNING nova.compute.manager [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received unexpected event network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.162 239942 DEBUG nova.compute.manager [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received event network-vif-unplugged-23c441d0-6579-44b1-a27f-a3856db44b73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.162 239942 DEBUG oslo_concurrency.lockutils [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.162 239942 DEBUG oslo_concurrency.lockutils [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.163 239942 DEBUG oslo_concurrency.lockutils [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.163 239942 DEBUG nova.compute.manager [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] No waiting events found dispatching network-vif-unplugged-23c441d0-6579-44b1-a27f-a3856db44b73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.163 239942 WARNING nova.compute.manager [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received unexpected event network-vif-unplugged-23c441d0-6579-44b1-a27f-a3856db44b73 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.163 239942 DEBUG nova.compute.manager [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received event network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.163 239942 DEBUG oslo_concurrency.lockutils [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.163 239942 DEBUG oslo_concurrency.lockutils [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.164 239942 DEBUG oslo_concurrency.lockutils [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "d99b6e7d-0d41-4261-8dc8-687109c9a0fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.164 239942 DEBUG nova.compute.manager [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] No waiting events found dispatching network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:52:27 np0005603435 nova_compute[239938]: 2026-01-31 04:52:27.164 239942 WARNING nova.compute.manager [req-43c9d585-dc58-4e79-8410-943b6975a2f0 req-53bb987a-3f13-4800-8d89-c964ef079527 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Received unexpected event network-vif-plugged-23c441d0-6579-44b1-a27f-a3856db44b73 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:52:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 104 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 15 MiB/s wr, 310 op/s
Jan 30 23:52:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:52:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Jan 30 23:52:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Jan 30 23:52:28 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Jan 30 23:52:28 np0005603435 podman[259040]: 2026-01-31 04:52:28.134268123 +0000 UTC m=+0.072288156 container exec 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 30 23:52:28 np0005603435 podman[259040]: 2026-01-31 04:52:28.268203831 +0000 UTC m=+0.206223834 container exec_died 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:52:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:52:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/191032951' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:52:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:52:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/191032951' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:52:29 np0005603435 nova_compute[239938]: 2026-01-31 04:52:29.274 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835134.2733643, 175b46aa-ae57-41db-b77d-c8cdb978701f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:52:29 np0005603435 nova_compute[239938]: 2026-01-31 04:52:29.275 239942 INFO nova.compute.manager [-] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:52:29 np0005603435 nova_compute[239938]: 2026-01-31 04:52:29.306 239942 DEBUG nova.compute.manager [None req-429b431b-a436-40fc-9ed8-914ebaa1a038 - - - - - -] [instance: 175b46aa-ae57-41db-b77d-c8cdb978701f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:52:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 104 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.5 MiB/s wr, 263 op/s
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:52:29 np0005603435 nova_compute[239938]: 2026-01-31 04:52:29.727 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:52:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:52:30 np0005603435 nova_compute[239938]: 2026-01-31 04:52:30.012 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:30 np0005603435 podman[259372]: 2026-01-31 04:52:30.160638563 +0000 UTC m=+0.062547926 container create 67814ec35ab578932daa3743f26f068959b6b29a815ce822e1eec5452d58f086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 30 23:52:30 np0005603435 systemd[1]: Started libpod-conmon-67814ec35ab578932daa3743f26f068959b6b29a815ce822e1eec5452d58f086.scope.
Jan 30 23:52:30 np0005603435 podman[259372]: 2026-01-31 04:52:30.135988798 +0000 UTC m=+0.037898221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:52:30 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:52:30 np0005603435 podman[259372]: 2026-01-31 04:52:30.254882247 +0000 UTC m=+0.156791650 container init 67814ec35ab578932daa3743f26f068959b6b29a815ce822e1eec5452d58f086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 30 23:52:30 np0005603435 podman[259372]: 2026-01-31 04:52:30.263054808 +0000 UTC m=+0.164964141 container start 67814ec35ab578932daa3743f26f068959b6b29a815ce822e1eec5452d58f086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 30 23:52:30 np0005603435 podman[259372]: 2026-01-31 04:52:30.268468771 +0000 UTC m=+0.170378174 container attach 67814ec35ab578932daa3743f26f068959b6b29a815ce822e1eec5452d58f086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hugle, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:52:30 np0005603435 upbeat_hugle[259388]: 167 167
Jan 30 23:52:30 np0005603435 systemd[1]: libpod-67814ec35ab578932daa3743f26f068959b6b29a815ce822e1eec5452d58f086.scope: Deactivated successfully.
Jan 30 23:52:30 np0005603435 podman[259372]: 2026-01-31 04:52:30.269628729 +0000 UTC m=+0.171538092 container died 67814ec35ab578932daa3743f26f068959b6b29a815ce822e1eec5452d58f086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hugle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:52:30 np0005603435 systemd[1]: var-lib-containers-storage-overlay-6ad17ada175e0728cf83d3355dcaa674de632ed05179acfbc94c52a47894fdca-merged.mount: Deactivated successfully.
Jan 30 23:52:30 np0005603435 podman[259372]: 2026-01-31 04:52:30.329172211 +0000 UTC m=+0.231081574 container remove 67814ec35ab578932daa3743f26f068959b6b29a815ce822e1eec5452d58f086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hugle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:52:30 np0005603435 systemd[1]: libpod-conmon-67814ec35ab578932daa3743f26f068959b6b29a815ce822e1eec5452d58f086.scope: Deactivated successfully.
Jan 30 23:52:30 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:52:30 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:52:30 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:52:30 np0005603435 podman[259412]: 2026-01-31 04:52:30.489869246 +0000 UTC m=+0.047732793 container create 6f210a6d28924e7817bd1a2eda4a1d31a27e937841aff48f0d8091d9f237fbb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_poincare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:52:30 np0005603435 systemd[1]: Started libpod-conmon-6f210a6d28924e7817bd1a2eda4a1d31a27e937841aff48f0d8091d9f237fbb9.scope.
Jan 30 23:52:30 np0005603435 podman[259412]: 2026-01-31 04:52:30.463254563 +0000 UTC m=+0.021118110 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:52:30 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:52:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeda6746e3744f8762ff640cd1a10c48807dc47a1c03058f5e87995a8411ea5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:52:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeda6746e3744f8762ff640cd1a10c48807dc47a1c03058f5e87995a8411ea5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:52:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeda6746e3744f8762ff640cd1a10c48807dc47a1c03058f5e87995a8411ea5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:52:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeda6746e3744f8762ff640cd1a10c48807dc47a1c03058f5e87995a8411ea5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:52:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeda6746e3744f8762ff640cd1a10c48807dc47a1c03058f5e87995a8411ea5b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:52:30 np0005603435 podman[259412]: 2026-01-31 04:52:30.587866132 +0000 UTC m=+0.145729689 container init 6f210a6d28924e7817bd1a2eda4a1d31a27e937841aff48f0d8091d9f237fbb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:52:30 np0005603435 podman[259412]: 2026-01-31 04:52:30.60163937 +0000 UTC m=+0.159502907 container start 6f210a6d28924e7817bd1a2eda4a1d31a27e937841aff48f0d8091d9f237fbb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_poincare, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 30 23:52:30 np0005603435 podman[259412]: 2026-01-31 04:52:30.60652657 +0000 UTC m=+0.164390157 container attach 6f210a6d28924e7817bd1a2eda4a1d31a27e937841aff48f0d8091d9f237fbb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:52:31 np0005603435 condescending_poincare[259429]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:52:31 np0005603435 condescending_poincare[259429]: --> All data devices are unavailable
Jan 30 23:52:31 np0005603435 systemd[1]: libpod-6f210a6d28924e7817bd1a2eda4a1d31a27e937841aff48f0d8091d9f237fbb9.scope: Deactivated successfully.
Jan 30 23:52:31 np0005603435 podman[259412]: 2026-01-31 04:52:31.103116741 +0000 UTC m=+0.660980278 container died 6f210a6d28924e7817bd1a2eda4a1d31a27e937841aff48f0d8091d9f237fbb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:52:31 np0005603435 systemd[1]: var-lib-containers-storage-overlay-eeda6746e3744f8762ff640cd1a10c48807dc47a1c03058f5e87995a8411ea5b-merged.mount: Deactivated successfully.
Jan 30 23:52:31 np0005603435 podman[259412]: 2026-01-31 04:52:31.155805085 +0000 UTC m=+0.713668622 container remove 6f210a6d28924e7817bd1a2eda4a1d31a27e937841aff48f0d8091d9f237fbb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_poincare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:52:31 np0005603435 systemd[1]: libpod-conmon-6f210a6d28924e7817bd1a2eda4a1d31a27e937841aff48f0d8091d9f237fbb9.scope: Deactivated successfully.
Jan 30 23:52:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 94 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 68 KiB/s wr, 77 op/s
Jan 30 23:52:31 np0005603435 podman[259523]: 2026-01-31 04:52:31.656893207 +0000 UTC m=+0.054040718 container create 30606a2220f1175d0454c0b57f973d720546146d243e72c28af086b8138ae5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_shaw, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 30 23:52:31 np0005603435 systemd[1]: Started libpod-conmon-30606a2220f1175d0454c0b57f973d720546146d243e72c28af086b8138ae5c6.scope.
Jan 30 23:52:31 np0005603435 nova_compute[239938]: 2026-01-31 04:52:31.703 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:31 np0005603435 podman[259523]: 2026-01-31 04:52:31.633315628 +0000 UTC m=+0.030463189 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:52:31 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:52:31 np0005603435 podman[259523]: 2026-01-31 04:52:31.752146436 +0000 UTC m=+0.149293947 container init 30606a2220f1175d0454c0b57f973d720546146d243e72c28af086b8138ae5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:52:31 np0005603435 podman[259523]: 2026-01-31 04:52:31.759964738 +0000 UTC m=+0.157112219 container start 30606a2220f1175d0454c0b57f973d720546146d243e72c28af086b8138ae5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_shaw, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:52:31 np0005603435 podman[259523]: 2026-01-31 04:52:31.763992207 +0000 UTC m=+0.161139728 container attach 30606a2220f1175d0454c0b57f973d720546146d243e72c28af086b8138ae5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 30 23:52:31 np0005603435 kind_shaw[259538]: 167 167
Jan 30 23:52:31 np0005603435 systemd[1]: libpod-30606a2220f1175d0454c0b57f973d720546146d243e72c28af086b8138ae5c6.scope: Deactivated successfully.
Jan 30 23:52:31 np0005603435 podman[259523]: 2026-01-31 04:52:31.768433026 +0000 UTC m=+0.165580537 container died 30606a2220f1175d0454c0b57f973d720546146d243e72c28af086b8138ae5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_shaw, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 30 23:52:31 np0005603435 nova_compute[239938]: 2026-01-31 04:52:31.795 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:31 np0005603435 systemd[1]: var-lib-containers-storage-overlay-47062c1fce9bd501a9c78a84fe160b7693ead0d332521ddb9009a02cd05d4eb9-merged.mount: Deactivated successfully.
Jan 30 23:52:31 np0005603435 podman[259523]: 2026-01-31 04:52:31.824858491 +0000 UTC m=+0.222006002 container remove 30606a2220f1175d0454c0b57f973d720546146d243e72c28af086b8138ae5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_shaw, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:52:31 np0005603435 systemd[1]: libpod-conmon-30606a2220f1175d0454c0b57f973d720546146d243e72c28af086b8138ae5c6.scope: Deactivated successfully.
Jan 30 23:52:32 np0005603435 podman[259565]: 2026-01-31 04:52:32.001756644 +0000 UTC m=+0.062813163 container create 4228b06704a5df153355d0aaa8a5c6b00351e2e6ca44e3df6b0c0ee6e426c09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_newton, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:52:32 np0005603435 systemd[1]: Started libpod-conmon-4228b06704a5df153355d0aaa8a5c6b00351e2e6ca44e3df6b0c0ee6e426c09e.scope.
Jan 30 23:52:32 np0005603435 podman[259565]: 2026-01-31 04:52:31.977367245 +0000 UTC m=+0.038423784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:52:32 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:52:32 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee577e8174a0578b813c7c9c71ed7d33f8e247a2ad1161917e0e86ce487f117f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:52:32 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee577e8174a0578b813c7c9c71ed7d33f8e247a2ad1161917e0e86ce487f117f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:52:32 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee577e8174a0578b813c7c9c71ed7d33f8e247a2ad1161917e0e86ce487f117f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:52:32 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee577e8174a0578b813c7c9c71ed7d33f8e247a2ad1161917e0e86ce487f117f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:52:32 np0005603435 podman[259565]: 2026-01-31 04:52:32.110912424 +0000 UTC m=+0.171968953 container init 4228b06704a5df153355d0aaa8a5c6b00351e2e6ca44e3df6b0c0ee6e426c09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_newton, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 30 23:52:32 np0005603435 podman[259565]: 2026-01-31 04:52:32.119408413 +0000 UTC m=+0.180464932 container start 4228b06704a5df153355d0aaa8a5c6b00351e2e6ca44e3df6b0c0ee6e426c09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_newton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:52:32 np0005603435 podman[259565]: 2026-01-31 04:52:32.122686313 +0000 UTC m=+0.183742892 container attach 4228b06704a5df153355d0aaa8a5c6b00351e2e6ca44e3df6b0c0ee6e426c09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_newton, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 30 23:52:32 np0005603435 exciting_newton[259581]: {
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:    "0": [
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:        {
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "devices": [
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "/dev/loop3"
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            ],
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "lv_name": "ceph_lv0",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "lv_size": "21470642176",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "name": "ceph_lv0",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "tags": {
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.cluster_name": "ceph",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.crush_device_class": "",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.encrypted": "0",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.objectstore": "bluestore",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.osd_id": "0",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.type": "block",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.vdo": "0",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.with_tpm": "0"
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            },
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "type": "block",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "vg_name": "ceph_vg0"
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:        }
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:    ],
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:    "1": [
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:        {
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "devices": [
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "/dev/loop4"
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            ],
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "lv_name": "ceph_lv1",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "lv_size": "21470642176",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "name": "ceph_lv1",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "tags": {
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.cluster_name": "ceph",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.crush_device_class": "",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.encrypted": "0",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.objectstore": "bluestore",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.osd_id": "1",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.type": "block",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.vdo": "0",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.with_tpm": "0"
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            },
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "type": "block",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "vg_name": "ceph_vg1"
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:        }
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:    ],
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:    "2": [
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:        {
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "devices": [
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "/dev/loop5"
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            ],
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "lv_name": "ceph_lv2",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "lv_size": "21470642176",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "name": "ceph_lv2",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "tags": {
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.cluster_name": "ceph",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.crush_device_class": "",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.encrypted": "0",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.objectstore": "bluestore",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.osd_id": "2",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.type": "block",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.vdo": "0",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:                "ceph.with_tpm": "0"
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            },
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "type": "block",
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:            "vg_name": "ceph_vg2"
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:        }
Jan 30 23:52:32 np0005603435 exciting_newton[259581]:    ]
Jan 30 23:52:32 np0005603435 exciting_newton[259581]: }
Jan 30 23:52:32 np0005603435 systemd[1]: libpod-4228b06704a5df153355d0aaa8a5c6b00351e2e6ca44e3df6b0c0ee6e426c09e.scope: Deactivated successfully.
Jan 30 23:52:32 np0005603435 podman[259565]: 2026-01-31 04:52:32.415483862 +0000 UTC m=+0.476540391 container died 4228b06704a5df153355d0aaa8a5c6b00351e2e6ca44e3df6b0c0ee6e426c09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_newton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Jan 30 23:52:32 np0005603435 systemd[1]: var-lib-containers-storage-overlay-ee577e8174a0578b813c7c9c71ed7d33f8e247a2ad1161917e0e86ce487f117f-merged.mount: Deactivated successfully.
Jan 30 23:52:32 np0005603435 podman[259565]: 2026-01-31 04:52:32.472107972 +0000 UTC m=+0.533164501 container remove 4228b06704a5df153355d0aaa8a5c6b00351e2e6ca44e3df6b0c0ee6e426c09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:52:32 np0005603435 systemd[1]: libpod-conmon-4228b06704a5df153355d0aaa8a5c6b00351e2e6ca44e3df6b0c0ee6e426c09e.scope: Deactivated successfully.
Jan 30 23:52:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:52:33 np0005603435 podman[259663]: 2026-01-31 04:52:33.003671402 +0000 UTC m=+0.057973414 container create d37bb169d232f1121d520c5e76c47dac93c8303c5bd3427ead395d84183e9fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:52:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:52:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3772642320' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:52:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:52:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3772642320' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:52:33 np0005603435 systemd[1]: Started libpod-conmon-d37bb169d232f1121d520c5e76c47dac93c8303c5bd3427ead395d84183e9fe3.scope.
Jan 30 23:52:33 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:52:33 np0005603435 podman[259663]: 2026-01-31 04:52:32.977356976 +0000 UTC m=+0.031659058 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:52:33 np0005603435 podman[259663]: 2026-01-31 04:52:33.088584857 +0000 UTC m=+0.142886909 container init d37bb169d232f1121d520c5e76c47dac93c8303c5bd3427ead395d84183e9fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 30 23:52:33 np0005603435 podman[259663]: 2026-01-31 04:52:33.096365098 +0000 UTC m=+0.150667120 container start d37bb169d232f1121d520c5e76c47dac93c8303c5bd3427ead395d84183e9fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:52:33 np0005603435 podman[259663]: 2026-01-31 04:52:33.100348366 +0000 UTC m=+0.154650378 container attach d37bb169d232f1121d520c5e76c47dac93c8303c5bd3427ead395d84183e9fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:52:33 np0005603435 systemd[1]: libpod-d37bb169d232f1121d520c5e76c47dac93c8303c5bd3427ead395d84183e9fe3.scope: Deactivated successfully.
Jan 30 23:52:33 np0005603435 thirsty_jackson[259679]: 167 167
Jan 30 23:52:33 np0005603435 conmon[259679]: conmon d37bb169d232f1121d52 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d37bb169d232f1121d520c5e76c47dac93c8303c5bd3427ead395d84183e9fe3.scope/container/memory.events
Jan 30 23:52:33 np0005603435 podman[259663]: 2026-01-31 04:52:33.102800076 +0000 UTC m=+0.157102088 container died d37bb169d232f1121d520c5e76c47dac93c8303c5bd3427ead395d84183e9fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:52:33 np0005603435 systemd[1]: var-lib-containers-storage-overlay-3a08be39012c335e32d91171926e897f78fc21926ee7bdae084571bd0f1a5006-merged.mount: Deactivated successfully.
Jan 30 23:52:33 np0005603435 podman[259663]: 2026-01-31 04:52:33.146831507 +0000 UTC m=+0.201133499 container remove d37bb169d232f1121d520c5e76c47dac93c8303c5bd3427ead395d84183e9fe3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 30 23:52:33 np0005603435 systemd[1]: libpod-conmon-d37bb169d232f1121d520c5e76c47dac93c8303c5bd3427ead395d84183e9fe3.scope: Deactivated successfully.
Jan 30 23:52:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 134 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 106 op/s
Jan 30 23:52:33 np0005603435 podman[259704]: 2026-01-31 04:52:33.281061043 +0000 UTC m=+0.035680497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:52:33 np0005603435 podman[259704]: 2026-01-31 04:52:33.525086284 +0000 UTC m=+0.279705698 container create 9e57d5af85606f657adbda3fc77c8fe68e8293899d724971a3336f80ccc97f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:52:33 np0005603435 systemd[1]: Started libpod-conmon-9e57d5af85606f657adbda3fc77c8fe68e8293899d724971a3336f80ccc97f5a.scope.
Jan 30 23:52:33 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:52:33 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49be63acd7581aba32bd1335662f3bd569211410c558645a84032c6449dee699/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:52:33 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49be63acd7581aba32bd1335662f3bd569211410c558645a84032c6449dee699/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:52:33 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49be63acd7581aba32bd1335662f3bd569211410c558645a84032c6449dee699/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:52:33 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49be63acd7581aba32bd1335662f3bd569211410c558645a84032c6449dee699/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:52:33 np0005603435 podman[259704]: 2026-01-31 04:52:33.639444311 +0000 UTC m=+0.394063725 container init 9e57d5af85606f657adbda3fc77c8fe68e8293899d724971a3336f80ccc97f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 30 23:52:33 np0005603435 podman[259704]: 2026-01-31 04:52:33.649870357 +0000 UTC m=+0.404489771 container start 9e57d5af85606f657adbda3fc77c8fe68e8293899d724971a3336f80ccc97f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:52:33 np0005603435 podman[259704]: 2026-01-31 04:52:33.65282464 +0000 UTC m=+0.407444094 container attach 9e57d5af85606f657adbda3fc77c8fe68e8293899d724971a3336f80ccc97f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 30 23:52:33 np0005603435 nova_compute[239938]: 2026-01-31 04:52:33.790 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "983eb240-9938-4bbf-aafb-2562f4738906" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:33 np0005603435 nova_compute[239938]: 2026-01-31 04:52:33.793 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:33 np0005603435 nova_compute[239938]: 2026-01-31 04:52:33.809 239942 DEBUG nova.compute.manager [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:52:33 np0005603435 nova_compute[239938]: 2026-01-31 04:52:33.896 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:33 np0005603435 nova_compute[239938]: 2026-01-31 04:52:33.897 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:33 np0005603435 nova_compute[239938]: 2026-01-31 04:52:33.911 239942 DEBUG nova.virt.hardware [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:52:33 np0005603435 nova_compute[239938]: 2026-01-31 04:52:33.911 239942 INFO nova.compute.claims [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:52:34 np0005603435 nova_compute[239938]: 2026-01-31 04:52:34.024 239942 DEBUG oslo_concurrency.processutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:52:34 np0005603435 lvm[259819]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:52:34 np0005603435 lvm[259819]: VG ceph_vg1 finished
Jan 30 23:52:34 np0005603435 lvm[259818]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:52:34 np0005603435 lvm[259818]: VG ceph_vg0 finished
Jan 30 23:52:34 np0005603435 lvm[259821]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:52:34 np0005603435 lvm[259821]: VG ceph_vg2 finished
Jan 30 23:52:34 np0005603435 beautiful_tu[259720]: {}
Jan 30 23:52:34 np0005603435 systemd[1]: libpod-9e57d5af85606f657adbda3fc77c8fe68e8293899d724971a3336f80ccc97f5a.scope: Deactivated successfully.
Jan 30 23:52:34 np0005603435 podman[259704]: 2026-01-31 04:52:34.342560453 +0000 UTC m=+1.097179877 container died 9e57d5af85606f657adbda3fc77c8fe68e8293899d724971a3336f80ccc97f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:52:34 np0005603435 systemd[1]: libpod-9e57d5af85606f657adbda3fc77c8fe68e8293899d724971a3336f80ccc97f5a.scope: Consumed 1.025s CPU time.
Jan 30 23:52:34 np0005603435 systemd[1]: var-lib-containers-storage-overlay-49be63acd7581aba32bd1335662f3bd569211410c558645a84032c6449dee699-merged.mount: Deactivated successfully.
Jan 30 23:52:34 np0005603435 podman[259704]: 2026-01-31 04:52:34.380356931 +0000 UTC m=+1.134976355 container remove 9e57d5af85606f657adbda3fc77c8fe68e8293899d724971a3336f80ccc97f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:52:34 np0005603435 systemd[1]: libpod-conmon-9e57d5af85606f657adbda3fc77c8fe68e8293899d724971a3336f80ccc97f5a.scope: Deactivated successfully.
Jan 30 23:52:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:52:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:52:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:52:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:52:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:52:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3278198118' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:52:34 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:52:34 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:52:34 np0005603435 nova_compute[239938]: 2026-01-31 04:52:34.570 239942 DEBUG oslo_concurrency.processutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:52:34 np0005603435 nova_compute[239938]: 2026-01-31 04:52:34.578 239942 DEBUG nova.compute.provider_tree [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:52:34 np0005603435 nova_compute[239938]: 2026-01-31 04:52:34.596 239942 DEBUG nova.scheduler.client.report [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:52:34 np0005603435 nova_compute[239938]: 2026-01-31 04:52:34.618 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:34 np0005603435 nova_compute[239938]: 2026-01-31 04:52:34.619 239942 DEBUG nova.compute.manager [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:52:34 np0005603435 nova_compute[239938]: 2026-01-31 04:52:34.659 239942 DEBUG nova.compute.manager [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:52:34 np0005603435 nova_compute[239938]: 2026-01-31 04:52:34.659 239942 DEBUG nova.network.neutron [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:52:34 np0005603435 nova_compute[239938]: 2026-01-31 04:52:34.676 239942 INFO nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:52:34 np0005603435 nova_compute[239938]: 2026-01-31 04:52:34.759 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:34 np0005603435 nova_compute[239938]: 2026-01-31 04:52:34.766 239942 DEBUG nova.compute.manager [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:52:34 np0005603435 nova_compute[239938]: 2026-01-31 04:52:34.969 239942 DEBUG nova.compute.manager [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:52:34 np0005603435 nova_compute[239938]: 2026-01-31 04:52:34.972 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:52:34 np0005603435 nova_compute[239938]: 2026-01-31 04:52:34.972 239942 INFO nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Creating image(s)#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.003 239942 DEBUG nova.storage.rbd_utils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] rbd image 983eb240-9938-4bbf-aafb-2562f4738906_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.036 239942 DEBUG nova.storage.rbd_utils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] rbd image 983eb240-9938-4bbf-aafb-2562f4738906_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.067 239942 DEBUG nova.storage.rbd_utils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] rbd image 983eb240-9938-4bbf-aafb-2562f4738906_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.071 239942 DEBUG oslo_concurrency.processutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.089 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.131 239942 DEBUG nova.policy [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd3612e26aca645d895f083e0d58dfd69', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f5ce1f57546045d891de80fbaff2512b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.143 239942 DEBUG oslo_concurrency.processutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.144 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.145 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.145 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.178 239942 DEBUG nova.storage.rbd_utils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] rbd image 983eb240-9938-4bbf-aafb-2562f4738906_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.182 239942 DEBUG oslo_concurrency.processutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 983eb240-9938-4bbf-aafb-2562f4738906_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:52:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 134 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.462 239942 DEBUG oslo_concurrency.processutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 983eb240-9938-4bbf-aafb-2562f4738906_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.530 239942 DEBUG nova.storage.rbd_utils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] resizing rbd image 983eb240-9938-4bbf-aafb-2562f4738906_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.605 239942 DEBUG nova.objects.instance [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lazy-loading 'migration_context' on Instance uuid 983eb240-9938-4bbf-aafb-2562f4738906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.707 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.708 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Ensure instance console log exists: /var/lib/nova/instances/983eb240-9938-4bbf-aafb-2562f4738906/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.709 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.709 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:35 np0005603435 nova_compute[239938]: 2026-01-31 04:52:35.710 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:36 np0005603435 nova_compute[239938]: 2026-01-31 04:52:36.694 239942 DEBUG nova.network.neutron [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Successfully created port: bfb7f68c-f1da-410f-b21f-2f029c653727 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:52:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:52:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:52:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:52:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:52:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:52:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:52:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:52:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2904027922' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:52:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:52:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2904027922' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:52:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 161 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 3.4 MiB/s wr, 98 op/s
Jan 30 23:52:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:52:37 np0005603435 nova_compute[239938]: 2026-01-31 04:52:37.980 239942 DEBUG nova.network.neutron [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Successfully updated port: bfb7f68c-f1da-410f-b21f-2f029c653727 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:52:37 np0005603435 nova_compute[239938]: 2026-01-31 04:52:37.995 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "refresh_cache-983eb240-9938-4bbf-aafb-2562f4738906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:52:37 np0005603435 nova_compute[239938]: 2026-01-31 04:52:37.995 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquired lock "refresh_cache-983eb240-9938-4bbf-aafb-2562f4738906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:52:37 np0005603435 nova_compute[239938]: 2026-01-31 04:52:37.996 239942 DEBUG nova.network.neutron [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:52:38 np0005603435 nova_compute[239938]: 2026-01-31 04:52:38.125 239942 DEBUG nova.compute.manager [req-8a31679b-f79f-4349-bb18-20435351ebac req-440cff61-5304-4ada-8df9-add5d72d83ce c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Received event network-changed-bfb7f68c-f1da-410f-b21f-2f029c653727 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:52:38 np0005603435 nova_compute[239938]: 2026-01-31 04:52:38.126 239942 DEBUG nova.compute.manager [req-8a31679b-f79f-4349-bb18-20435351ebac req-440cff61-5304-4ada-8df9-add5d72d83ce c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Refreshing instance network info cache due to event network-changed-bfb7f68c-f1da-410f-b21f-2f029c653727. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:52:38 np0005603435 nova_compute[239938]: 2026-01-31 04:52:38.127 239942 DEBUG oslo_concurrency.lockutils [req-8a31679b-f79f-4349-bb18-20435351ebac req-440cff61-5304-4ada-8df9-add5d72d83ce c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-983eb240-9938-4bbf-aafb-2562f4738906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:52:38 np0005603435 nova_compute[239938]: 2026-01-31 04:52:38.235 239942 DEBUG nova.network.neutron [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:52:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 161 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 3.0 MiB/s wr, 86 op/s
Jan 30 23:52:39 np0005603435 nova_compute[239938]: 2026-01-31 04:52:39.668 239942 DEBUG nova.network.neutron [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Updating instance_info_cache with network_info: [{"id": "bfb7f68c-f1da-410f-b21f-2f029c653727", "address": "fa:16:3e:30:ec:a1", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfb7f68c-f1", "ovs_interfaceid": "bfb7f68c-f1da-410f-b21f-2f029c653727", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:52:39 np0005603435 nova_compute[239938]: 2026-01-31 04:52:39.697 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Releasing lock "refresh_cache-983eb240-9938-4bbf-aafb-2562f4738906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:52:39 np0005603435 nova_compute[239938]: 2026-01-31 04:52:39.697 239942 DEBUG nova.compute.manager [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Instance network_info: |[{"id": "bfb7f68c-f1da-410f-b21f-2f029c653727", "address": "fa:16:3e:30:ec:a1", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfb7f68c-f1", "ovs_interfaceid": "bfb7f68c-f1da-410f-b21f-2f029c653727", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:52:39 np0005603435 nova_compute[239938]: 2026-01-31 04:52:39.698 239942 DEBUG oslo_concurrency.lockutils [req-8a31679b-f79f-4349-bb18-20435351ebac req-440cff61-5304-4ada-8df9-add5d72d83ce c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-983eb240-9938-4bbf-aafb-2562f4738906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:52:39 np0005603435 nova_compute[239938]: 2026-01-31 04:52:39.698 239942 DEBUG nova.network.neutron [req-8a31679b-f79f-4349-bb18-20435351ebac req-440cff61-5304-4ada-8df9-add5d72d83ce c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Refreshing network info cache for port bfb7f68c-f1da-410f-b21f-2f029c653727 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:52:39 np0005603435 nova_compute[239938]: 2026-01-31 04:52:39.704 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Start _get_guest_xml network_info=[{"id": "bfb7f68c-f1da-410f-b21f-2f029c653727", "address": "fa:16:3e:30:ec:a1", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfb7f68c-f1", "ovs_interfaceid": "bfb7f68c-f1da-410f-b21f-2f029c653727", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'guest_format': None, 'image_id': 'bf004ad8-fb70-4caa-9170-9f02e22d687d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:52:39 np0005603435 nova_compute[239938]: 2026-01-31 04:52:39.761 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:39 np0005603435 nova_compute[239938]: 2026-01-31 04:52:39.979 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835144.9764876, d99b6e7d-0d41-4261-8dc8-687109c9a0fa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:52:39 np0005603435 nova_compute[239938]: 2026-01-31 04:52:39.979 239942 INFO nova.compute.manager [-] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.003 239942 DEBUG nova.compute.manager [None req-7223155c-b30c-40b0-83de-318a95ab7c8e - - - - - -] [instance: d99b6e7d-0d41-4261-8dc8-687109c9a0fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.091 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.482 239942 WARNING nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.494 239942 DEBUG nova.virt.libvirt.host [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.495 239942 DEBUG nova.virt.libvirt.host [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.499 239942 DEBUG nova.virt.libvirt.host [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.499 239942 DEBUG nova.virt.libvirt.host [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.500 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.501 239942 DEBUG nova.virt.hardware [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.501 239942 DEBUG nova.virt.hardware [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.502 239942 DEBUG nova.virt.hardware [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.502 239942 DEBUG nova.virt.hardware [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.503 239942 DEBUG nova.virt.hardware [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.503 239942 DEBUG nova.virt.hardware [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.503 239942 DEBUG nova.virt.hardware [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.504 239942 DEBUG nova.virt.hardware [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.504 239942 DEBUG nova.virt.hardware [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.505 239942 DEBUG nova.virt.hardware [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.505 239942 DEBUG nova.virt.hardware [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:52:40 np0005603435 nova_compute[239938]: 2026-01-31 04:52:40.510 239942 DEBUG oslo_concurrency.processutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.050 239942 DEBUG nova.network.neutron [req-8a31679b-f79f-4349-bb18-20435351ebac req-440cff61-5304-4ada-8df9-add5d72d83ce c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Updated VIF entry in instance network info cache for port bfb7f68c-f1da-410f-b21f-2f029c653727. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.051 239942 DEBUG nova.network.neutron [req-8a31679b-f79f-4349-bb18-20435351ebac req-440cff61-5304-4ada-8df9-add5d72d83ce c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Updating instance_info_cache with network_info: [{"id": "bfb7f68c-f1da-410f-b21f-2f029c653727", "address": "fa:16:3e:30:ec:a1", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfb7f68c-f1", "ovs_interfaceid": "bfb7f68c-f1da-410f-b21f-2f029c653727", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.087 239942 DEBUG oslo_concurrency.lockutils [req-8a31679b-f79f-4349-bb18-20435351ebac req-440cff61-5304-4ada-8df9-add5d72d83ce c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-983eb240-9938-4bbf-aafb-2562f4738906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:52:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:52:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/121620065' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.114 239942 DEBUG oslo_concurrency.processutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.151 239942 DEBUG nova.storage.rbd_utils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] rbd image 983eb240-9938-4bbf-aafb-2562f4738906_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.155 239942 DEBUG oslo_concurrency.processutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:52:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 180 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 3.5 MiB/s wr, 84 op/s
Jan 30 23:52:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:52:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1526841240' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.678 239942 DEBUG oslo_concurrency.processutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.680 239942 DEBUG nova.virt.libvirt.vif [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:52:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-645802615',display_name='tempest-VolumesSnapshotTestJSON-instance-645802615',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-645802615',id=12,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCTPMmmOrA1f2jVTyA3tGLGMjAySp4aZ+VdZ8fjet3RqKBb0/kjoG0doqPFnesR+EfLEOfN+cvcwJJGpcSru7QxHSjki1L2h/tvVtt9benX3uAbjfsIDU2hfLwoHUyWsJg==',key_name='tempest-keypair-1786148444',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5ce1f57546045d891de80fbaff2512b',ramdisk_id='',reservation_id='r-pydrtovt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-541584434',owner_user_name='tempest-VolumesSnapshotTestJSON-541584434-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:52:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3612e26aca645d895f083e0d58dfd69',uuid=983eb240-9938-4bbf-aafb-2562f4738906,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bfb7f68c-f1da-410f-b21f-2f029c653727", "address": "fa:16:3e:30:ec:a1", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfb7f68c-f1", "ovs_interfaceid": "bfb7f68c-f1da-410f-b21f-2f029c653727", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.681 239942 DEBUG nova.network.os_vif_util [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Converting VIF {"id": "bfb7f68c-f1da-410f-b21f-2f029c653727", "address": "fa:16:3e:30:ec:a1", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfb7f68c-f1", "ovs_interfaceid": "bfb7f68c-f1da-410f-b21f-2f029c653727", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.682 239942 DEBUG nova.network.os_vif_util [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ec:a1,bridge_name='br-int',has_traffic_filtering=True,id=bfb7f68c-f1da-410f-b21f-2f029c653727,network=Network(45b5ded5-5fe4-488c-aa97-cad6ca9b361e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfb7f68c-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.683 239942 DEBUG nova.objects.instance [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lazy-loading 'pci_devices' on Instance uuid 983eb240-9938-4bbf-aafb-2562f4738906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.758 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  <uuid>983eb240-9938-4bbf-aafb-2562f4738906</uuid>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  <name>instance-0000000c</name>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <nova:name>tempest-VolumesSnapshotTestJSON-instance-645802615</nova:name>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:52:40</nova:creationTime>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:        <nova:user uuid="d3612e26aca645d895f083e0d58dfd69">tempest-VolumesSnapshotTestJSON-541584434-project-member</nova:user>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:        <nova:project uuid="f5ce1f57546045d891de80fbaff2512b">tempest-VolumesSnapshotTestJSON-541584434</nova:project>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <nova:root type="image" uuid="bf004ad8-fb70-4caa-9170-9f02e22d687d"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:        <nova:port uuid="bfb7f68c-f1da-410f-b21f-2f029c653727">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <entry name="serial">983eb240-9938-4bbf-aafb-2562f4738906</entry>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <entry name="uuid">983eb240-9938-4bbf-aafb-2562f4738906</entry>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/983eb240-9938-4bbf-aafb-2562f4738906_disk">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/983eb240-9938-4bbf-aafb-2562f4738906_disk.config">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:30:ec:a1"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <target dev="tapbfb7f68c-f1"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/983eb240-9938-4bbf-aafb-2562f4738906/console.log" append="off"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:52:41 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:52:41 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:52:41 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:52:41 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.760 239942 DEBUG nova.compute.manager [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Preparing to wait for external event network-vif-plugged-bfb7f68c-f1da-410f-b21f-2f029c653727 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.760 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "983eb240-9938-4bbf-aafb-2562f4738906-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.761 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.761 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.763 239942 DEBUG nova.virt.libvirt.vif [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:52:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-645802615',display_name='tempest-VolumesSnapshotTestJSON-instance-645802615',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-645802615',id=12,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCTPMmmOrA1f2jVTyA3tGLGMjAySp4aZ+VdZ8fjet3RqKBb0/kjoG0doqPFnesR+EfLEOfN+cvcwJJGpcSru7QxHSjki1L2h/tvVtt9benX3uAbjfsIDU2hfLwoHUyWsJg==',key_name='tempest-keypair-1786148444',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5ce1f57546045d891de80fbaff2512b',ramdisk_id='',reservation_id='r-pydrtovt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-541584434',owner_user_name='tempest-VolumesSnapshotTestJSON-541584434-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:52:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3612e26aca645d895f083e0d58dfd69',uuid=983eb240-9938-4bbf-aafb-2562f4738906,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bfb7f68c-f1da-410f-b21f-2f029c653727", "address": "fa:16:3e:30:ec:a1", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfb7f68c-f1", "ovs_interfaceid": "bfb7f68c-f1da-410f-b21f-2f029c653727", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.763 239942 DEBUG nova.network.os_vif_util [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Converting VIF {"id": "bfb7f68c-f1da-410f-b21f-2f029c653727", "address": "fa:16:3e:30:ec:a1", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfb7f68c-f1", "ovs_interfaceid": "bfb7f68c-f1da-410f-b21f-2f029c653727", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.764 239942 DEBUG nova.network.os_vif_util [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ec:a1,bridge_name='br-int',has_traffic_filtering=True,id=bfb7f68c-f1da-410f-b21f-2f029c653727,network=Network(45b5ded5-5fe4-488c-aa97-cad6ca9b361e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfb7f68c-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.765 239942 DEBUG os_vif [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ec:a1,bridge_name='br-int',has_traffic_filtering=True,id=bfb7f68c-f1da-410f-b21f-2f029c653727,network=Network(45b5ded5-5fe4-488c-aa97-cad6ca9b361e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfb7f68c-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.766 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.766 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.767 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.770 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.770 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfb7f68c-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.771 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbfb7f68c-f1, col_values=(('external_ids', {'iface-id': 'bfb7f68c-f1da-410f-b21f-2f029c653727', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:ec:a1', 'vm-uuid': '983eb240-9938-4bbf-aafb-2562f4738906'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.772 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:41 np0005603435 NetworkManager[49097]: <info>  [1769835161.7737] manager: (tapbfb7f68c-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.775 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.778 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.779 239942 INFO os_vif [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ec:a1,bridge_name='br-int',has_traffic_filtering=True,id=bfb7f68c-f1da-410f-b21f-2f029c653727,network=Network(45b5ded5-5fe4-488c-aa97-cad6ca9b361e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfb7f68c-f1')#033[00m
Jan 30 23:52:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:52:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/807454288' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:52:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:52:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/807454288' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.904 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.905 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.905 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] No VIF found with MAC fa:16:3e:30:ec:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.906 239942 INFO nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Using config drive#033[00m
Jan 30 23:52:41 np0005603435 nova_compute[239938]: 2026-01-31 04:52:41.937 239942 DEBUG nova.storage.rbd_utils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] rbd image 983eb240-9938-4bbf-aafb-2562f4738906_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:52:42 np0005603435 nova_compute[239938]: 2026-01-31 04:52:42.629 239942 INFO nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Creating config drive at /var/lib/nova/instances/983eb240-9938-4bbf-aafb-2562f4738906/disk.config#033[00m
Jan 30 23:52:42 np0005603435 nova_compute[239938]: 2026-01-31 04:52:42.636 239942 DEBUG oslo_concurrency.processutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/983eb240-9938-4bbf-aafb-2562f4738906/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgo4xmsf1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:52:42 np0005603435 nova_compute[239938]: 2026-01-31 04:52:42.766 239942 DEBUG oslo_concurrency.processutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/983eb240-9938-4bbf-aafb-2562f4738906/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgo4xmsf1" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:52:42 np0005603435 nova_compute[239938]: 2026-01-31 04:52:42.823 239942 DEBUG nova.storage.rbd_utils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] rbd image 983eb240-9938-4bbf-aafb-2562f4738906_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:52:42 np0005603435 nova_compute[239938]: 2026-01-31 04:52:42.828 239942 DEBUG oslo_concurrency.processutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/983eb240-9938-4bbf-aafb-2562f4738906/disk.config 983eb240-9938-4bbf-aafb-2562f4738906_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:52:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:52:42 np0005603435 nova_compute[239938]: 2026-01-31 04:52:42.983 239942 DEBUG oslo_concurrency.processutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/983eb240-9938-4bbf-aafb-2562f4738906/disk.config 983eb240-9938-4bbf-aafb-2562f4738906_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:52:42 np0005603435 nova_compute[239938]: 2026-01-31 04:52:42.984 239942 INFO nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Deleting local config drive /var/lib/nova/instances/983eb240-9938-4bbf-aafb-2562f4738906/disk.config because it was imported into RBD.#033[00m
Jan 30 23:52:43 np0005603435 kernel: tapbfb7f68c-f1: entered promiscuous mode
Jan 30 23:52:43 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:43Z|00123|binding|INFO|Claiming lport bfb7f68c-f1da-410f-b21f-2f029c653727 for this chassis.
Jan 30 23:52:43 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:43Z|00124|binding|INFO|bfb7f68c-f1da-410f-b21f-2f029c653727: Claiming fa:16:3e:30:ec:a1 10.100.0.3
Jan 30 23:52:43 np0005603435 NetworkManager[49097]: <info>  [1769835163.0417] manager: (tapbfb7f68c-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Jan 30 23:52:43 np0005603435 nova_compute[239938]: 2026-01-31 04:52:43.040 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:43 np0005603435 nova_compute[239938]: 2026-01-31 04:52:43.049 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:43 np0005603435 nova_compute[239938]: 2026-01-31 04:52:43.051 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.063 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ec:a1 10.100.0.3'], port_security=['fa:16:3e:30:ec:a1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '983eb240-9938-4bbf-aafb-2562f4738906', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45b5ded5-5fe4-488c-aa97-cad6ca9b361e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5ce1f57546045d891de80fbaff2512b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c96dd04a-6a2a-42a9-8341-daa2f64b40ac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa479721-2329-4784-af95-25b103421212, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=bfb7f68c-f1da-410f-b21f-2f029c653727) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.065 156017 INFO neutron.agent.ovn.metadata.agent [-] Port bfb7f68c-f1da-410f-b21f-2f029c653727 in datapath 45b5ded5-5fe4-488c-aa97-cad6ca9b361e bound to our chassis#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.067 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45b5ded5-5fe4-488c-aa97-cad6ca9b361e#033[00m
Jan 30 23:52:43 np0005603435 systemd-machined[208030]: New machine qemu-12-instance-0000000c.
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.080 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[39ed4e9c-c8ea-46aa-871a-930098f2c907]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.081 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45b5ded5-51 in ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.084 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45b5ded5-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.085 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6e383bea-c921-4a29-8281-de7ed33a6ee9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.086 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed33e75-cab4-4e63-91a2-25cbbf1b37c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.097 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[62f441dd-95da-46d8-a03a-7d77a3f0a1e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:43 np0005603435 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Jan 30 23:52:43 np0005603435 nova_compute[239938]: 2026-01-31 04:52:43.102 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:43 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:43Z|00125|binding|INFO|Setting lport bfb7f68c-f1da-410f-b21f-2f029c653727 ovn-installed in OVS
Jan 30 23:52:43 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:43Z|00126|binding|INFO|Setting lport bfb7f68c-f1da-410f-b21f-2f029c653727 up in Southbound
Jan 30 23:52:43 np0005603435 nova_compute[239938]: 2026-01-31 04:52:43.112 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:43 np0005603435 systemd-udevd[260171]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.124 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[3c77ee81-41d5-4e14-8afa-91eb3c32ebf9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:43 np0005603435 NetworkManager[49097]: <info>  [1769835163.1324] device (tapbfb7f68c-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:52:43 np0005603435 NetworkManager[49097]: <info>  [1769835163.1339] device (tapbfb7f68c-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.156 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[e889d048-11e9-478c-8986-23b05b5f513f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:43 np0005603435 systemd-udevd[260174]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:52:43 np0005603435 NetworkManager[49097]: <info>  [1769835163.1648] manager: (tap45b5ded5-50): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.163 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[22c8fd76-15bd-4bb8-aa76-3416a2712d8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.194 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[e33bb1d2-5a30-403f-a508-f809b9086a06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.198 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8c9ef2-d2f3-4144-aec3-a7544641690c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:43 np0005603435 NetworkManager[49097]: <info>  [1769835163.2239] device (tap45b5ded5-50): carrier: link connected
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.228 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[2607a055-193b-4679-8796-e001dbff12cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.245 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f5edc898-da30-405f-a88c-e66e370a3ad9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45b5ded5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:6d:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421762, 'reachable_time': 25294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260201, 'error': None, 'target': 'ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.264 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[0fa25b13-47b2-4849-ad46-254ebfd038fd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:6d7b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 421762, 'tstamp': 421762}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260202, 'error': None, 'target': 'ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.280 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c3eeac0a-88e0-45aa-a480-7356bccc1640]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45b5ded5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:6d:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421762, 'reachable_time': 25294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260203, 'error': None, 'target': 'ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.309 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[661b3389-2e22-4074-b495-10981eed16f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 180 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.5 MiB/s wr, 86 op/s
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.368 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c203883e-e695-49d0-8528-a5fa92f0fc10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.369 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45b5ded5-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.370 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.370 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45b5ded5-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:52:43 np0005603435 kernel: tap45b5ded5-50: entered promiscuous mode
Jan 30 23:52:43 np0005603435 NetworkManager[49097]: <info>  [1769835163.3879] manager: (tap45b5ded5-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Jan 30 23:52:43 np0005603435 nova_compute[239938]: 2026-01-31 04:52:43.388 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:43 np0005603435 nova_compute[239938]: 2026-01-31 04:52:43.389 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.391 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45b5ded5-50, col_values=(('external_ids', {'iface-id': '3f9b28f1-1e76-45d9-9277-3ccd8b8d89cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:52:43 np0005603435 nova_compute[239938]: 2026-01-31 04:52:43.392 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:43 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:43Z|00127|binding|INFO|Releasing lport 3f9b28f1-1e76-45d9-9277-3ccd8b8d89cf from this chassis (sb_readonly=0)
Jan 30 23:52:43 np0005603435 nova_compute[239938]: 2026-01-31 04:52:43.393 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.393 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45b5ded5-5fe4-488c-aa97-cad6ca9b361e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45b5ded5-5fe4-488c-aa97-cad6ca9b361e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.394 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2e21a72a-25e8-4cc0-9961-a9c7b31d8c2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.395 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-45b5ded5-5fe4-488c-aa97-cad6ca9b361e
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/45b5ded5-5fe4-488c-aa97-cad6ca9b361e.pid.haproxy
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 45b5ded5-5fe4-488c-aa97-cad6ca9b361e
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:52:43 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:43.396 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e', 'env', 'PROCESS_TAG=haproxy-45b5ded5-5fe4-488c-aa97-cad6ca9b361e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45b5ded5-5fe4-488c-aa97-cad6ca9b361e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:52:43 np0005603435 nova_compute[239938]: 2026-01-31 04:52:43.400 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:43 np0005603435 nova_compute[239938]: 2026-01-31 04:52:43.542 239942 DEBUG nova.compute.manager [req-1b95f14e-e339-488a-b211-bfb7a4198501 req-c95eed13-df4e-4a41-95c6-757f14acdd7b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Received event network-vif-plugged-bfb7f68c-f1da-410f-b21f-2f029c653727 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:52:43 np0005603435 nova_compute[239938]: 2026-01-31 04:52:43.543 239942 DEBUG oslo_concurrency.lockutils [req-1b95f14e-e339-488a-b211-bfb7a4198501 req-c95eed13-df4e-4a41-95c6-757f14acdd7b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "983eb240-9938-4bbf-aafb-2562f4738906-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:43 np0005603435 nova_compute[239938]: 2026-01-31 04:52:43.544 239942 DEBUG oslo_concurrency.lockutils [req-1b95f14e-e339-488a-b211-bfb7a4198501 req-c95eed13-df4e-4a41-95c6-757f14acdd7b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:43 np0005603435 nova_compute[239938]: 2026-01-31 04:52:43.544 239942 DEBUG oslo_concurrency.lockutils [req-1b95f14e-e339-488a-b211-bfb7a4198501 req-c95eed13-df4e-4a41-95c6-757f14acdd7b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:43 np0005603435 nova_compute[239938]: 2026-01-31 04:52:43.545 239942 DEBUG nova.compute.manager [req-1b95f14e-e339-488a-b211-bfb7a4198501 req-c95eed13-df4e-4a41-95c6-757f14acdd7b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Processing event network-vif-plugged-bfb7f68c-f1da-410f-b21f-2f029c653727 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:52:43 np0005603435 podman[260235]: 2026-01-31 04:52:43.68012842 +0000 UTC m=+0.040920735 container create 49de8e579bb5bbe69319b4b737d5305416ab710559b6dcb4905eb7c7ba17e0c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:52:43 np0005603435 systemd[1]: Started libpod-conmon-49de8e579bb5bbe69319b4b737d5305416ab710559b6dcb4905eb7c7ba17e0c9.scope.
Jan 30 23:52:43 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:52:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a369827878f3a4c05648bb1ffb50d828f611bb67575da9a7a16ac00d3b0e0966/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:52:43 np0005603435 podman[260235]: 2026-01-31 04:52:43.659691979 +0000 UTC m=+0.020484304 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:52:43 np0005603435 podman[260235]: 2026-01-31 04:52:43.759816027 +0000 UTC m=+0.120608352 container init 49de8e579bb5bbe69319b4b737d5305416ab710559b6dcb4905eb7c7ba17e0c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 30 23:52:43 np0005603435 podman[260235]: 2026-01-31 04:52:43.764352348 +0000 UTC m=+0.125144673 container start 49de8e579bb5bbe69319b4b737d5305416ab710559b6dcb4905eb7c7ba17e0c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:52:43 np0005603435 neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e[260250]: [NOTICE]   (260254) : New worker (260256) forked
Jan 30 23:52:43 np0005603435 neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e[260250]: [NOTICE]   (260254) : Loading success.
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.117 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835164.1164339, 983eb240-9938-4bbf-aafb-2562f4738906 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.117 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] VM Started (Lifecycle Event)#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.120 239942 DEBUG nova.compute.manager [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.125 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.129 239942 INFO nova.virt.libvirt.driver [-] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Instance spawned successfully.#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.129 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.137 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.140 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.152 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.153 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.153 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.154 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.154 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.154 239942 DEBUG nova.virt.libvirt.driver [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.162 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.162 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835164.1180744, 983eb240-9938-4bbf-aafb-2562f4738906 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.162 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.193 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.195 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835164.1242943, 983eb240-9938-4bbf-aafb-2562f4738906 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.195 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.223 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.225 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.245 239942 INFO nova.compute.manager [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Took 9.27 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.246 239942 DEBUG nova.compute.manager [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.258 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.321 239942 INFO nova.compute.manager [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Took 10.46 seconds to build instance.#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.340 239942 DEBUG oslo_concurrency.lockutils [None req-d7c347f5-7b54-4eb5-98c0-5000996894f7 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:44 np0005603435 nova_compute[239938]: 2026-01-31 04:52:44.763 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 180 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 814 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 30 23:52:45 np0005603435 nova_compute[239938]: 2026-01-31 04:52:45.679 239942 DEBUG nova.compute.manager [req-79fa202e-3dc9-4fb7-b989-66ad2e8ba283 req-dd6dfadc-50b8-4b92-8fb1-2681cea38f75 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Received event network-vif-plugged-bfb7f68c-f1da-410f-b21f-2f029c653727 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:52:45 np0005603435 nova_compute[239938]: 2026-01-31 04:52:45.679 239942 DEBUG oslo_concurrency.lockutils [req-79fa202e-3dc9-4fb7-b989-66ad2e8ba283 req-dd6dfadc-50b8-4b92-8fb1-2681cea38f75 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "983eb240-9938-4bbf-aafb-2562f4738906-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:45 np0005603435 nova_compute[239938]: 2026-01-31 04:52:45.680 239942 DEBUG oslo_concurrency.lockutils [req-79fa202e-3dc9-4fb7-b989-66ad2e8ba283 req-dd6dfadc-50b8-4b92-8fb1-2681cea38f75 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:45 np0005603435 nova_compute[239938]: 2026-01-31 04:52:45.680 239942 DEBUG oslo_concurrency.lockutils [req-79fa202e-3dc9-4fb7-b989-66ad2e8ba283 req-dd6dfadc-50b8-4b92-8fb1-2681cea38f75 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:45 np0005603435 nova_compute[239938]: 2026-01-31 04:52:45.680 239942 DEBUG nova.compute.manager [req-79fa202e-3dc9-4fb7-b989-66ad2e8ba283 req-dd6dfadc-50b8-4b92-8fb1-2681cea38f75 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] No waiting events found dispatching network-vif-plugged-bfb7f68c-f1da-410f-b21f-2f029c653727 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:52:45 np0005603435 nova_compute[239938]: 2026-01-31 04:52:45.681 239942 WARNING nova.compute.manager [req-79fa202e-3dc9-4fb7-b989-66ad2e8ba283 req-dd6dfadc-50b8-4b92-8fb1-2681cea38f75 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Received unexpected event network-vif-plugged-bfb7f68c-f1da-410f-b21f-2f029c653727 for instance with vm_state active and task_state None.#033[00m
Jan 30 23:52:46 np0005603435 NetworkManager[49097]: <info>  [1769835166.2600] manager: (patch-provnet-60fd0649-1231-4daa-859b-756d523d6d78-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Jan 30 23:52:46 np0005603435 NetworkManager[49097]: <info>  [1769835166.2613] manager: (patch-br-int-to-provnet-60fd0649-1231-4daa-859b-756d523d6d78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Jan 30 23:52:46 np0005603435 nova_compute[239938]: 2026-01-31 04:52:46.258 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:46 np0005603435 nova_compute[239938]: 2026-01-31 04:52:46.304 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:46 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:46Z|00128|binding|INFO|Releasing lport 3f9b28f1-1e76-45d9-9277-3ccd8b8d89cf from this chassis (sb_readonly=0)
Jan 30 23:52:46 np0005603435 nova_compute[239938]: 2026-01-31 04:52:46.315 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:46 np0005603435 nova_compute[239938]: 2026-01-31 04:52:46.773 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 180 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Jan 30 23:52:47 np0005603435 nova_compute[239938]: 2026-01-31 04:52:47.836 239942 DEBUG nova.compute.manager [req-f645a356-a4a0-44ea-b29a-dfd47b407173 req-a4a94b35-f302-4e06-8b8c-a47d2f421434 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Received event network-changed-bfb7f68c-f1da-410f-b21f-2f029c653727 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:52:47 np0005603435 nova_compute[239938]: 2026-01-31 04:52:47.837 239942 DEBUG nova.compute.manager [req-f645a356-a4a0-44ea-b29a-dfd47b407173 req-a4a94b35-f302-4e06-8b8c-a47d2f421434 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Refreshing instance network info cache due to event network-changed-bfb7f68c-f1da-410f-b21f-2f029c653727. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:52:47 np0005603435 nova_compute[239938]: 2026-01-31 04:52:47.837 239942 DEBUG oslo_concurrency.lockutils [req-f645a356-a4a0-44ea-b29a-dfd47b407173 req-a4a94b35-f302-4e06-8b8c-a47d2f421434 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-983eb240-9938-4bbf-aafb-2562f4738906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:52:47 np0005603435 nova_compute[239938]: 2026-01-31 04:52:47.837 239942 DEBUG oslo_concurrency.lockutils [req-f645a356-a4a0-44ea-b29a-dfd47b407173 req-a4a94b35-f302-4e06-8b8c-a47d2f421434 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-983eb240-9938-4bbf-aafb-2562f4738906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:52:47 np0005603435 nova_compute[239938]: 2026-01-31 04:52:47.837 239942 DEBUG nova.network.neutron [req-f645a356-a4a0-44ea-b29a-dfd47b407173 req-a4a94b35-f302-4e06-8b8c-a47d2f421434 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Refreshing network info cache for port bfb7f68c-f1da-410f-b21f-2f029c653727 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:52:47 np0005603435 nova_compute[239938]: 2026-01-31 04:52:47.954 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:52:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 180 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 723 KiB/s wr, 100 op/s
Jan 30 23:52:49 np0005603435 nova_compute[239938]: 2026-01-31 04:52:49.642 239942 DEBUG nova.network.neutron [req-f645a356-a4a0-44ea-b29a-dfd47b407173 req-a4a94b35-f302-4e06-8b8c-a47d2f421434 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Updated VIF entry in instance network info cache for port bfb7f68c-f1da-410f-b21f-2f029c653727. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:52:49 np0005603435 nova_compute[239938]: 2026-01-31 04:52:49.643 239942 DEBUG nova.network.neutron [req-f645a356-a4a0-44ea-b29a-dfd47b407173 req-a4a94b35-f302-4e06-8b8c-a47d2f421434 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Updating instance_info_cache with network_info: [{"id": "bfb7f68c-f1da-410f-b21f-2f029c653727", "address": "fa:16:3e:30:ec:a1", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfb7f68c-f1", "ovs_interfaceid": "bfb7f68c-f1da-410f-b21f-2f029c653727", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:52:49 np0005603435 nova_compute[239938]: 2026-01-31 04:52:49.677 239942 DEBUG oslo_concurrency.lockutils [req-f645a356-a4a0-44ea-b29a-dfd47b407173 req-a4a94b35-f302-4e06-8b8c-a47d2f421434 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-983eb240-9938-4bbf-aafb-2562f4738906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:52:49 np0005603435 nova_compute[239938]: 2026-01-31 04:52:49.766 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 180 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 723 KiB/s wr, 100 op/s
Jan 30 23:52:51 np0005603435 nova_compute[239938]: 2026-01-31 04:52:51.775 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:52:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 180 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 98 op/s
Jan 30 23:52:53 np0005603435 nova_compute[239938]: 2026-01-31 04:52:53.636 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:54 np0005603435 nova_compute[239938]: 2026-01-31 04:52:54.810 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:55 np0005603435 podman[260308]: 2026-01-31 04:52:55.087972185 +0000 UTC m=+0.053782111 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 30 23:52:55 np0005603435 podman[260309]: 2026-01-31 04:52:55.121003616 +0000 UTC m=+0.083256865 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 30 23:52:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 188 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 803 KiB/s wr, 86 op/s
Jan 30 23:52:55 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:55Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:ec:a1 10.100.0.3
Jan 30 23:52:55 np0005603435 ovn_controller[145670]: 2026-01-31T04:52:55Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:ec:a1 10.100.0.3
Jan 30 23:52:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:55.918 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:52:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:55.919 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:52:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:52:55.920 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:52:56 np0005603435 nova_compute[239938]: 2026-01-31 04:52:56.777 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 211 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Jan 30 23:52:57 np0005603435 nova_compute[239938]: 2026-01-31 04:52:57.653 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:52:58 np0005603435 nova_compute[239938]: 2026-01-31 04:52:58.861 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:52:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 211 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 30 23:52:59 np0005603435 nova_compute[239938]: 2026-01-31 04:52:59.814 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:00 np0005603435 nova_compute[239938]: 2026-01-31 04:53:00.465 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 213 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 30 23:53:01 np0005603435 nova_compute[239938]: 2026-01-31 04:53:01.780 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.034 239942 DEBUG oslo_concurrency.lockutils [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "983eb240-9938-4bbf-aafb-2562f4738906" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.035 239942 DEBUG oslo_concurrency.lockutils [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.059 239942 DEBUG nova.objects.instance [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lazy-loading 'flavor' on Instance uuid 983eb240-9938-4bbf-aafb-2562f4738906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.083 239942 INFO nova.virt.libvirt.driver [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Ignoring supplied device name: /dev/vdb#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.102 239942 DEBUG oslo_concurrency.lockutils [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 213 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.391 239942 DEBUG oslo_concurrency.lockutils [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "983eb240-9938-4bbf-aafb-2562f4738906" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.392 239942 DEBUG oslo_concurrency.lockutils [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.392 239942 INFO nova.compute.manager [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Attaching volume 46420b4f-4b4c-44fa-bf8f-a94c2ef40188 to /dev/vdb#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.540 239942 DEBUG os_brick.utils [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.542 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.552 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.553 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[730c7588-fa0e-4be8-a1d0-68ee4c78e7f3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.554 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.564 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.565 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[8f67c416-2f51-48d4-9cac-21fa28f4c00d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.567 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.573 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.574 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[c1344b56-058f-4797-9e00-3deeecc5a262]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.575 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea7892b-fe6e-47e5-b1f4-57fff8714df3]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.575 239942 DEBUG oslo_concurrency.processutils [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.590 239942 DEBUG oslo_concurrency.processutils [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "nvme version" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.592 239942 DEBUG os_brick.initiator.connectors.lightos [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.593 239942 DEBUG os_brick.initiator.connectors.lightos [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.593 239942 DEBUG os_brick.initiator.connectors.lightos [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.593 239942 DEBUG os_brick.utils [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] <== get_connector_properties: return (52ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:53:03 np0005603435 nova_compute[239938]: 2026-01-31 04:53:03.594 239942 DEBUG nova.virt.block_device [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Updating existing volume attachment record: 70fc291c-fc1f-4d45-a625-9e7dd3716939 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:53:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:53:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1864808516' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:53:04 np0005603435 nova_compute[239938]: 2026-01-31 04:53:04.587 239942 DEBUG nova.objects.instance [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lazy-loading 'flavor' on Instance uuid 983eb240-9938-4bbf-aafb-2562f4738906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:53:04 np0005603435 nova_compute[239938]: 2026-01-31 04:53:04.611 239942 DEBUG nova.virt.libvirt.driver [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Attempting to attach volume 46420b4f-4b4c-44fa-bf8f-a94c2ef40188 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 30 23:53:04 np0005603435 nova_compute[239938]: 2026-01-31 04:53:04.614 239942 DEBUG nova.virt.libvirt.guest [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] attach device xml: <disk type="network" device="disk">
Jan 30 23:53:04 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:53:04 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-46420b4f-4b4c-44fa-bf8f-a94c2ef40188">
Jan 30 23:53:04 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:53:04 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:53:04 np0005603435 nova_compute[239938]:  <auth username="openstack">
Jan 30 23:53:04 np0005603435 nova_compute[239938]:    <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:53:04 np0005603435 nova_compute[239938]:  </auth>
Jan 30 23:53:04 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:53:04 np0005603435 nova_compute[239938]:  <serial>46420b4f-4b4c-44fa-bf8f-a94c2ef40188</serial>
Jan 30 23:53:04 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:53:04 np0005603435 nova_compute[239938]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 30 23:53:04 np0005603435 nova_compute[239938]: 2026-01-31 04:53:04.769 239942 DEBUG nova.virt.libvirt.driver [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:53:04 np0005603435 nova_compute[239938]: 2026-01-31 04:53:04.769 239942 DEBUG nova.virt.libvirt.driver [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:53:04 np0005603435 nova_compute[239938]: 2026-01-31 04:53:04.770 239942 DEBUG nova.virt.libvirt.driver [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:53:04 np0005603435 nova_compute[239938]: 2026-01-31 04:53:04.770 239942 DEBUG nova.virt.libvirt.driver [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] No VIF found with MAC fa:16:3e:30:ec:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:53:04 np0005603435 nova_compute[239938]: 2026-01-31 04:53:04.849 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:04 np0005603435 nova_compute[239938]: 2026-01-31 04:53:04.969 239942 DEBUG oslo_concurrency.lockutils [None req-159fd322-261a-4d10-a93d-3f0a5d631bad d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 213 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 30 23:53:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:53:06
Jan 30 23:53:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:53:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:53:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['vms', 'volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.meta', 'backups', '.rgw.root']
Jan 30 23:53:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:53:06 np0005603435 nova_compute[239938]: 2026-01-31 04:53:06.559 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Jan 30 23:53:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Jan 30 23:53:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Jan 30 23:53:06 np0005603435 nova_compute[239938]: 2026-01-31 04:53:06.782 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:53:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:53:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:53:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:53:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:53:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:53:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 214 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 94 KiB/s wr, 18 op/s
Jan 30 23:53:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Jan 30 23:53:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Jan 30 23:53:07 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Jan 30 23:53:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:53:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:53:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:53:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:53:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:53:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:53:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:53:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:53:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:53:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:53:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:53:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 214 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 52 KiB/s wr, 21 op/s
Jan 30 23:53:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Jan 30 23:53:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Jan 30 23:53:09 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Jan 30 23:53:09 np0005603435 nova_compute[239938]: 2026-01-31 04:53:09.849 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:10 np0005603435 nova_compute[239938]: 2026-01-31 04:53:10.440 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 214 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 50 KiB/s wr, 34 op/s
Jan 30 23:53:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Jan 30 23:53:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Jan 30 23:53:11 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Jan 30 23:53:11 np0005603435 nova_compute[239938]: 2026-01-31 04:53:11.814 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:12 np0005603435 nova_compute[239938]: 2026-01-31 04:53:12.213 239942 DEBUG oslo_concurrency.lockutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "9a44a647-eae8-41f0-b96c-aa172ac4757a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:12 np0005603435 nova_compute[239938]: 2026-01-31 04:53:12.214 239942 DEBUG oslo_concurrency.lockutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "9a44a647-eae8-41f0-b96c-aa172ac4757a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:12 np0005603435 nova_compute[239938]: 2026-01-31 04:53:12.231 239942 DEBUG nova.compute.manager [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:53:12 np0005603435 nova_compute[239938]: 2026-01-31 04:53:12.301 239942 DEBUG oslo_concurrency.lockutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:12 np0005603435 nova_compute[239938]: 2026-01-31 04:53:12.302 239942 DEBUG oslo_concurrency.lockutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:12 np0005603435 nova_compute[239938]: 2026-01-31 04:53:12.312 239942 DEBUG nova.virt.hardware [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:53:12 np0005603435 nova_compute[239938]: 2026-01-31 04:53:12.313 239942 INFO nova.compute.claims [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:53:12 np0005603435 nova_compute[239938]: 2026-01-31 04:53:12.555 239942 DEBUG oslo_concurrency.processutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Jan 30 23:53:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Jan 30 23:53:12 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Jan 30 23:53:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:53:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:53:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/787708343' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.141 239942 DEBUG oslo_concurrency.processutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.147 239942 DEBUG nova.compute.provider_tree [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.171 239942 DEBUG nova.scheduler.client.report [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.209 239942 DEBUG oslo_concurrency.lockutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.209 239942 DEBUG nova.compute.manager [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.294 239942 DEBUG nova.compute.manager [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.295 239942 DEBUG nova.network.neutron [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.320 239942 INFO nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.348 239942 DEBUG nova.compute.manager [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:53:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 214 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 6.3 KiB/s wr, 61 op/s
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.408 239942 INFO nova.virt.block_device [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Booting with volume 2a71f72b-441e-41ec-8f18-b0cd91792390 at /dev/vda#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.539 239942 DEBUG os_brick.utils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.541 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.545 239942 DEBUG nova.policy [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e10f13b98624406985dec6a5dcc391c7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.553 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.553 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc1d291-5219-43b9-be7c-e74f5c835c58]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.554 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.599 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.600 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[75e0a9d0-1fa0-4f04-af4d-76f588d5e9bc]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.601 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.607 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.608 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[cfaca2ba-94c9-419c-8f09-37f10c7cee14]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.608 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[86f32eba-339b-4c0d-8828-b0daf5330b84]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.609 239942 DEBUG oslo_concurrency.processutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.626 239942 DEBUG oslo_concurrency.processutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.628 239942 DEBUG os_brick.initiator.connectors.lightos [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.628 239942 DEBUG os_brick.initiator.connectors.lightos [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.628 239942 DEBUG os_brick.initiator.connectors.lightos [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.629 239942 DEBUG os_brick.utils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] <== get_connector_properties: return (88ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:53:13 np0005603435 nova_compute[239938]: 2026-01-31 04:53:13.631 239942 DEBUG nova.virt.block_device [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Updating existing volume attachment record: 7a078f38-07e8-4c22-a469-2b06d92fcb4b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:53:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:53:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2913640296' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:53:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Jan 30 23:53:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Jan 30 23:53:14 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Jan 30 23:53:14 np0005603435 nova_compute[239938]: 2026-01-31 04:53:14.851 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:14 np0005603435 nova_compute[239938]: 2026-01-31 04:53:14.891 239942 DEBUG nova.compute.manager [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:53:14 np0005603435 nova_compute[239938]: 2026-01-31 04:53:14.894 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:53:14 np0005603435 nova_compute[239938]: 2026-01-31 04:53:14.895 239942 INFO nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Creating image(s)#033[00m
Jan 30 23:53:14 np0005603435 nova_compute[239938]: 2026-01-31 04:53:14.895 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 30 23:53:14 np0005603435 nova_compute[239938]: 2026-01-31 04:53:14.896 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Ensure instance console log exists: /var/lib/nova/instances/9a44a647-eae8-41f0-b96c-aa172ac4757a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:53:14 np0005603435 nova_compute[239938]: 2026-01-31 04:53:14.896 239942 DEBUG oslo_concurrency.lockutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:14 np0005603435 nova_compute[239938]: 2026-01-31 04:53:14.897 239942 DEBUG oslo_concurrency.lockutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:14 np0005603435 nova_compute[239938]: 2026-01-31 04:53:14.897 239942 DEBUG oslo_concurrency.lockutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:15 np0005603435 nova_compute[239938]: 2026-01-31 04:53:15.057 239942 DEBUG nova.network.neutron [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Successfully created port: 556173b1-34bc-4edc-b3fa-7a144df8b331 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:53:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 214 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 8.7 KiB/s wr, 66 op/s
Jan 30 23:53:15 np0005603435 nova_compute[239938]: 2026-01-31 04:53:15.780 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:15 np0005603435 nova_compute[239938]: 2026-01-31 04:53:15.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.076 239942 DEBUG nova.network.neutron [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Successfully updated port: 556173b1-34bc-4edc-b3fa-7a144df8b331 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.088 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:16 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:16.088 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:53:16 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:16.090 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.096 239942 DEBUG oslo_concurrency.lockutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "refresh_cache-9a44a647-eae8-41f0-b96c-aa172ac4757a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.096 239942 DEBUG oslo_concurrency.lockutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquired lock "refresh_cache-9a44a647-eae8-41f0-b96c-aa172ac4757a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.097 239942 DEBUG nova.network.neutron [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.222 239942 DEBUG nova.compute.manager [req-b7af800a-2a18-4617-b4b2-f2c5ea061372 req-d8bc76f7-6385-45ae-b9a1-1aad1ca9a352 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Received event network-changed-556173b1-34bc-4edc-b3fa-7a144df8b331 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.222 239942 DEBUG nova.compute.manager [req-b7af800a-2a18-4617-b4b2-f2c5ea061372 req-d8bc76f7-6385-45ae-b9a1-1aad1ca9a352 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Refreshing instance network info cache due to event network-changed-556173b1-34bc-4edc-b3fa-7a144df8b331. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.223 239942 DEBUG oslo_concurrency.lockutils [req-b7af800a-2a18-4617-b4b2-f2c5ea061372 req-d8bc76f7-6385-45ae-b9a1-1aad1ca9a352 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-9a44a647-eae8-41f0-b96c-aa172ac4757a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.252 239942 DEBUG nova.network.neutron [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.653 239942 DEBUG oslo_concurrency.lockutils [None req-9387efc2-ae80-4dcd-8c0b-a6228986ed26 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "983eb240-9938-4bbf-aafb-2562f4738906" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.653 239942 DEBUG oslo_concurrency.lockutils [None req-9387efc2-ae80-4dcd-8c0b-a6228986ed26 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.670 239942 INFO nova.compute.manager [None req-9387efc2-ae80-4dcd-8c0b-a6228986ed26 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Detaching volume 46420b4f-4b4c-44fa-bf8f-a94c2ef40188#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.812 239942 DEBUG oslo_concurrency.lockutils [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "983eb240-9938-4bbf-aafb-2562f4738906" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.817 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.860 239942 INFO nova.virt.block_device [None req-9387efc2-ae80-4dcd-8c0b-a6228986ed26 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Attempting to driver detach volume 46420b4f-4b4c-44fa-bf8f-a94c2ef40188 from mountpoint /dev/vdb#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.871 239942 DEBUG nova.virt.libvirt.driver [None req-9387efc2-ae80-4dcd-8c0b-a6228986ed26 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Attempting to detach device vdb from instance 983eb240-9938-4bbf-aafb-2562f4738906 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.872 239942 DEBUG nova.virt.libvirt.guest [None req-9387efc2-ae80-4dcd-8c0b-a6228986ed26 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:53:16 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:53:16 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-46420b4f-4b4c-44fa-bf8f-a94c2ef40188">
Jan 30 23:53:16 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:53:16 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:53:16 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:53:16 np0005603435 nova_compute[239938]:  <serial>46420b4f-4b4c-44fa-bf8f-a94c2ef40188</serial>
Jan 30 23:53:16 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:53:16 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:53:16 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.882 239942 INFO nova.virt.libvirt.driver [None req-9387efc2-ae80-4dcd-8c0b-a6228986ed26 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Successfully detached device vdb from instance 983eb240-9938-4bbf-aafb-2562f4738906 from the persistent domain config.#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.882 239942 DEBUG nova.virt.libvirt.driver [None req-9387efc2-ae80-4dcd-8c0b-a6228986ed26 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 983eb240-9938-4bbf-aafb-2562f4738906 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.883 239942 DEBUG nova.virt.libvirt.guest [None req-9387efc2-ae80-4dcd-8c0b-a6228986ed26 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:53:16 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:53:16 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-46420b4f-4b4c-44fa-bf8f-a94c2ef40188">
Jan 30 23:53:16 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:53:16 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:53:16 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:53:16 np0005603435 nova_compute[239938]:  <serial>46420b4f-4b4c-44fa-bf8f-a94c2ef40188</serial>
Jan 30 23:53:16 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:53:16 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:53:16 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.886 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.986 239942 DEBUG nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Received event <DeviceRemovedEvent: 1769835196.9865017, 983eb240-9938-4bbf-aafb-2562f4738906 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.989 239942 DEBUG nova.virt.libvirt.driver [None req-9387efc2-ae80-4dcd-8c0b-a6228986ed26 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 983eb240-9938-4bbf-aafb-2562f4738906 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 30 23:53:16 np0005603435 nova_compute[239938]: 2026-01-31 04:53:16.991 239942 INFO nova.virt.libvirt.driver [None req-9387efc2-ae80-4dcd-8c0b-a6228986ed26 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Successfully detached device vdb from instance 983eb240-9938-4bbf-aafb-2562f4738906 from the live domain config.#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.128 239942 DEBUG nova.network.neutron [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Updating instance_info_cache with network_info: [{"id": "556173b1-34bc-4edc-b3fa-7a144df8b331", "address": "fa:16:3e:ca:48:2f", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap556173b1-34", "ovs_interfaceid": "556173b1-34bc-4edc-b3fa-7a144df8b331", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.145 239942 DEBUG oslo_concurrency.lockutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Releasing lock "refresh_cache-9a44a647-eae8-41f0-b96c-aa172ac4757a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.146 239942 DEBUG nova.compute.manager [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Instance network_info: |[{"id": "556173b1-34bc-4edc-b3fa-7a144df8b331", "address": "fa:16:3e:ca:48:2f", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap556173b1-34", "ovs_interfaceid": "556173b1-34bc-4edc-b3fa-7a144df8b331", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.146 239942 DEBUG oslo_concurrency.lockutils [req-b7af800a-2a18-4617-b4b2-f2c5ea061372 req-d8bc76f7-6385-45ae-b9a1-1aad1ca9a352 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-9a44a647-eae8-41f0-b96c-aa172ac4757a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.147 239942 DEBUG nova.network.neutron [req-b7af800a-2a18-4617-b4b2-f2c5ea061372 req-d8bc76f7-6385-45ae-b9a1-1aad1ca9a352 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Refreshing network info cache for port 556173b1-34bc-4edc-b3fa-7a144df8b331 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.152 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Start _get_guest_xml network_info=[{"id": "556173b1-34bc-4edc-b3fa-7a144df8b331", "address": "fa:16:3e:ca:48:2f", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap556173b1-34", "ovs_interfaceid": "556173b1-34bc-4edc-b3fa-7a144df8b331", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'attachment_id': '7a078f38-07e8-4c22-a469-2b06d92fcb4b', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2a71f72b-441e-41ec-8f18-b0cd91792390', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2a71f72b-441e-41ec-8f18-b0cd91792390', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '9a44a647-eae8-41f0-b96c-aa172ac4757a', 'attached_at': '', 'detached_at': '', 'volume_id': '2a71f72b-441e-41ec-8f18-b0cd91792390', 'serial': '2a71f72b-441e-41ec-8f18-b0cd91792390'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.159 239942 WARNING nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.164 239942 DEBUG nova.virt.libvirt.host [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.164 239942 DEBUG nova.virt.libvirt.host [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.171 239942 DEBUG nova.objects.instance [None req-9387efc2-ae80-4dcd-8c0b-a6228986ed26 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lazy-loading 'flavor' on Instance uuid 983eb240-9938-4bbf-aafb-2562f4738906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.177 239942 DEBUG nova.virt.libvirt.host [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.177 239942 DEBUG nova.virt.libvirt.host [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.178 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.178 239942 DEBUG nova.virt.hardware [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.179 239942 DEBUG nova.virt.hardware [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.179 239942 DEBUG nova.virt.hardware [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.179 239942 DEBUG nova.virt.hardware [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.179 239942 DEBUG nova.virt.hardware [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.180 239942 DEBUG nova.virt.hardware [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.180 239942 DEBUG nova.virt.hardware [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.180 239942 DEBUG nova.virt.hardware [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.180 239942 DEBUG nova.virt.hardware [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.181 239942 DEBUG nova.virt.hardware [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.181 239942 DEBUG nova.virt.hardware [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.207 239942 DEBUG nova.storage.rbd_utils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image 9a44a647-eae8-41f0-b96c-aa172ac4757a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.211 239942 DEBUG oslo_concurrency.processutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007647883653749423 of space, bias 1.0, pg target 0.2294365096124827 quantized to 32 (current 32)
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0007055188448717719 of space, bias 1.0, pg target 0.21165565346153156 quantized to 32 (current 32)
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 5.929181451108793e-07 of space, bias 1.0, pg target 0.0001778754435332638 quantized to 32 (current 32)
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665550048613507 of space, bias 1.0, pg target 0.1999665014584052 quantized to 32 (current 32)
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.903535167616839e-07 of space, bias 4.0, pg target 0.0008284242201140207 quantized to 16 (current 16)
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.249 239942 DEBUG oslo_concurrency.lockutils [None req-9387efc2-ae80-4dcd-8c0b-a6228986ed26 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.255 239942 DEBUG oslo_concurrency.lockutils [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.256 239942 DEBUG oslo_concurrency.lockutils [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "983eb240-9938-4bbf-aafb-2562f4738906-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.257 239942 DEBUG oslo_concurrency.lockutils [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.257 239942 DEBUG oslo_concurrency.lockutils [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.260 239942 INFO nova.compute.manager [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Terminating instance#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.262 239942 DEBUG nova.compute.manager [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:53:17 np0005603435 kernel: tapbfb7f68c-f1 (unregistering): left promiscuous mode
Jan 30 23:53:17 np0005603435 NetworkManager[49097]: <info>  [1769835197.3156] device (tapbfb7f68c-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.320 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:17 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:17Z|00129|binding|INFO|Releasing lport bfb7f68c-f1da-410f-b21f-2f029c653727 from this chassis (sb_readonly=0)
Jan 30 23:53:17 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:17Z|00130|binding|INFO|Setting lport bfb7f68c-f1da-410f-b21f-2f029c653727 down in Southbound
Jan 30 23:53:17 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:17Z|00131|binding|INFO|Removing iface tapbfb7f68c-f1 ovn-installed in OVS
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.336 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:17.337 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ec:a1 10.100.0.3'], port_security=['fa:16:3e:30:ec:a1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '983eb240-9938-4bbf-aafb-2562f4738906', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45b5ded5-5fe4-488c-aa97-cad6ca9b361e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5ce1f57546045d891de80fbaff2512b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c96dd04a-6a2a-42a9-8341-daa2f64b40ac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa479721-2329-4784-af95-25b103421212, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=bfb7f68c-f1da-410f-b21f-2f029c653727) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:53:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:17.339 156017 INFO neutron.agent.ovn.metadata.agent [-] Port bfb7f68c-f1da-410f-b21f-2f029c653727 in datapath 45b5ded5-5fe4-488c-aa97-cad6ca9b361e unbound from our chassis#033[00m
Jan 30 23:53:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:17.343 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45b5ded5-5fe4-488c-aa97-cad6ca9b361e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:53:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:17.345 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[26d2d684-0c38-4a3b-a574-dc1ea5a43ac6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:17.346 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e namespace which is not needed anymore#033[00m
Jan 30 23:53:17 np0005603435 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Jan 30 23:53:17 np0005603435 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 13.331s CPU time.
Jan 30 23:53:17 np0005603435 systemd-machined[208030]: Machine qemu-12-instance-0000000c terminated.
Jan 30 23:53:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 214 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 6.7 KiB/s wr, 93 op/s
Jan 30 23:53:17 np0005603435 neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e[260250]: [NOTICE]   (260254) : haproxy version is 2.8.14-c23fe91
Jan 30 23:53:17 np0005603435 neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e[260250]: [NOTICE]   (260254) : path to executable is /usr/sbin/haproxy
Jan 30 23:53:17 np0005603435 neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e[260250]: [WARNING]  (260254) : Exiting Master process...
Jan 30 23:53:17 np0005603435 neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e[260250]: [WARNING]  (260254) : Exiting Master process...
Jan 30 23:53:17 np0005603435 neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e[260250]: [ALERT]    (260254) : Current worker (260256) exited with code 143 (Terminated)
Jan 30 23:53:17 np0005603435 neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e[260250]: [WARNING]  (260254) : All workers exited. Exiting... (0)
Jan 30 23:53:17 np0005603435 systemd[1]: libpod-49de8e579bb5bbe69319b4b737d5305416ab710559b6dcb4905eb7c7ba17e0c9.scope: Deactivated successfully.
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.502 239942 INFO nova.virt.libvirt.driver [-] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Instance destroyed successfully.#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.503 239942 DEBUG nova.objects.instance [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lazy-loading 'resources' on Instance uuid 983eb240-9938-4bbf-aafb-2562f4738906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:53:17 np0005603435 podman[260471]: 2026-01-31 04:53:17.505020835 +0000 UTC m=+0.060351562 container died 49de8e579bb5bbe69319b4b737d5305416ab710559b6dcb4905eb7c7ba17e0c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.522 239942 DEBUG nova.virt.libvirt.vif [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:52:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-645802615',display_name='tempest-VolumesSnapshotTestJSON-instance-645802615',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-645802615',id=12,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCTPMmmOrA1f2jVTyA3tGLGMjAySp4aZ+VdZ8fjet3RqKBb0/kjoG0doqPFnesR+EfLEOfN+cvcwJJGpcSru7QxHSjki1L2h/tvVtt9benX3uAbjfsIDU2hfLwoHUyWsJg==',key_name='tempest-keypair-1786148444',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:52:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f5ce1f57546045d891de80fbaff2512b',ramdisk_id='',reservation_id='r-pydrtovt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-541584434',owner_user_name='tempest-VolumesSnapshotTestJSON-541584434-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:52:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3612e26aca645d895f083e0d58dfd69',uuid=983eb240-9938-4bbf-aafb-2562f4738906,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bfb7f68c-f1da-410f-b21f-2f029c653727", "address": "fa:16:3e:30:ec:a1", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfb7f68c-f1", "ovs_interfaceid": "bfb7f68c-f1da-410f-b21f-2f029c653727", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.523 239942 DEBUG nova.network.os_vif_util [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Converting VIF {"id": "bfb7f68c-f1da-410f-b21f-2f029c653727", "address": "fa:16:3e:30:ec:a1", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfb7f68c-f1", "ovs_interfaceid": "bfb7f68c-f1da-410f-b21f-2f029c653727", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.524 239942 DEBUG nova.network.os_vif_util [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:ec:a1,bridge_name='br-int',has_traffic_filtering=True,id=bfb7f68c-f1da-410f-b21f-2f029c653727,network=Network(45b5ded5-5fe4-488c-aa97-cad6ca9b361e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfb7f68c-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.524 239942 DEBUG os_vif [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:ec:a1,bridge_name='br-int',has_traffic_filtering=True,id=bfb7f68c-f1da-410f-b21f-2f029c653727,network=Network(45b5ded5-5fe4-488c-aa97-cad6ca9b361e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfb7f68c-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.529 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.529 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfb7f68c-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.531 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.532 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:17 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-49de8e579bb5bbe69319b4b737d5305416ab710559b6dcb4905eb7c7ba17e0c9-userdata-shm.mount: Deactivated successfully.
Jan 30 23:53:17 np0005603435 systemd[1]: var-lib-containers-storage-overlay-a369827878f3a4c05648bb1ffb50d828f611bb67575da9a7a16ac00d3b0e0966-merged.mount: Deactivated successfully.
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.541 239942 INFO os_vif [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:ec:a1,bridge_name='br-int',has_traffic_filtering=True,id=bfb7f68c-f1da-410f-b21f-2f029c653727,network=Network(45b5ded5-5fe4-488c-aa97-cad6ca9b361e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfb7f68c-f1')#033[00m
Jan 30 23:53:17 np0005603435 podman[260471]: 2026-01-31 04:53:17.548750949 +0000 UTC m=+0.104081666 container cleanup 49de8e579bb5bbe69319b4b737d5305416ab710559b6dcb4905eb7c7ba17e0c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 30 23:53:17 np0005603435 systemd[1]: libpod-conmon-49de8e579bb5bbe69319b4b737d5305416ab710559b6dcb4905eb7c7ba17e0c9.scope: Deactivated successfully.
Jan 30 23:53:17 np0005603435 podman[260519]: 2026-01-31 04:53:17.61642169 +0000 UTC m=+0.044839501 container remove 49de8e579bb5bbe69319b4b737d5305416ab710559b6dcb4905eb7c7ba17e0c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:53:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:17.622 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba33e9e-47fc-42af-bda2-301fcb9dabda]: (4, ('Sat Jan 31 04:53:17 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e (49de8e579bb5bbe69319b4b737d5305416ab710559b6dcb4905eb7c7ba17e0c9)\n49de8e579bb5bbe69319b4b737d5305416ab710559b6dcb4905eb7c7ba17e0c9\nSat Jan 31 04:53:17 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e (49de8e579bb5bbe69319b4b737d5305416ab710559b6dcb4905eb7c7ba17e0c9)\n49de8e579bb5bbe69319b4b737d5305416ab710559b6dcb4905eb7c7ba17e0c9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:17.624 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c58c6613-70da-4f74-9bc8-0c09fd60752c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:17.625 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45b5ded5-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:17 np0005603435 kernel: tap45b5ded5-50: left promiscuous mode
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.627 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:17.639 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c7dc5b56-4825-44a0-a16f-5171c01bdbea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.641 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:17.656 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[be4de3e6-9fec-4f68-8c07-81f24d700178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:17.657 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c95f5af1-fc66-44d9-be87-201669d68328]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:17.669 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[e40e8a98-02e1-4dc5-b781-75dfb153acde]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421755, 'reachable_time': 28465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260547, 'error': None, 'target': 'ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:17.671 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:53:17 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:17.671 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6fc31f-e78a-4220-9ebc-5c68446777a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:17 np0005603435 systemd[1]: run-netns-ovnmeta\x2d45b5ded5\x2d5fe4\x2d488c\x2daa97\x2dcad6ca9b361e.mount: Deactivated successfully.
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.847 239942 INFO nova.virt.libvirt.driver [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Deleting instance files /var/lib/nova/instances/983eb240-9938-4bbf-aafb-2562f4738906_del#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.848 239942 INFO nova.virt.libvirt.driver [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Deletion of /var/lib/nova/instances/983eb240-9938-4bbf-aafb-2562f4738906_del complete#033[00m
Jan 30 23:53:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:53:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/543251393' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.881 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.890 239942 DEBUG oslo_concurrency.processutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.679s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.916 239942 INFO nova.compute.manager [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.917 239942 DEBUG oslo.service.loopingcall [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.918 239942 DEBUG nova.compute.manager [-] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:53:17 np0005603435 nova_compute[239938]: 2026-01-31 04:53:17.918 239942 DEBUG nova.network.neutron [-] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:53:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:53:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Jan 30 23:53:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Jan 30 23:53:17 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.140 239942 DEBUG os_brick.encryptors [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Using volume encryption metadata '{'encryption_key_id': '9e580e9c-b107-493d-9ca9-da4f341ff26e', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2a71f72b-441e-41ec-8f18-b0cd91792390', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2a71f72b-441e-41ec-8f18-b0cd91792390', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '9a44a647-eae8-41f0-b96c-aa172ac4757a', 'attached_at': '', 'detached_at': '', 'volume_id': '2a71f72b-441e-41ec-8f18-b0cd91792390', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.143 239942 DEBUG barbicanclient.client [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.160 239942 DEBUG barbicanclient.v1.secrets [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/9e580e9c-b107-493d-9ca9-da4f341ff26e get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.161 239942 INFO barbicanclient.base [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Calculated Secrets uuid ref: secrets/9e580e9c-b107-493d-9ca9-da4f341ff26e#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.182 239942 DEBUG barbicanclient.client [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.182 239942 INFO barbicanclient.base [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Calculated Secrets uuid ref: secrets/9e580e9c-b107-493d-9ca9-da4f341ff26e#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.206 239942 DEBUG barbicanclient.client [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.207 239942 INFO barbicanclient.base [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Calculated Secrets uuid ref: secrets/9e580e9c-b107-493d-9ca9-da4f341ff26e#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.230 239942 DEBUG barbicanclient.client [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.231 239942 INFO barbicanclient.base [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Calculated Secrets uuid ref: secrets/9e580e9c-b107-493d-9ca9-da4f341ff26e#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.258 239942 DEBUG barbicanclient.client [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.259 239942 INFO barbicanclient.base [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Calculated Secrets uuid ref: secrets/9e580e9c-b107-493d-9ca9-da4f341ff26e#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.289 239942 DEBUG barbicanclient.client [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.290 239942 INFO barbicanclient.base [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Calculated Secrets uuid ref: secrets/9e580e9c-b107-493d-9ca9-da4f341ff26e#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.308 239942 DEBUG nova.compute.manager [req-79f66eee-ee1e-4546-a2a9-ba6dbd29a6cc req-1de0b5a1-7621-4109-ac7c-9ed32c0b72f0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Received event network-vif-unplugged-bfb7f68c-f1da-410f-b21f-2f029c653727 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.309 239942 DEBUG oslo_concurrency.lockutils [req-79f66eee-ee1e-4546-a2a9-ba6dbd29a6cc req-1de0b5a1-7621-4109-ac7c-9ed32c0b72f0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "983eb240-9938-4bbf-aafb-2562f4738906-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.310 239942 DEBUG oslo_concurrency.lockutils [req-79f66eee-ee1e-4546-a2a9-ba6dbd29a6cc req-1de0b5a1-7621-4109-ac7c-9ed32c0b72f0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.310 239942 DEBUG oslo_concurrency.lockutils [req-79f66eee-ee1e-4546-a2a9-ba6dbd29a6cc req-1de0b5a1-7621-4109-ac7c-9ed32c0b72f0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.311 239942 DEBUG nova.compute.manager [req-79f66eee-ee1e-4546-a2a9-ba6dbd29a6cc req-1de0b5a1-7621-4109-ac7c-9ed32c0b72f0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] No waiting events found dispatching network-vif-unplugged-bfb7f68c-f1da-410f-b21f-2f029c653727 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.311 239942 DEBUG nova.compute.manager [req-79f66eee-ee1e-4546-a2a9-ba6dbd29a6cc req-1de0b5a1-7621-4109-ac7c-9ed32c0b72f0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Received event network-vif-unplugged-bfb7f68c-f1da-410f-b21f-2f029c653727 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.320 239942 DEBUG barbicanclient.client [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.321 239942 INFO barbicanclient.base [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Calculated Secrets uuid ref: secrets/9e580e9c-b107-493d-9ca9-da4f341ff26e#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.360 239942 DEBUG barbicanclient.client [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.361 239942 INFO barbicanclient.base [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Calculated Secrets uuid ref: secrets/9e580e9c-b107-493d-9ca9-da4f341ff26e#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.386 239942 DEBUG barbicanclient.client [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.387 239942 INFO barbicanclient.base [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Calculated Secrets uuid ref: secrets/9e580e9c-b107-493d-9ca9-da4f341ff26e#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.407 239942 DEBUG barbicanclient.client [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.407 239942 INFO barbicanclient.base [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Calculated Secrets uuid ref: secrets/9e580e9c-b107-493d-9ca9-da4f341ff26e#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.458 239942 DEBUG barbicanclient.client [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.458 239942 INFO barbicanclient.base [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Calculated Secrets uuid ref: secrets/9e580e9c-b107-493d-9ca9-da4f341ff26e#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.483 239942 DEBUG barbicanclient.client [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.483 239942 INFO barbicanclient.base [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Calculated Secrets uuid ref: secrets/9e580e9c-b107-493d-9ca9-da4f341ff26e#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.511 239942 DEBUG barbicanclient.client [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.512 239942 INFO barbicanclient.base [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Calculated Secrets uuid ref: secrets/9e580e9c-b107-493d-9ca9-da4f341ff26e#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.533 239942 DEBUG barbicanclient.client [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.534 239942 INFO barbicanclient.base [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Calculated Secrets uuid ref: secrets/9e580e9c-b107-493d-9ca9-da4f341ff26e#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.567 239942 DEBUG barbicanclient.client [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.567 239942 INFO barbicanclient.base [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Calculated Secrets uuid ref: secrets/9e580e9c-b107-493d-9ca9-da4f341ff26e#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.584 239942 DEBUG barbicanclient.client [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.585 239942 DEBUG nova.virt.libvirt.host [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  <usage type="volume">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <volume>2a71f72b-441e-41ec-8f18-b0cd91792390</volume>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  </usage>
Jan 30 23:53:18 np0005603435 nova_compute[239938]: </secret>
Jan 30 23:53:18 np0005603435 nova_compute[239938]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.623 239942 DEBUG nova.virt.libvirt.vif [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:53:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-431352823',display_name='tempest-TestVolumeBootPattern-server-431352823',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-431352823',id=13,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-o9i7wp7b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:53:13Z,user_data=None,user_id='
e10f13b98624406985dec6a5dcc391c7',uuid=9a44a647-eae8-41f0-b96c-aa172ac4757a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "556173b1-34bc-4edc-b3fa-7a144df8b331", "address": "fa:16:3e:ca:48:2f", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap556173b1-34", "ovs_interfaceid": "556173b1-34bc-4edc-b3fa-7a144df8b331", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.623 239942 DEBUG nova.network.os_vif_util [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "556173b1-34bc-4edc-b3fa-7a144df8b331", "address": "fa:16:3e:ca:48:2f", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap556173b1-34", "ovs_interfaceid": "556173b1-34bc-4edc-b3fa-7a144df8b331", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.624 239942 DEBUG nova.network.os_vif_util [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:48:2f,bridge_name='br-int',has_traffic_filtering=True,id=556173b1-34bc-4edc-b3fa-7a144df8b331,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap556173b1-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.627 239942 DEBUG nova.objects.instance [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a44a647-eae8-41f0-b96c-aa172ac4757a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.646 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  <uuid>9a44a647-eae8-41f0-b96c-aa172ac4757a</uuid>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  <name>instance-0000000d</name>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <nova:name>tempest-TestVolumeBootPattern-server-431352823</nova:name>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:53:17</nova:creationTime>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:        <nova:user uuid="e10f13b98624406985dec6a5dcc391c7">tempest-TestVolumeBootPattern-1782423025-project-member</nova:user>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:        <nova:project uuid="e332802dd6cf49c59f8ed38e70addb0e">tempest-TestVolumeBootPattern-1782423025</nova:project>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:        <nova:port uuid="556173b1-34bc-4edc-b3fa-7a144df8b331">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <entry name="serial">9a44a647-eae8-41f0-b96c-aa172ac4757a</entry>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <entry name="uuid">9a44a647-eae8-41f0-b96c-aa172ac4757a</entry>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/9a44a647-eae8-41f0-b96c-aa172ac4757a_disk.config">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="volumes/volume-2a71f72b-441e-41ec-8f18-b0cd91792390">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <serial>2a71f72b-441e-41ec-8f18-b0cd91792390</serial>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <encryption format="luks">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:        <secret type="passphrase" uuid="aac7ed77-0dc7-4315-9818-cb5668bc2ba7"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      </encryption>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:ca:48:2f"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <target dev="tap556173b1-34"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/9a44a647-eae8-41f0-b96c-aa172ac4757a/console.log" append="off"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:53:18 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:53:18 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:53:18 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:53:18 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.648 239942 DEBUG nova.compute.manager [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Preparing to wait for external event network-vif-plugged-556173b1-34bc-4edc-b3fa-7a144df8b331 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.649 239942 DEBUG oslo_concurrency.lockutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.649 239942 DEBUG oslo_concurrency.lockutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.649 239942 DEBUG oslo_concurrency.lockutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.650 239942 DEBUG nova.virt.libvirt.vif [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:53:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-431352823',display_name='tempest-TestVolumeBootPattern-server-431352823',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-431352823',id=13,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-o9i7wp7b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:53:13Z,user_data=None,user_id='e10f13b98624406985dec6a5dcc391c7',uuid=9a44a647-eae8-41f0-b96c-aa172ac4757a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "556173b1-34bc-4edc-b3fa-7a144df8b331", "address": "fa:16:3e:ca:48:2f", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap556173b1-34", "ovs_interfaceid": "556173b1-34bc-4edc-b3fa-7a144df8b331", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.651 239942 DEBUG nova.network.os_vif_util [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "556173b1-34bc-4edc-b3fa-7a144df8b331", "address": "fa:16:3e:ca:48:2f", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap556173b1-34", "ovs_interfaceid": "556173b1-34bc-4edc-b3fa-7a144df8b331", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.652 239942 DEBUG nova.network.os_vif_util [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:48:2f,bridge_name='br-int',has_traffic_filtering=True,id=556173b1-34bc-4edc-b3fa-7a144df8b331,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap556173b1-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.653 239942 DEBUG os_vif [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:48:2f,bridge_name='br-int',has_traffic_filtering=True,id=556173b1-34bc-4edc-b3fa-7a144df8b331,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap556173b1-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.654 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.654 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.655 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.658 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.658 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap556173b1-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.659 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap556173b1-34, col_values=(('external_ids', {'iface-id': '556173b1-34bc-4edc-b3fa-7a144df8b331', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ca:48:2f', 'vm-uuid': '9a44a647-eae8-41f0-b96c-aa172ac4757a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.701 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:18 np0005603435 NetworkManager[49097]: <info>  [1769835198.7035] manager: (tap556173b1-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.704 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.708 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.708 239942 INFO os_vif [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:48:2f,bridge_name='br-int',has_traffic_filtering=True,id=556173b1-34bc-4edc-b3fa-7a144df8b331,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap556173b1-34')#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.774 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.774 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.775 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No VIF found with MAC fa:16:3e:ca:48:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.775 239942 INFO nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Using config drive#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.801 239942 DEBUG nova.storage.rbd_utils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image 9a44a647-eae8-41f0-b96c-aa172ac4757a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:53:18 np0005603435 nova_compute[239938]: 2026-01-31 04:53:18.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.118 239942 DEBUG nova.network.neutron [-] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.142 239942 INFO nova.compute.manager [-] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Took 1.22 seconds to deallocate network for instance.#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.235 239942 INFO nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Creating config drive at /var/lib/nova/instances/9a44a647-eae8-41f0-b96c-aa172ac4757a/disk.config#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.242 239942 DEBUG oslo_concurrency.processutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a44a647-eae8-41f0-b96c-aa172ac4757a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpg2dkwezw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.284 239942 DEBUG nova.network.neutron [req-b7af800a-2a18-4617-b4b2-f2c5ea061372 req-d8bc76f7-6385-45ae-b9a1-1aad1ca9a352 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Updated VIF entry in instance network info cache for port 556173b1-34bc-4edc-b3fa-7a144df8b331. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.284 239942 DEBUG nova.network.neutron [req-b7af800a-2a18-4617-b4b2-f2c5ea061372 req-d8bc76f7-6385-45ae-b9a1-1aad1ca9a352 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Updating instance_info_cache with network_info: [{"id": "556173b1-34bc-4edc-b3fa-7a144df8b331", "address": "fa:16:3e:ca:48:2f", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap556173b1-34", "ovs_interfaceid": "556173b1-34bc-4edc-b3fa-7a144df8b331", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.287 239942 WARNING nova.volume.cinder [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Attachment 70fc291c-fc1f-4d45-a625-9e7dd3716939 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = 70fc291c-fc1f-4d45-a625-9e7dd3716939. (HTTP 404) (Request-ID: req-ad387384-e726-4f96-a63c-89819caaae22)#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.287 239942 INFO nova.compute.manager [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Took 0.14 seconds to detach 1 volumes for instance.#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.300 239942 DEBUG oslo_concurrency.lockutils [req-b7af800a-2a18-4617-b4b2-f2c5ea061372 req-d8bc76f7-6385-45ae-b9a1-1aad1ca9a352 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-9a44a647-eae8-41f0-b96c-aa172ac4757a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.342 239942 DEBUG oslo_concurrency.lockutils [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.343 239942 DEBUG oslo_concurrency.lockutils [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.370 239942 DEBUG oslo_concurrency.processutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a44a647-eae8-41f0-b96c-aa172ac4757a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpg2dkwezw" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 214 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.8 KiB/s wr, 36 op/s
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.405 239942 DEBUG nova.storage.rbd_utils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image 9a44a647-eae8-41f0-b96c-aa172ac4757a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.410 239942 DEBUG oslo_concurrency.processutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a44a647-eae8-41f0-b96c-aa172ac4757a/disk.config 9a44a647-eae8-41f0-b96c-aa172ac4757a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.491 239942 DEBUG oslo_concurrency.processutils [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.543 239942 DEBUG oslo_concurrency.processutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a44a647-eae8-41f0-b96c-aa172ac4757a/disk.config 9a44a647-eae8-41f0-b96c-aa172ac4757a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.544 239942 INFO nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Deleting local config drive /var/lib/nova/instances/9a44a647-eae8-41f0-b96c-aa172ac4757a/disk.config because it was imported into RBD.#033[00m
Jan 30 23:53:19 np0005603435 kernel: tap556173b1-34: entered promiscuous mode
Jan 30 23:53:19 np0005603435 NetworkManager[49097]: <info>  [1769835199.5825] manager: (tap556173b1-34): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Jan 30 23:53:19 np0005603435 systemd-udevd[260444]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.585 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:19 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:19Z|00132|binding|INFO|Claiming lport 556173b1-34bc-4edc-b3fa-7a144df8b331 for this chassis.
Jan 30 23:53:19 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:19Z|00133|binding|INFO|556173b1-34bc-4edc-b3fa-7a144df8b331: Claiming fa:16:3e:ca:48:2f 10.100.0.9
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.593 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:48:2f 10.100.0.9'], port_security=['fa:16:3e:ca:48:2f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9a44a647-eae8-41f0-b96c-aa172ac4757a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e4b6ff09-e0ac-4b5c-a1ae-e4cd0ac951bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02525b3d-5afa-441f-ab06-d5abe31dc4af, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=556173b1-34bc-4edc-b3fa-7a144df8b331) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.594 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.594 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 556173b1-34bc-4edc-b3fa-7a144df8b331 in datapath 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 bound to our chassis#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.597 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3#033[00m
Jan 30 23:53:19 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:19Z|00134|binding|INFO|Setting lport 556173b1-34bc-4edc-b3fa-7a144df8b331 ovn-installed in OVS
Jan 30 23:53:19 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:19Z|00135|binding|INFO|Setting lport 556173b1-34bc-4edc-b3fa-7a144df8b331 up in Southbound
Jan 30 23:53:19 np0005603435 NetworkManager[49097]: <info>  [1769835199.6012] device (tap556173b1-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:53:19 np0005603435 NetworkManager[49097]: <info>  [1769835199.6024] device (tap556173b1-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.603 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.604 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b4d1e4bc-3c11-4d0e-875b-9421bf5d06c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.608 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5b0cf2db-21 in ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.609 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.610 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5b0cf2db-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.610 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[af322ed7-b4c2-4dcf-962e-58b07ab69463]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.612 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d9642a-d9e0-4277-abd5-a3ed69ebf98f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.622 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[97130a1d-bb1d-48cc-af7a-37b73d1fbea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:19 np0005603435 systemd-machined[208030]: New machine qemu-13-instance-0000000d.
Jan 30 23:53:19 np0005603435 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.644 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[84a3e9dc-2b96-403c-836f-c144d98f0810]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.684 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e9c4d8-0842-4923-b357-f4ff8e49e63f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:19 np0005603435 NetworkManager[49097]: <info>  [1769835199.6981] manager: (tap5b0cf2db-20): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.698 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[87d80a69-108d-4d33-82ae-3ae8ce3e3a21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.731 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[1acd0151-f2b4-45a7-b7fe-957a43521d5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.736 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[a27743b7-cf79-4a42-b9b2-7771a55fecf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:19 np0005603435 NetworkManager[49097]: <info>  [1769835199.7575] device (tap5b0cf2db-20): carrier: link connected
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.762 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd881e1-e817-492e-be18-8e28eb7c3867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.777 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[3f4afa71-dbe1-4590-a2dc-432d8bf54de0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b0cf2db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:f7:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425415, 'reachable_time': 15000, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260674, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.794 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9b342b5a-688a-4de0-8cc7-cc89a1718339]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:f719'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 425415, 'tstamp': 425415}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260675, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.811 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c71cf79f-5ed8-473c-a949-26a0a34c0fde]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b0cf2db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:f7:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425415, 'reachable_time': 15000, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260676, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.841 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b3734bf7-0f9c-45d6-9805-bb6d50e70882]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.853 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.888 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[0a66846b-ffd7-4baa-afe1-3d1213d43d9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.891 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b0cf2db-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.892 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.893 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b0cf2db-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:19 np0005603435 kernel: tap5b0cf2db-20: entered promiscuous mode
Jan 30 23:53:19 np0005603435 NetworkManager[49097]: <info>  [1769835199.8954] manager: (tap5b0cf2db-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.897 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.899 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b0cf2db-20, col_values=(('external_ids', {'iface-id': '07e657c3-16d2-4095-9f39-32a275cb472e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:19 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:19Z|00136|binding|INFO|Releasing lport 07e657c3-16d2-4095-9f39-32a275cb472e from this chassis (sb_readonly=0)
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.900 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.901 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.902 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[007f4417-c395-44ef-8712-33b055c215bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.903 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.pid.haproxy
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:53:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:19.904 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'env', 'PROCESS_TAG=haproxy-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:53:19 np0005603435 nova_compute[239938]: 2026-01-31 04:53:19.906 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:53:20 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4062965549' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.063 239942 DEBUG oslo_concurrency.processutils [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.069 239942 DEBUG nova.compute.provider_tree [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.092 239942 DEBUG nova.scheduler.client.report [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.136 239942 DEBUG oslo_concurrency.lockutils [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.164 239942 INFO nova.scheduler.client.report [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Deleted allocations for instance 983eb240-9938-4bbf-aafb-2562f4738906#033[00m
Jan 30 23:53:20 np0005603435 podman[260746]: 2026-01-31 04:53:20.239701456 +0000 UTC m=+0.065059989 container create ce0968c6a2f4e543e2c457e13d9adda5e180e839ac8a0b9dfd73a7e466c9ae00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.274 239942 DEBUG oslo_concurrency.lockutils [None req-6eb73ff4-b693-41ba-bd90-f9d654e0b446 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:20 np0005603435 systemd[1]: Started libpod-conmon-ce0968c6a2f4e543e2c457e13d9adda5e180e839ac8a0b9dfd73a7e466c9ae00.scope.
Jan 30 23:53:20 np0005603435 podman[260746]: 2026-01-31 04:53:20.212822926 +0000 UTC m=+0.038181519 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:53:20 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:53:20 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/066ed3d2617fe023ba70116df83616de45f2c0f1ee2817a1b49a0004c4ae4bb0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:20 np0005603435 podman[260746]: 2026-01-31 04:53:20.34292054 +0000 UTC m=+0.168279123 container init ce0968c6a2f4e543e2c457e13d9adda5e180e839ac8a0b9dfd73a7e466c9ae00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:53:20 np0005603435 podman[260746]: 2026-01-31 04:53:20.349942232 +0000 UTC m=+0.175300775 container start ce0968c6a2f4e543e2c457e13d9adda5e180e839ac8a0b9dfd73a7e466c9ae00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:53:20 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[260761]: [NOTICE]   (260765) : New worker (260767) forked
Jan 30 23:53:20 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[260761]: [NOTICE]   (260765) : Loading success.
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.405 239942 DEBUG nova.compute.manager [req-56cc8962-5eac-42ad-8dcc-8cf9323c3793 req-807d03ed-2612-4775-80f7-8090d0d9fc7b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Received event network-vif-plugged-bfb7f68c-f1da-410f-b21f-2f029c653727 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.406 239942 DEBUG oslo_concurrency.lockutils [req-56cc8962-5eac-42ad-8dcc-8cf9323c3793 req-807d03ed-2612-4775-80f7-8090d0d9fc7b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "983eb240-9938-4bbf-aafb-2562f4738906-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.406 239942 DEBUG oslo_concurrency.lockutils [req-56cc8962-5eac-42ad-8dcc-8cf9323c3793 req-807d03ed-2612-4775-80f7-8090d0d9fc7b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.406 239942 DEBUG oslo_concurrency.lockutils [req-56cc8962-5eac-42ad-8dcc-8cf9323c3793 req-807d03ed-2612-4775-80f7-8090d0d9fc7b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "983eb240-9938-4bbf-aafb-2562f4738906-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.406 239942 DEBUG nova.compute.manager [req-56cc8962-5eac-42ad-8dcc-8cf9323c3793 req-807d03ed-2612-4775-80f7-8090d0d9fc7b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] No waiting events found dispatching network-vif-plugged-bfb7f68c-f1da-410f-b21f-2f029c653727 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.407 239942 WARNING nova.compute.manager [req-56cc8962-5eac-42ad-8dcc-8cf9323c3793 req-807d03ed-2612-4775-80f7-8090d0d9fc7b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Received unexpected event network-vif-plugged-bfb7f68c-f1da-410f-b21f-2f029c653727 for instance with vm_state deleted and task_state None.
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.407 239942 DEBUG nova.compute.manager [req-56cc8962-5eac-42ad-8dcc-8cf9323c3793 req-807d03ed-2612-4775-80f7-8090d0d9fc7b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Received event network-vif-deleted-bfb7f68c-f1da-410f-b21f-2f029c653727 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.518 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.889 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.915 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 30 23:53:20 np0005603435 nova_compute[239938]: 2026-01-31 04:53:20.915 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 30 23:53:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 184 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 21 KiB/s wr, 33 op/s
Jan 30 23:53:21 np0005603435 nova_compute[239938]: 2026-01-31 04:53:21.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:53:21 np0005603435 nova_compute[239938]: 2026-01-31 04:53:21.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:53:21 np0005603435 nova_compute[239938]: 2026-01-31 04:53:21.918 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:53:21 np0005603435 nova_compute[239938]: 2026-01-31 04:53:21.918 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:53:21 np0005603435 nova_compute[239938]: 2026-01-31 04:53:21.919 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:53:21 np0005603435 nova_compute[239938]: 2026-01-31 04:53:21.919 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 30 23:53:21 np0005603435 nova_compute[239938]: 2026-01-31 04:53:21.919 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.390 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835202.3901062, 9a44a647-eae8-41f0-b96c-aa172ac4757a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.391 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] VM Started (Lifecycle Event)
Jan 30 23:53:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:53:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3661684735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.417 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.422 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835202.3904173, 9a44a647-eae8-41f0-b96c-aa172ac4757a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.422 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] VM Paused (Lifecycle Event)
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.430 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.444 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.448 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.479 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.501 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.501 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.614 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.615 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4434MB free_disk=59.960045292042196GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.615 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.616 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.688 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance 9a44a647-eae8-41f0-b96c-aa172ac4757a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.689 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.689 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 30 23:53:22 np0005603435 nova_compute[239938]: 2026-01-31 04:53:22.738 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:53:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:53:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:23.093 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 30 23:53:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:53:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2455406016' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:53:23 np0005603435 nova_compute[239938]: 2026-01-31 04:53:23.278 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:53:23 np0005603435 nova_compute[239938]: 2026-01-31 04:53:23.283 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 30 23:53:23 np0005603435 nova_compute[239938]: 2026-01-31 04:53:23.297 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 30 23:53:23 np0005603435 nova_compute[239938]: 2026-01-31 04:53:23.320 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 30 23:53:23 np0005603435 nova_compute[239938]: 2026-01-31 04:53:23.321 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:53:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 134 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 22 KiB/s wr, 81 op/s
Jan 30 23:53:23 np0005603435 nova_compute[239938]: 2026-01-31 04:53:23.703 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:53:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:53:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/982712728' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:53:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:53:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/982712728' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:53:24 np0005603435 nova_compute[239938]: 2026-01-31 04:53:24.855 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:53:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Jan 30 23:53:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Jan 30 23:53:25 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.321 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 30 23:53:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 130 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 21 KiB/s wr, 61 op/s
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.890 239942 DEBUG nova.compute.manager [req-e3ea10c1-d2e4-477a-89ec-67f8d499315c req-9b53f985-a7e4-4b75-bc21-c3fc97338107 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Received event network-vif-plugged-556173b1-34bc-4edc-b3fa-7a144df8b331 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.890 239942 DEBUG oslo_concurrency.lockutils [req-e3ea10c1-d2e4-477a-89ec-67f8d499315c req-9b53f985-a7e4-4b75-bc21-c3fc97338107 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.891 239942 DEBUG oslo_concurrency.lockutils [req-e3ea10c1-d2e4-477a-89ec-67f8d499315c req-9b53f985-a7e4-4b75-bc21-c3fc97338107 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.891 239942 DEBUG oslo_concurrency.lockutils [req-e3ea10c1-d2e4-477a-89ec-67f8d499315c req-9b53f985-a7e4-4b75-bc21-c3fc97338107 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.891 239942 DEBUG nova.compute.manager [req-e3ea10c1-d2e4-477a-89ec-67f8d499315c req-9b53f985-a7e4-4b75-bc21-c3fc97338107 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Processing event network-vif-plugged-556173b1-34bc-4edc-b3fa-7a144df8b331 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.892 239942 DEBUG nova.compute.manager [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.897 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835205.896756, 9a44a647-eae8-41f0-b96c-aa172ac4757a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.897 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] VM Resumed (Lifecycle Event)
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.900 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.905 239942 INFO nova.virt.libvirt.driver [-] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Instance spawned successfully.
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.905 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.923 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.934 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.942 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.942 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.943 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.944 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.945 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.946 239942 DEBUG nova.virt.libvirt.driver [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:53:25 np0005603435 nova_compute[239938]: 2026-01-31 04:53:25.973 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 30 23:53:26 np0005603435 nova_compute[239938]: 2026-01-31 04:53:26.000 239942 INFO nova.compute.manager [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Took 11.11 seconds to spawn the instance on the hypervisor.
Jan 30 23:53:26 np0005603435 nova_compute[239938]: 2026-01-31 04:53:26.001 239942 DEBUG nova.compute.manager [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:53:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Jan 30 23:53:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Jan 30 23:53:26 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Jan 30 23:53:26 np0005603435 nova_compute[239938]: 2026-01-31 04:53:26.065 239942 INFO nova.compute.manager [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Took 13.79 seconds to build instance.
Jan 30 23:53:26 np0005603435 nova_compute[239938]: 2026-01-31 04:53:26.078 239942 DEBUG oslo_concurrency.lockutils [None req-b7f3fe85-1548-4f24-b2e7-a3174474d654 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "9a44a647-eae8-41f0-b96c-aa172ac4757a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:53:26 np0005603435 podman[260828]: 2026-01-31 04:53:26.125450147 +0000 UTC m=+0.087162371 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Jan 30 23:53:26 np0005603435 podman[260829]: 2026-01-31 04:53:26.16016816 +0000 UTC m=+0.120766566 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 30 23:53:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 88 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 23 KiB/s wr, 117 op/s
Jan 30 23:53:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.015 239942 DEBUG nova.compute.manager [req-26e36cad-58b6-4eec-9b72-fb25d4c78250 req-b8f00e15-f4fe-436e-b190-bce8d392801c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Received event network-vif-plugged-556173b1-34bc-4edc-b3fa-7a144df8b331 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.017 239942 DEBUG oslo_concurrency.lockutils [req-26e36cad-58b6-4eec-9b72-fb25d4c78250 req-b8f00e15-f4fe-436e-b190-bce8d392801c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.017 239942 DEBUG oslo_concurrency.lockutils [req-26e36cad-58b6-4eec-9b72-fb25d4c78250 req-b8f00e15-f4fe-436e-b190-bce8d392801c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.017 239942 DEBUG oslo_concurrency.lockutils [req-26e36cad-58b6-4eec-9b72-fb25d4c78250 req-b8f00e15-f4fe-436e-b190-bce8d392801c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.018 239942 DEBUG nova.compute.manager [req-26e36cad-58b6-4eec-9b72-fb25d4c78250 req-b8f00e15-f4fe-436e-b190-bce8d392801c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] No waiting events found dispatching network-vif-plugged-556173b1-34bc-4edc-b3fa-7a144df8b331 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.018 239942 WARNING nova.compute.manager [req-26e36cad-58b6-4eec-9b72-fb25d4c78250 req-b8f00e15-f4fe-436e-b190-bce8d392801c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Received unexpected event network-vif-plugged-556173b1-34bc-4edc-b3fa-7a144df8b331 for instance with vm_state active and task_state None.#033[00m
Jan 30 23:53:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Jan 30 23:53:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Jan 30 23:53:28 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.334 239942 DEBUG oslo_concurrency.lockutils [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "9a44a647-eae8-41f0-b96c-aa172ac4757a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.335 239942 DEBUG oslo_concurrency.lockutils [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "9a44a647-eae8-41f0-b96c-aa172ac4757a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.335 239942 DEBUG oslo_concurrency.lockutils [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.336 239942 DEBUG oslo_concurrency.lockutils [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.336 239942 DEBUG oslo_concurrency.lockutils [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.338 239942 INFO nova.compute.manager [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Terminating instance#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.340 239942 DEBUG nova.compute.manager [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:53:28 np0005603435 kernel: tap556173b1-34 (unregistering): left promiscuous mode
Jan 30 23:53:28 np0005603435 NetworkManager[49097]: <info>  [1769835208.3810] device (tap556173b1-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:53:28 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:28Z|00137|binding|INFO|Releasing lport 556173b1-34bc-4edc-b3fa-7a144df8b331 from this chassis (sb_readonly=0)
Jan 30 23:53:28 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:28Z|00138|binding|INFO|Setting lport 556173b1-34bc-4edc-b3fa-7a144df8b331 down in Southbound
Jan 30 23:53:28 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:28Z|00139|binding|INFO|Removing iface tap556173b1-34 ovn-installed in OVS
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.389 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:28.397 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:48:2f 10.100.0.9'], port_security=['fa:16:3e:ca:48:2f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9a44a647-eae8-41f0-b96c-aa172ac4757a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e4b6ff09-e0ac-4b5c-a1ae-e4cd0ac951bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02525b3d-5afa-441f-ab06-d5abe31dc4af, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=556173b1-34bc-4edc-b3fa-7a144df8b331) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:53:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:28.399 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 556173b1-34bc-4edc-b3fa-7a144df8b331 in datapath 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 unbound from our chassis#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.402 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:28.403 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:53:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:28.404 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4de995db-a04d-4ae9-af7b-1bafd4c6025d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:28.405 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 namespace which is not needed anymore#033[00m
Jan 30 23:53:28 np0005603435 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Jan 30 23:53:28 np0005603435 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 3.268s CPU time.
Jan 30 23:53:28 np0005603435 systemd-machined[208030]: Machine qemu-13-instance-0000000d terminated.
Jan 30 23:53:28 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[260761]: [NOTICE]   (260765) : haproxy version is 2.8.14-c23fe91
Jan 30 23:53:28 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[260761]: [NOTICE]   (260765) : path to executable is /usr/sbin/haproxy
Jan 30 23:53:28 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[260761]: [WARNING]  (260765) : Exiting Master process...
Jan 30 23:53:28 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[260761]: [WARNING]  (260765) : Exiting Master process...
Jan 30 23:53:28 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[260761]: [ALERT]    (260765) : Current worker (260767) exited with code 143 (Terminated)
Jan 30 23:53:28 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[260761]: [WARNING]  (260765) : All workers exited. Exiting... (0)
Jan 30 23:53:28 np0005603435 systemd[1]: libpod-ce0968c6a2f4e543e2c457e13d9adda5e180e839ac8a0b9dfd73a7e466c9ae00.scope: Deactivated successfully.
Jan 30 23:53:28 np0005603435 podman[260899]: 2026-01-31 04:53:28.552456313 +0000 UTC m=+0.049438845 container died ce0968c6a2f4e543e2c457e13d9adda5e180e839ac8a0b9dfd73a7e466c9ae00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.574 239942 INFO nova.virt.libvirt.driver [-] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Instance destroyed successfully.#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.575 239942 DEBUG nova.objects.instance [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lazy-loading 'resources' on Instance uuid 9a44a647-eae8-41f0-b96c-aa172ac4757a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:53:28 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ce0968c6a2f4e543e2c457e13d9adda5e180e839ac8a0b9dfd73a7e466c9ae00-userdata-shm.mount: Deactivated successfully.
Jan 30 23:53:28 np0005603435 systemd[1]: var-lib-containers-storage-overlay-066ed3d2617fe023ba70116df83616de45f2c0f1ee2817a1b49a0004c4ae4bb0-merged.mount: Deactivated successfully.
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.592 239942 DEBUG nova.virt.libvirt.vif [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:53:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-431352823',display_name='tempest-TestVolumeBootPattern-server-431352823',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-431352823',id=13,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:53:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-o9i7wp7b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:53:26Z,user_data=None,user_id='e10f13b98624406985dec6a5dcc391c7',uuid=9a44a647-eae8-41f0-b96c-aa172ac4757a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "556173b1-34bc-4edc-b3fa-7a144df8b331", "address": "fa:16:3e:ca:48:2f", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap556173b1-34", "ovs_interfaceid": "556173b1-34bc-4edc-b3fa-7a144df8b331", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.592 239942 DEBUG nova.network.os_vif_util [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "556173b1-34bc-4edc-b3fa-7a144df8b331", "address": "fa:16:3e:ca:48:2f", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap556173b1-34", "ovs_interfaceid": "556173b1-34bc-4edc-b3fa-7a144df8b331", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:53:28 np0005603435 podman[260899]: 2026-01-31 04:53:28.593832108 +0000 UTC m=+0.090814610 container cleanup ce0968c6a2f4e543e2c457e13d9adda5e180e839ac8a0b9dfd73a7e466c9ae00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.593 239942 DEBUG nova.network.os_vif_util [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:48:2f,bridge_name='br-int',has_traffic_filtering=True,id=556173b1-34bc-4edc-b3fa-7a144df8b331,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap556173b1-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.594 239942 DEBUG os_vif [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:48:2f,bridge_name='br-int',has_traffic_filtering=True,id=556173b1-34bc-4edc-b3fa-7a144df8b331,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap556173b1-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.596 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.596 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap556173b1-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.599 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.600 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:53:28 np0005603435 systemd[1]: libpod-conmon-ce0968c6a2f4e543e2c457e13d9adda5e180e839ac8a0b9dfd73a7e466c9ae00.scope: Deactivated successfully.
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.602 239942 INFO os_vif [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:48:2f,bridge_name='br-int',has_traffic_filtering=True,id=556173b1-34bc-4edc-b3fa-7a144df8b331,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap556173b1-34')#033[00m
Jan 30 23:53:28 np0005603435 podman[260939]: 2026-01-31 04:53:28.654565259 +0000 UTC m=+0.038672520 container remove ce0968c6a2f4e543e2c457e13d9adda5e180e839ac8a0b9dfd73a7e466c9ae00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 30 23:53:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:28.658 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b2db9332-cb42-4fce-8c26-bde3e03772c0]: (4, ('Sat Jan 31 04:53:28 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 (ce0968c6a2f4e543e2c457e13d9adda5e180e839ac8a0b9dfd73a7e466c9ae00)\nce0968c6a2f4e543e2c457e13d9adda5e180e839ac8a0b9dfd73a7e466c9ae00\nSat Jan 31 04:53:28 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 (ce0968c6a2f4e543e2c457e13d9adda5e180e839ac8a0b9dfd73a7e466c9ae00)\nce0968c6a2f4e543e2c457e13d9adda5e180e839ac8a0b9dfd73a7e466c9ae00\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:28.660 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[3a98d944-193e-4a2c-8522-2b742b7fb755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:28.661 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b0cf2db-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.662 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:28 np0005603435 kernel: tap5b0cf2db-20: left promiscuous mode
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.668 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:28.670 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[242c92cc-64bc-4868-9464-5571a0050935]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:28.684 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[dbcb76f6-8473-467c-9e67-3489c03b22c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:28.685 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[aea9f220-d207-4004-a9a8-98826fe18e2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:28.700 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[cca70a28-acbf-49fa-a0c4-9f839101b87f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425408, 'reachable_time': 26550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260972, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:28 np0005603435 systemd[1]: run-netns-ovnmeta\x2d5b0cf2db\x2d2e35\x2d41fa\x2d9783\x2d30f0fe6ea7a3.mount: Deactivated successfully.
Jan 30 23:53:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:28.703 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:53:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:28.703 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[3094ab0e-be21-4ea7-a8cd-e2bbdcadb8fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.745 239942 INFO nova.virt.libvirt.driver [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Deleting instance files /var/lib/nova/instances/9a44a647-eae8-41f0-b96c-aa172ac4757a_del#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.746 239942 INFO nova.virt.libvirt.driver [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Deletion of /var/lib/nova/instances/9a44a647-eae8-41f0-b96c-aa172ac4757a_del complete#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.815 239942 INFO nova.compute.manager [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Took 0.47 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.816 239942 DEBUG oslo.service.loopingcall [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.816 239942 DEBUG nova.compute.manager [-] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:53:28 np0005603435 nova_compute[239938]: 2026-01-31 04:53:28.817 239942 DEBUG nova.network.neutron [-] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:53:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 88 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.0 KiB/s wr, 81 op/s
Jan 30 23:53:29 np0005603435 nova_compute[239938]: 2026-01-31 04:53:29.608 239942 DEBUG nova.network.neutron [-] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:53:29 np0005603435 nova_compute[239938]: 2026-01-31 04:53:29.631 239942 INFO nova.compute.manager [-] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Took 0.81 seconds to deallocate network for instance.#033[00m
Jan 30 23:53:29 np0005603435 nova_compute[239938]: 2026-01-31 04:53:29.674 239942 DEBUG nova.compute.manager [req-e89f8efd-ed2c-4a0a-b0c3-1325724cd8b7 req-6050cde2-616a-4ddf-8332-331e63ef8c21 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Received event network-vif-deleted-556173b1-34bc-4edc-b3fa-7a144df8b331 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:53:29 np0005603435 nova_compute[239938]: 2026-01-31 04:53:29.808 239942 INFO nova.compute.manager [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Took 0.18 seconds to detach 1 volumes for instance.#033[00m
Jan 30 23:53:29 np0005603435 nova_compute[239938]: 2026-01-31 04:53:29.856 239942 DEBUG oslo_concurrency.lockutils [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:29 np0005603435 nova_compute[239938]: 2026-01-31 04:53:29.857 239942 DEBUG oslo_concurrency.lockutils [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:29 np0005603435 nova_compute[239938]: 2026-01-31 04:53:29.857 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:29 np0005603435 nova_compute[239938]: 2026-01-31 04:53:29.925 239942 DEBUG oslo_concurrency.processutils [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.093 239942 DEBUG nova.compute.manager [req-bf6fe5b6-2514-4014-9036-4c8d39ff652e req-298a7d3a-6f62-49e1-b5a1-ec2b3318b9c0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Received event network-vif-unplugged-556173b1-34bc-4edc-b3fa-7a144df8b331 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.094 239942 DEBUG oslo_concurrency.lockutils [req-bf6fe5b6-2514-4014-9036-4c8d39ff652e req-298a7d3a-6f62-49e1-b5a1-ec2b3318b9c0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.094 239942 DEBUG oslo_concurrency.lockutils [req-bf6fe5b6-2514-4014-9036-4c8d39ff652e req-298a7d3a-6f62-49e1-b5a1-ec2b3318b9c0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.094 239942 DEBUG oslo_concurrency.lockutils [req-bf6fe5b6-2514-4014-9036-4c8d39ff652e req-298a7d3a-6f62-49e1-b5a1-ec2b3318b9c0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.094 239942 DEBUG nova.compute.manager [req-bf6fe5b6-2514-4014-9036-4c8d39ff652e req-298a7d3a-6f62-49e1-b5a1-ec2b3318b9c0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] No waiting events found dispatching network-vif-unplugged-556173b1-34bc-4edc-b3fa-7a144df8b331 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.095 239942 WARNING nova.compute.manager [req-bf6fe5b6-2514-4014-9036-4c8d39ff652e req-298a7d3a-6f62-49e1-b5a1-ec2b3318b9c0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Received unexpected event network-vif-unplugged-556173b1-34bc-4edc-b3fa-7a144df8b331 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.095 239942 DEBUG nova.compute.manager [req-bf6fe5b6-2514-4014-9036-4c8d39ff652e req-298a7d3a-6f62-49e1-b5a1-ec2b3318b9c0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Received event network-vif-plugged-556173b1-34bc-4edc-b3fa-7a144df8b331 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.095 239942 DEBUG oslo_concurrency.lockutils [req-bf6fe5b6-2514-4014-9036-4c8d39ff652e req-298a7d3a-6f62-49e1-b5a1-ec2b3318b9c0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.095 239942 DEBUG oslo_concurrency.lockutils [req-bf6fe5b6-2514-4014-9036-4c8d39ff652e req-298a7d3a-6f62-49e1-b5a1-ec2b3318b9c0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.095 239942 DEBUG oslo_concurrency.lockutils [req-bf6fe5b6-2514-4014-9036-4c8d39ff652e req-298a7d3a-6f62-49e1-b5a1-ec2b3318b9c0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "9a44a647-eae8-41f0-b96c-aa172ac4757a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.095 239942 DEBUG nova.compute.manager [req-bf6fe5b6-2514-4014-9036-4c8d39ff652e req-298a7d3a-6f62-49e1-b5a1-ec2b3318b9c0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] No waiting events found dispatching network-vif-plugged-556173b1-34bc-4edc-b3fa-7a144df8b331 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.095 239942 WARNING nova.compute.manager [req-bf6fe5b6-2514-4014-9036-4c8d39ff652e req-298a7d3a-6f62-49e1-b5a1-ec2b3318b9c0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Received unexpected event network-vif-plugged-556173b1-34bc-4edc-b3fa-7a144df8b331 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:53:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:53:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3785694351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.517 239942 DEBUG oslo_concurrency.processutils [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.523 239942 DEBUG nova.compute.provider_tree [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.546 239942 DEBUG nova.scheduler.client.report [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.580 239942 DEBUG oslo_concurrency.lockutils [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.610 239942 INFO nova.scheduler.client.report [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Deleted allocations for instance 9a44a647-eae8-41f0-b96c-aa172ac4757a#033[00m
Jan 30 23:53:30 np0005603435 nova_compute[239938]: 2026-01-31 04:53:30.718 239942 DEBUG oslo_concurrency.lockutils [None req-2a04ad38-b179-4dd7-9b64-43adfc258b85 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "9a44a647-eae8-41f0-b96c-aa172ac4757a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 88 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.8 KiB/s wr, 79 op/s
Jan 30 23:53:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:53:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3548544533' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:53:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:53:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3548544533' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:53:32 np0005603435 nova_compute[239938]: 2026-01-31 04:53:32.497 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835197.4966729, 983eb240-9938-4bbf-aafb-2562f4738906 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:53:32 np0005603435 nova_compute[239938]: 2026-01-31 04:53:32.498 239942 INFO nova.compute.manager [-] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:53:32 np0005603435 nova_compute[239938]: 2026-01-31 04:53:32.522 239942 DEBUG nova.compute.manager [None req-3bad4aac-2733-4031-9448-20e03f0bbfc2 - - - - - -] [instance: 983eb240-9938-4bbf-aafb-2562f4738906] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:53:32 np0005603435 nova_compute[239938]: 2026-01-31 04:53:32.531 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "961014c5-246e-4bd6-b7e8-86d49599034a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:32 np0005603435 nova_compute[239938]: 2026-01-31 04:53:32.531 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:32 np0005603435 nova_compute[239938]: 2026-01-31 04:53:32.548 239942 DEBUG nova.compute.manager [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:53:32 np0005603435 nova_compute[239938]: 2026-01-31 04:53:32.617 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:32 np0005603435 nova_compute[239938]: 2026-01-31 04:53:32.618 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:32 np0005603435 nova_compute[239938]: 2026-01-31 04:53:32.625 239942 DEBUG nova.virt.hardware [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:53:32 np0005603435 nova_compute[239938]: 2026-01-31 04:53:32.626 239942 INFO nova.compute.claims [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:53:32 np0005603435 nova_compute[239938]: 2026-01-31 04:53:32.722 239942 DEBUG oslo_concurrency.processutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:53:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Jan 30 23:53:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Jan 30 23:53:33 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Jan 30 23:53:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:53:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/542473322' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:53:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:53:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/542473322' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:53:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:53:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2549324093' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.298 239942 DEBUG oslo_concurrency.processutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.305 239942 DEBUG nova.compute.provider_tree [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.327 239942 DEBUG nova.scheduler.client.report [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.365 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.366 239942 DEBUG nova.compute.manager [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:53:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 88 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.9 KiB/s wr, 101 op/s
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.432 239942 DEBUG nova.compute.manager [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.433 239942 DEBUG nova.network.neutron [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.457 239942 INFO nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.480 239942 DEBUG nova.compute.manager [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.594 239942 DEBUG nova.compute.manager [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.596 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.596 239942 INFO nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Creating image(s)#033[00m
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.624 239942 DEBUG nova.storage.rbd_utils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] rbd image 961014c5-246e-4bd6-b7e8-86d49599034a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.652 239942 DEBUG nova.storage.rbd_utils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] rbd image 961014c5-246e-4bd6-b7e8-86d49599034a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.679 239942 DEBUG nova.storage.rbd_utils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] rbd image 961014c5-246e-4bd6-b7e8-86d49599034a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.684 239942 DEBUG oslo_concurrency.processutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.703 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.741 239942 DEBUG oslo_concurrency.processutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.742 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.743 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.744 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.769 239942 DEBUG nova.storage.rbd_utils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] rbd image 961014c5-246e-4bd6-b7e8-86d49599034a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.773 239942 DEBUG oslo_concurrency.processutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 961014c5-246e-4bd6-b7e8-86d49599034a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:53:33 np0005603435 nova_compute[239938]: 2026-01-31 04:53:33.893 239942 DEBUG nova.policy [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd3612e26aca645d895f083e0d58dfd69', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f5ce1f57546045d891de80fbaff2512b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 30 23:53:34 np0005603435 nova_compute[239938]: 2026-01-31 04:53:34.042 239942 DEBUG oslo_concurrency.processutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 961014c5-246e-4bd6-b7e8-86d49599034a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:53:34 np0005603435 nova_compute[239938]: 2026-01-31 04:53:34.113 239942 DEBUG nova.storage.rbd_utils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] resizing rbd image 961014c5-246e-4bd6-b7e8-86d49599034a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 30 23:53:34 np0005603435 nova_compute[239938]: 2026-01-31 04:53:34.191 239942 DEBUG nova.objects.instance [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lazy-loading 'migration_context' on Instance uuid 961014c5-246e-4bd6-b7e8-86d49599034a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 30 23:53:34 np0005603435 nova_compute[239938]: 2026-01-31 04:53:34.204 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 30 23:53:34 np0005603435 nova_compute[239938]: 2026-01-31 04:53:34.204 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Ensure instance console log exists: /var/lib/nova/instances/961014c5-246e-4bd6-b7e8-86d49599034a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 30 23:53:34 np0005603435 nova_compute[239938]: 2026-01-31 04:53:34.205 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:53:34 np0005603435 nova_compute[239938]: 2026-01-31 04:53:34.205 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:53:34 np0005603435 nova_compute[239938]: 2026-01-31 04:53:34.206 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:53:34 np0005603435 nova_compute[239938]: 2026-01-31 04:53:34.918 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:53:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:53:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:53:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:53:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:53:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:53:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:53:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:53:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:53:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:53:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:53:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:53:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:53:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:53:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:53:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:53:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 105 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 619 KiB/s rd, 1.2 MiB/s wr, 73 op/s
Jan 30 23:53:35 np0005603435 nova_compute[239938]: 2026-01-31 04:53:35.445 239942 DEBUG nova.network.neutron [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Successfully created port: 0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 30 23:53:35 np0005603435 podman[261328]: 2026-01-31 04:53:35.768662388 +0000 UTC m=+0.058293833 container create 03ea4bfa7582924302884ce286340dd47df1abfda046ea1e9206c3a7189dc93d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_kepler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 30 23:53:35 np0005603435 systemd[1]: Started libpod-conmon-03ea4bfa7582924302884ce286340dd47df1abfda046ea1e9206c3a7189dc93d.scope.
Jan 30 23:53:35 np0005603435 podman[261328]: 2026-01-31 04:53:35.741591273 +0000 UTC m=+0.031222748 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:53:35 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:53:35 np0005603435 podman[261328]: 2026-01-31 04:53:35.858688548 +0000 UTC m=+0.148320053 container init 03ea4bfa7582924302884ce286340dd47df1abfda046ea1e9206c3a7189dc93d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_kepler, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 30 23:53:35 np0005603435 podman[261328]: 2026-01-31 04:53:35.867051403 +0000 UTC m=+0.156682848 container start 03ea4bfa7582924302884ce286340dd47df1abfda046ea1e9206c3a7189dc93d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_kepler, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:53:35 np0005603435 podman[261328]: 2026-01-31 04:53:35.870869707 +0000 UTC m=+0.160501142 container attach 03ea4bfa7582924302884ce286340dd47df1abfda046ea1e9206c3a7189dc93d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_kepler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 30 23:53:35 np0005603435 distracted_kepler[261344]: 167 167
Jan 30 23:53:35 np0005603435 systemd[1]: libpod-03ea4bfa7582924302884ce286340dd47df1abfda046ea1e9206c3a7189dc93d.scope: Deactivated successfully.
Jan 30 23:53:35 np0005603435 podman[261328]: 2026-01-31 04:53:35.87466796 +0000 UTC m=+0.164299395 container died 03ea4bfa7582924302884ce286340dd47df1abfda046ea1e9206c3a7189dc93d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:53:35 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0107154b0d6baf8906b0bbf2cea23854013d309b41a434a721b2a9481f87fc0b-merged.mount: Deactivated successfully.
Jan 30 23:53:35 np0005603435 podman[261328]: 2026-01-31 04:53:35.920844344 +0000 UTC m=+0.210475789 container remove 03ea4bfa7582924302884ce286340dd47df1abfda046ea1e9206c3a7189dc93d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_kepler, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:53:35 np0005603435 systemd[1]: libpod-conmon-03ea4bfa7582924302884ce286340dd47df1abfda046ea1e9206c3a7189dc93d.scope: Deactivated successfully.
Jan 30 23:53:36 np0005603435 podman[261367]: 2026-01-31 04:53:36.112972621 +0000 UTC m=+0.059765788 container create 9ff58a72dc10cf21d577e8cae27da7c15bf1a0f27766d806f800d6a611891e11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_faraday, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:53:36 np0005603435 systemd[1]: Started libpod-conmon-9ff58a72dc10cf21d577e8cae27da7c15bf1a0f27766d806f800d6a611891e11.scope.
Jan 30 23:53:36 np0005603435 podman[261367]: 2026-01-31 04:53:36.089251479 +0000 UTC m=+0.036044696 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:53:36 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:53:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1f9f94a781b85a8e9795478e081089deab580c88412b3a03b3520f248cbb1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1f9f94a781b85a8e9795478e081089deab580c88412b3a03b3520f248cbb1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1f9f94a781b85a8e9795478e081089deab580c88412b3a03b3520f248cbb1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1f9f94a781b85a8e9795478e081089deab580c88412b3a03b3520f248cbb1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1f9f94a781b85a8e9795478e081089deab580c88412b3a03b3520f248cbb1a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:36 np0005603435 podman[261367]: 2026-01-31 04:53:36.230147338 +0000 UTC m=+0.176940575 container init 9ff58a72dc10cf21d577e8cae27da7c15bf1a0f27766d806f800d6a611891e11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 30 23:53:36 np0005603435 podman[261367]: 2026-01-31 04:53:36.243534807 +0000 UTC m=+0.190327984 container start 9ff58a72dc10cf21d577e8cae27da7c15bf1a0f27766d806f800d6a611891e11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_faraday, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:53:36 np0005603435 podman[261367]: 2026-01-31 04:53:36.247019912 +0000 UTC m=+0.193813089 container attach 9ff58a72dc10cf21d577e8cae27da7c15bf1a0f27766d806f800d6a611891e11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_faraday, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 30 23:53:36 np0005603435 nova_compute[239938]: 2026-01-31 04:53:36.370 239942 DEBUG nova.network.neutron [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Successfully updated port: 0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 30 23:53:36 np0005603435 nova_compute[239938]: 2026-01-31 04:53:36.392 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "refresh_cache-961014c5-246e-4bd6-b7e8-86d49599034a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 30 23:53:36 np0005603435 nova_compute[239938]: 2026-01-31 04:53:36.393 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquired lock "refresh_cache-961014c5-246e-4bd6-b7e8-86d49599034a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 30 23:53:36 np0005603435 nova_compute[239938]: 2026-01-31 04:53:36.393 239942 DEBUG nova.network.neutron [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 30 23:53:36 np0005603435 nova_compute[239938]: 2026-01-31 04:53:36.475 239942 DEBUG nova.compute.manager [req-387b594d-aa20-4985-ac99-ac9c0ead8f75 req-dd59605e-15a8-4da0-b7cc-f3c2f7f01a57 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Received event network-changed-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 30 23:53:36 np0005603435 nova_compute[239938]: 2026-01-31 04:53:36.476 239942 DEBUG nova.compute.manager [req-387b594d-aa20-4985-ac99-ac9c0ead8f75 req-dd59605e-15a8-4da0-b7cc-f3c2f7f01a57 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Refreshing instance network info cache due to event network-changed-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 30 23:53:36 np0005603435 nova_compute[239938]: 2026-01-31 04:53:36.476 239942 DEBUG oslo_concurrency.lockutils [req-387b594d-aa20-4985-ac99-ac9c0ead8f75 req-dd59605e-15a8-4da0-b7cc-f3c2f7f01a57 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-961014c5-246e-4bd6-b7e8-86d49599034a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 30 23:53:36 np0005603435 blissful_faraday[261383]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:53:36 np0005603435 blissful_faraday[261383]: --> All data devices are unavailable
Jan 30 23:53:36 np0005603435 systemd[1]: libpod-9ff58a72dc10cf21d577e8cae27da7c15bf1a0f27766d806f800d6a611891e11.scope: Deactivated successfully.
Jan 30 23:53:36 np0005603435 podman[261367]: 2026-01-31 04:53:36.715986326 +0000 UTC m=+0.662779493 container died 9ff58a72dc10cf21d577e8cae27da7c15bf1a0f27766d806f800d6a611891e11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:53:36 np0005603435 systemd[1]: var-lib-containers-storage-overlay-4e1f9f94a781b85a8e9795478e081089deab580c88412b3a03b3520f248cbb1a-merged.mount: Deactivated successfully.
Jan 30 23:53:36 np0005603435 nova_compute[239938]: 2026-01-31 04:53:36.748 239942 DEBUG nova.network.neutron [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 30 23:53:36 np0005603435 podman[261367]: 2026-01-31 04:53:36.761262857 +0000 UTC m=+0.708055994 container remove 9ff58a72dc10cf21d577e8cae27da7c15bf1a0f27766d806f800d6a611891e11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:53:36 np0005603435 systemd[1]: libpod-conmon-9ff58a72dc10cf21d577e8cae27da7c15bf1a0f27766d806f800d6a611891e11.scope: Deactivated successfully.
Jan 30 23:53:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:53:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:53:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:53:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:53:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:53:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:53:37 np0005603435 podman[261480]: 2026-01-31 04:53:37.172251578 +0000 UTC m=+0.028588863 container create 7f58f2154002bdcaa985cb2f7f1be2096724d6f5337c931f76ca39f77cbdbfbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 30 23:53:37 np0005603435 systemd[1]: Started libpod-conmon-7f58f2154002bdcaa985cb2f7f1be2096724d6f5337c931f76ca39f77cbdbfbe.scope.
Jan 30 23:53:37 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:53:37 np0005603435 podman[261480]: 2026-01-31 04:53:37.234747692 +0000 UTC m=+0.091084987 container init 7f58f2154002bdcaa985cb2f7f1be2096724d6f5337c931f76ca39f77cbdbfbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 30 23:53:37 np0005603435 podman[261480]: 2026-01-31 04:53:37.238955445 +0000 UTC m=+0.095292760 container start 7f58f2154002bdcaa985cb2f7f1be2096724d6f5337c931f76ca39f77cbdbfbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nash, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 30 23:53:37 np0005603435 podman[261480]: 2026-01-31 04:53:37.242150284 +0000 UTC m=+0.098487599 container attach 7f58f2154002bdcaa985cb2f7f1be2096724d6f5337c931f76ca39f77cbdbfbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 30 23:53:37 np0005603435 quizzical_nash[261496]: 167 167
Jan 30 23:53:37 np0005603435 systemd[1]: libpod-7f58f2154002bdcaa985cb2f7f1be2096724d6f5337c931f76ca39f77cbdbfbe.scope: Deactivated successfully.
Jan 30 23:53:37 np0005603435 podman[261480]: 2026-01-31 04:53:37.244487431 +0000 UTC m=+0.100824756 container died 7f58f2154002bdcaa985cb2f7f1be2096724d6f5337c931f76ca39f77cbdbfbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:53:37 np0005603435 podman[261480]: 2026-01-31 04:53:37.158966262 +0000 UTC m=+0.015303577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:53:37 np0005603435 systemd[1]: var-lib-containers-storage-overlay-8dc760e8c69d8717f22ac06c96a228cd5168fc23200d27271ca3ac50b316a531-merged.mount: Deactivated successfully.
Jan 30 23:53:37 np0005603435 podman[261480]: 2026-01-31 04:53:37.282903314 +0000 UTC m=+0.139240619 container remove 7f58f2154002bdcaa985cb2f7f1be2096724d6f5337c931f76ca39f77cbdbfbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:53:37 np0005603435 systemd[1]: libpod-conmon-7f58f2154002bdcaa985cb2f7f1be2096724d6f5337c931f76ca39f77cbdbfbe.scope: Deactivated successfully.
Jan 30 23:53:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 134 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 92 op/s
Jan 30 23:53:37 np0005603435 podman[261520]: 2026-01-31 04:53:37.439461298 +0000 UTC m=+0.061695276 container create 0c97a239a494ebd39074e27960ccce6fef6d8f94b862aa4818ac3e96741fe393 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 30 23:53:37 np0005603435 systemd[1]: Started libpod-conmon-0c97a239a494ebd39074e27960ccce6fef6d8f94b862aa4818ac3e96741fe393.scope.
Jan 30 23:53:37 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:53:37 np0005603435 podman[261520]: 2026-01-31 04:53:37.411288717 +0000 UTC m=+0.033522745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:53:37 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d82ae1d600c77c3c603c2bcc0cd17b1f1ed827d7e3687b5b262776fec4dcf0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:37 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d82ae1d600c77c3c603c2bcc0cd17b1f1ed827d7e3687b5b262776fec4dcf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:37 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d82ae1d600c77c3c603c2bcc0cd17b1f1ed827d7e3687b5b262776fec4dcf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:37 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d82ae1d600c77c3c603c2bcc0cd17b1f1ed827d7e3687b5b262776fec4dcf0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:37 np0005603435 podman[261520]: 2026-01-31 04:53:37.532464551 +0000 UTC m=+0.154698519 container init 0c97a239a494ebd39074e27960ccce6fef6d8f94b862aa4818ac3e96741fe393 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 30 23:53:37 np0005603435 podman[261520]: 2026-01-31 04:53:37.545365448 +0000 UTC m=+0.167599396 container start 0c97a239a494ebd39074e27960ccce6fef6d8f94b862aa4818ac3e96741fe393 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:53:37 np0005603435 podman[261520]: 2026-01-31 04:53:37.549944131 +0000 UTC m=+0.172178149 container attach 0c97a239a494ebd39074e27960ccce6fef6d8f94b862aa4818ac3e96741fe393 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.572 239942 DEBUG nova.network.neutron [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Updating instance_info_cache with network_info: [{"id": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "address": "fa:16:3e:a9:94:43", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfbe40d-2b", "ovs_interfaceid": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.600 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Releasing lock "refresh_cache-961014c5-246e-4bd6-b7e8-86d49599034a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.601 239942 DEBUG nova.compute.manager [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Instance network_info: |[{"id": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "address": "fa:16:3e:a9:94:43", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfbe40d-2b", "ovs_interfaceid": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.601 239942 DEBUG oslo_concurrency.lockutils [req-387b594d-aa20-4985-ac99-ac9c0ead8f75 req-dd59605e-15a8-4da0-b7cc-f3c2f7f01a57 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-961014c5-246e-4bd6-b7e8-86d49599034a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.602 239942 DEBUG nova.network.neutron [req-387b594d-aa20-4985-ac99-ac9c0ead8f75 req-dd59605e-15a8-4da0-b7cc-f3c2f7f01a57 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Refreshing network info cache for port 0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.606 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Start _get_guest_xml network_info=[{"id": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "address": "fa:16:3e:a9:94:43", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfbe40d-2b", "ovs_interfaceid": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'guest_format': None, 'image_id': 'bf004ad8-fb70-4caa-9170-9f02e22d687d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.614 239942 WARNING nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.622 239942 DEBUG nova.virt.libvirt.host [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.623 239942 DEBUG nova.virt.libvirt.host [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.629 239942 DEBUG nova.virt.libvirt.host [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.630 239942 DEBUG nova.virt.libvirt.host [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.630 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.631 239942 DEBUG nova.virt.hardware [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.632 239942 DEBUG nova.virt.hardware [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.632 239942 DEBUG nova.virt.hardware [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.632 239942 DEBUG nova.virt.hardware [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.633 239942 DEBUG nova.virt.hardware [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.633 239942 DEBUG nova.virt.hardware [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.633 239942 DEBUG nova.virt.hardware [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.634 239942 DEBUG nova.virt.hardware [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.634 239942 DEBUG nova.virt.hardware [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.635 239942 DEBUG nova.virt.hardware [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.635 239942 DEBUG nova.virt.hardware [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:53:37 np0005603435 nova_compute[239938]: 2026-01-31 04:53:37.639 239942 DEBUG oslo_concurrency.processutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]: {
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:    "0": [
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:        {
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "devices": [
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "/dev/loop3"
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            ],
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "lv_name": "ceph_lv0",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "lv_size": "21470642176",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "name": "ceph_lv0",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "tags": {
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.cluster_name": "ceph",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.crush_device_class": "",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.encrypted": "0",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.objectstore": "bluestore",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.osd_id": "0",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.type": "block",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.vdo": "0",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.with_tpm": "0"
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            },
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "type": "block",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "vg_name": "ceph_vg0"
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:        }
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:    ],
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:    "1": [
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:        {
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "devices": [
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "/dev/loop4"
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            ],
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "lv_name": "ceph_lv1",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "lv_size": "21470642176",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "name": "ceph_lv1",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "tags": {
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.cluster_name": "ceph",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.crush_device_class": "",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.encrypted": "0",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.objectstore": "bluestore",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.osd_id": "1",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.type": "block",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.vdo": "0",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.with_tpm": "0"
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            },
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "type": "block",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "vg_name": "ceph_vg1"
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:        }
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:    ],
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:    "2": [
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:        {
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "devices": [
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "/dev/loop5"
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            ],
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "lv_name": "ceph_lv2",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "lv_size": "21470642176",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "name": "ceph_lv2",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "tags": {
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.cluster_name": "ceph",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.crush_device_class": "",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.encrypted": "0",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.objectstore": "bluestore",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.osd_id": "2",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.type": "block",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.vdo": "0",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:                "ceph.with_tpm": "0"
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            },
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "type": "block",
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:            "vg_name": "ceph_vg2"
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:        }
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]:    ]
Jan 30 23:53:37 np0005603435 youthful_goldwasser[261536]: }
Jan 30 23:53:37 np0005603435 systemd[1]: libpod-0c97a239a494ebd39074e27960ccce6fef6d8f94b862aa4818ac3e96741fe393.scope: Deactivated successfully.
Jan 30 23:53:37 np0005603435 podman[261520]: 2026-01-31 04:53:37.83097655 +0000 UTC m=+0.453210498 container died 0c97a239a494ebd39074e27960ccce6fef6d8f94b862aa4818ac3e96741fe393 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_goldwasser, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:53:37 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d5d82ae1d600c77c3c603c2bcc0cd17b1f1ed827d7e3687b5b262776fec4dcf0-merged.mount: Deactivated successfully.
Jan 30 23:53:37 np0005603435 podman[261520]: 2026-01-31 04:53:37.874754865 +0000 UTC m=+0.496988813 container remove 0c97a239a494ebd39074e27960ccce6fef6d8f94b862aa4818ac3e96741fe393 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:53:37 np0005603435 systemd[1]: libpod-conmon-0c97a239a494ebd39074e27960ccce6fef6d8f94b862aa4818ac3e96741fe393.scope: Deactivated successfully.
Jan 30 23:53:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:53:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:53:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2464033565' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.243 239942 DEBUG oslo_concurrency.processutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.276 239942 DEBUG nova.storage.rbd_utils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] rbd image 961014c5-246e-4bd6-b7e8-86d49599034a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.281 239942 DEBUG oslo_concurrency.processutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:38 np0005603435 podman[261660]: 2026-01-31 04:53:38.334143614 +0000 UTC m=+0.050510811 container create f1300bba77d4a67f4fb7088c9ac9d8ff4bbb61dd275f780d48a3ce34e8b63e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 30 23:53:38 np0005603435 systemd[1]: Started libpod-conmon-f1300bba77d4a67f4fb7088c9ac9d8ff4bbb61dd275f780d48a3ce34e8b63e19.scope.
Jan 30 23:53:38 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:53:38 np0005603435 podman[261660]: 2026-01-31 04:53:38.313924557 +0000 UTC m=+0.030291774 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:53:38 np0005603435 podman[261660]: 2026-01-31 04:53:38.412782714 +0000 UTC m=+0.129149981 container init f1300bba77d4a67f4fb7088c9ac9d8ff4bbb61dd275f780d48a3ce34e8b63e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bose, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:53:38 np0005603435 podman[261660]: 2026-01-31 04:53:38.419893369 +0000 UTC m=+0.136260576 container start f1300bba77d4a67f4fb7088c9ac9d8ff4bbb61dd275f780d48a3ce34e8b63e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:53:38 np0005603435 modest_bose[261681]: 167 167
Jan 30 23:53:38 np0005603435 systemd[1]: libpod-f1300bba77d4a67f4fb7088c9ac9d8ff4bbb61dd275f780d48a3ce34e8b63e19.scope: Deactivated successfully.
Jan 30 23:53:38 np0005603435 podman[261660]: 2026-01-31 04:53:38.424328008 +0000 UTC m=+0.140695275 container attach f1300bba77d4a67f4fb7088c9ac9d8ff4bbb61dd275f780d48a3ce34e8b63e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bose, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:53:38 np0005603435 podman[261660]: 2026-01-31 04:53:38.424941833 +0000 UTC m=+0.141309040 container died f1300bba77d4a67f4fb7088c9ac9d8ff4bbb61dd275f780d48a3ce34e8b63e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 30 23:53:38 np0005603435 systemd[1]: var-lib-containers-storage-overlay-c92519095711f7b3febaa8acc22ba11ea8114a531f3bd9edf0cda4709186759e-merged.mount: Deactivated successfully.
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.705 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:53:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1884204879' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:53:38 np0005603435 podman[261660]: 2026-01-31 04:53:38.881127982 +0000 UTC m=+0.597495179 container remove f1300bba77d4a67f4fb7088c9ac9d8ff4bbb61dd275f780d48a3ce34e8b63e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 30 23:53:38 np0005603435 systemd[1]: libpod-conmon-f1300bba77d4a67f4fb7088c9ac9d8ff4bbb61dd275f780d48a3ce34e8b63e19.scope: Deactivated successfully.
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.893 239942 DEBUG oslo_concurrency.processutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.894 239942 DEBUG nova.virt.libvirt.vif [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:53:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1283462290',display_name='tempest-VolumesSnapshotTestJSON-instance-1283462290',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1283462290',id=14,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOlEf2eEu1YgJYAKQ2o/udbNnsFo6lie3hHqiLJVuWBRQsmg3oD8c6k+QIGqtXaYo4wrW2uri+A3vSiljyf1HCUwxZlS+9pWO3GBxlWISzNrJl1vnewd8jiRr9epbAuQOQ==',key_name='tempest-keypair-1251668977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5ce1f57546045d891de80fbaff2512b',ramdisk_id='',reservation_id='r-07m0n3f4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-541584434',owner_user_name='tempest-VolumesSnapshotTestJSON-541584434-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:53:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3612e26aca645d895f083e0d58dfd69',uuid=961014c5-246e-4bd6-b7e8-86d49599034a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "address": "fa:16:3e:a9:94:43", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfbe40d-2b", "ovs_interfaceid": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.894 239942 DEBUG nova.network.os_vif_util [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Converting VIF {"id": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "address": "fa:16:3e:a9:94:43", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfbe40d-2b", "ovs_interfaceid": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.895 239942 DEBUG nova.network.os_vif_util [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:94:43,bridge_name='br-int',has_traffic_filtering=True,id=0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7,network=Network(45b5ded5-5fe4-488c-aa97-cad6ca9b361e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dfbe40d-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.896 239942 DEBUG nova.objects.instance [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lazy-loading 'pci_devices' on Instance uuid 961014c5-246e-4bd6-b7e8-86d49599034a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.914 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  <uuid>961014c5-246e-4bd6-b7e8-86d49599034a</uuid>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  <name>instance-0000000e</name>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <nova:name>tempest-VolumesSnapshotTestJSON-instance-1283462290</nova:name>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:53:37</nova:creationTime>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:        <nova:user uuid="d3612e26aca645d895f083e0d58dfd69">tempest-VolumesSnapshotTestJSON-541584434-project-member</nova:user>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:        <nova:project uuid="f5ce1f57546045d891de80fbaff2512b">tempest-VolumesSnapshotTestJSON-541584434</nova:project>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <nova:root type="image" uuid="bf004ad8-fb70-4caa-9170-9f02e22d687d"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:        <nova:port uuid="0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <entry name="serial">961014c5-246e-4bd6-b7e8-86d49599034a</entry>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <entry name="uuid">961014c5-246e-4bd6-b7e8-86d49599034a</entry>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/961014c5-246e-4bd6-b7e8-86d49599034a_disk">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/961014c5-246e-4bd6-b7e8-86d49599034a_disk.config">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:a9:94:43"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <target dev="tap0dfbe40d-2b"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/961014c5-246e-4bd6-b7e8-86d49599034a/console.log" append="off"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:53:38 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:53:38 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:53:38 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:53:38 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.915 239942 DEBUG nova.compute.manager [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Preparing to wait for external event network-vif-plugged-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.916 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.916 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.916 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.917 239942 DEBUG nova.virt.libvirt.vif [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:53:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1283462290',display_name='tempest-VolumesSnapshotTestJSON-instance-1283462290',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1283462290',id=14,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOlEf2eEu1YgJYAKQ2o/udbNnsFo6lie3hHqiLJVuWBRQsmg3oD8c6k+QIGqtXaYo4wrW2uri+A3vSiljyf1HCUwxZlS+9pWO3GBxlWISzNrJl1vnewd8jiRr9epbAuQOQ==',key_name='tempest-keypair-1251668977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5ce1f57546045d891de80fbaff2512b',ramdisk_id='',reservation_id='r-07m0n3f4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-541584434',owner_user_name='tempest-VolumesSnapshotTestJSON-541584434-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:53:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3612e26aca645d895f083e0d58dfd69',uuid=961014c5-246e-4bd6-b7e8-86d49599034a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "address": "fa:16:3e:a9:94:43", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfbe40d-2b", "ovs_interfaceid": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.917 239942 DEBUG nova.network.os_vif_util [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Converting VIF {"id": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "address": "fa:16:3e:a9:94:43", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfbe40d-2b", "ovs_interfaceid": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.918 239942 DEBUG nova.network.os_vif_util [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:94:43,bridge_name='br-int',has_traffic_filtering=True,id=0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7,network=Network(45b5ded5-5fe4-488c-aa97-cad6ca9b361e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dfbe40d-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.918 239942 DEBUG os_vif [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:94:43,bridge_name='br-int',has_traffic_filtering=True,id=0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7,network=Network(45b5ded5-5fe4-488c-aa97-cad6ca9b361e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dfbe40d-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.918 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.919 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.919 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.922 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.922 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0dfbe40d-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.923 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0dfbe40d-2b, col_values=(('external_ids', {'iface-id': '0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:94:43', 'vm-uuid': '961014c5-246e-4bd6-b7e8-86d49599034a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.924 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:38 np0005603435 NetworkManager[49097]: <info>  [1769835218.9251] manager: (tap0dfbe40d-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.927 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.929 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.929 239942 INFO os_vif [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:94:43,bridge_name='br-int',has_traffic_filtering=True,id=0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7,network=Network(45b5ded5-5fe4-488c-aa97-cad6ca9b361e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dfbe40d-2b')#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.994 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.995 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.995 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] No VIF found with MAC fa:16:3e:a9:94:43, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:53:38 np0005603435 nova_compute[239938]: 2026-01-31 04:53:38.996 239942 INFO nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Using config drive#033[00m
Jan 30 23:53:39 np0005603435 nova_compute[239938]: 2026-01-31 04:53:39.029 239942 DEBUG nova.storage.rbd_utils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] rbd image 961014c5-246e-4bd6-b7e8-86d49599034a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:53:39 np0005603435 podman[261729]: 2026-01-31 04:53:39.048601134 +0000 UTC m=+0.042880764 container create 37b1c0291c1f6e44ec4e62f1c5b4752fc3ef59bf0899e89aa2abb1c3eb897f4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:53:39 np0005603435 systemd[1]: Started libpod-conmon-37b1c0291c1f6e44ec4e62f1c5b4752fc3ef59bf0899e89aa2abb1c3eb897f4a.scope.
Jan 30 23:53:39 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:53:39 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ec333bfd4112dc92bf72f65133daf2a93dbcccb2bb3c6cdf4e392317502a9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:39 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ec333bfd4112dc92bf72f65133daf2a93dbcccb2bb3c6cdf4e392317502a9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:39 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ec333bfd4112dc92bf72f65133daf2a93dbcccb2bb3c6cdf4e392317502a9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:39 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ec333bfd4112dc92bf72f65133daf2a93dbcccb2bb3c6cdf4e392317502a9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:39 np0005603435 podman[261729]: 2026-01-31 04:53:39.033406271 +0000 UTC m=+0.027685921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:53:39 np0005603435 podman[261729]: 2026-01-31 04:53:39.14129771 +0000 UTC m=+0.135577340 container init 37b1c0291c1f6e44ec4e62f1c5b4752fc3ef59bf0899e89aa2abb1c3eb897f4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Jan 30 23:53:39 np0005603435 podman[261729]: 2026-01-31 04:53:39.158850711 +0000 UTC m=+0.153130361 container start 37b1c0291c1f6e44ec4e62f1c5b4752fc3ef59bf0899e89aa2abb1c3eb897f4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:53:39 np0005603435 podman[261729]: 2026-01-31 04:53:39.165311409 +0000 UTC m=+0.159591019 container attach 37b1c0291c1f6e44ec4e62f1c5b4752fc3ef59bf0899e89aa2abb1c3eb897f4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:53:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 134 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 85 op/s
Jan 30 23:53:39 np0005603435 nova_compute[239938]: 2026-01-31 04:53:39.485 239942 INFO nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Creating config drive at /var/lib/nova/instances/961014c5-246e-4bd6-b7e8-86d49599034a/disk.config#033[00m
Jan 30 23:53:39 np0005603435 nova_compute[239938]: 2026-01-31 04:53:39.491 239942 DEBUG oslo_concurrency.processutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/961014c5-246e-4bd6-b7e8-86d49599034a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpz3zjzlp4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:39 np0005603435 nova_compute[239938]: 2026-01-31 04:53:39.615 239942 DEBUG oslo_concurrency.processutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/961014c5-246e-4bd6-b7e8-86d49599034a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpz3zjzlp4" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:39 np0005603435 nova_compute[239938]: 2026-01-31 04:53:39.650 239942 DEBUG nova.storage.rbd_utils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] rbd image 961014c5-246e-4bd6-b7e8-86d49599034a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:53:39 np0005603435 nova_compute[239938]: 2026-01-31 04:53:39.657 239942 DEBUG oslo_concurrency.processutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/961014c5-246e-4bd6-b7e8-86d49599034a/disk.config 961014c5-246e-4bd6-b7e8-86d49599034a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:39 np0005603435 nova_compute[239938]: 2026-01-31 04:53:39.786 239942 DEBUG oslo_concurrency.processutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/961014c5-246e-4bd6-b7e8-86d49599034a/disk.config 961014c5-246e-4bd6-b7e8-86d49599034a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:39 np0005603435 nova_compute[239938]: 2026-01-31 04:53:39.787 239942 INFO nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Deleting local config drive /var/lib/nova/instances/961014c5-246e-4bd6-b7e8-86d49599034a/disk.config because it was imported into RBD.#033[00m
Jan 30 23:53:39 np0005603435 kernel: tap0dfbe40d-2b: entered promiscuous mode
Jan 30 23:53:39 np0005603435 NetworkManager[49097]: <info>  [1769835219.8309] manager: (tap0dfbe40d-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Jan 30 23:53:39 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:39Z|00140|binding|INFO|Claiming lport 0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 for this chassis.
Jan 30 23:53:39 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:39Z|00141|binding|INFO|0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7: Claiming fa:16:3e:a9:94:43 10.100.0.10
Jan 30 23:53:39 np0005603435 nova_compute[239938]: 2026-01-31 04:53:39.830 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:39.837 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:94:43 10.100.0.10'], port_security=['fa:16:3e:a9:94:43 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '961014c5-246e-4bd6-b7e8-86d49599034a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45b5ded5-5fe4-488c-aa97-cad6ca9b361e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5ce1f57546045d891de80fbaff2512b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd79a6d46-298c-47b1-928a-16b62ca8df21', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa479721-2329-4784-af95-25b103421212, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:53:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:39.838 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 in datapath 45b5ded5-5fe4-488c-aa97-cad6ca9b361e bound to our chassis#033[00m
Jan 30 23:53:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:39.840 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45b5ded5-5fe4-488c-aa97-cad6ca9b361e#033[00m
Jan 30 23:53:39 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:39Z|00142|binding|INFO|Setting lport 0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 ovn-installed in OVS
Jan 30 23:53:39 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:39Z|00143|binding|INFO|Setting lport 0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 up in Southbound
Jan 30 23:53:39 np0005603435 nova_compute[239938]: 2026-01-31 04:53:39.843 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:39.849 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[17003b99-2913-49dc-af03-57d7e9afec2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:39.850 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45b5ded5-51 in ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:53:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:39.852 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45b5ded5-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:53:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:39.852 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[798d0bef-3329-471b-b291-293cde3f3aa9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:39.854 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c2032f-d12b-4063-8ead-ac5176332fb0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:39 np0005603435 systemd-machined[208030]: New machine qemu-14-instance-0000000e.
Jan 30 23:53:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:39.863 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[2934d16b-1433-4388-aae1-60c52ea36841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:39 np0005603435 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Jan 30 23:53:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:39.873 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[eca9eee4-a44e-4b1b-a00d-3edf33032a08]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:39 np0005603435 systemd-udevd[261893]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:53:39 np0005603435 NetworkManager[49097]: <info>  [1769835219.8961] device (tap0dfbe40d-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:53:39 np0005603435 NetworkManager[49097]: <info>  [1769835219.8966] device (tap0dfbe40d-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:53:39 np0005603435 lvm[261899]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:53:39 np0005603435 lvm[261902]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:53:39 np0005603435 lvm[261902]: VG ceph_vg0 finished
Jan 30 23:53:39 np0005603435 lvm[261899]: VG ceph_vg2 finished
Jan 30 23:53:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:39.894 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[590d3ac2-c13f-44b2-b5b4-cb59abf98e7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:39.901 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[083c7064-8153-43d5-b9d8-95875074e3b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:39 np0005603435 NetworkManager[49097]: <info>  [1769835219.9022] manager: (tap45b5ded5-50): new Veth device (/org/freedesktop/NetworkManager/Devices/81)
Jan 30 23:53:39 np0005603435 lvm[261900]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:53:39 np0005603435 lvm[261900]: VG ceph_vg1 finished
Jan 30 23:53:39 np0005603435 nova_compute[239938]: 2026-01-31 04:53:39.921 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:39.933 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[1acce7e3-28ed-4e2f-843e-aa00e8c58c86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:39 np0005603435 gifted_napier[261762]: {}
Jan 30 23:53:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:39.945 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[47e34618-402d-41f7-8e5e-a248bd99eb40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:39 np0005603435 NetworkManager[49097]: <info>  [1769835219.9641] device (tap45b5ded5-50): carrier: link connected
Jan 30 23:53:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:39.970 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[f59144b1-78dd-4bc6-b8f0-a8a452ee2bbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:39 np0005603435 systemd[1]: libpod-37b1c0291c1f6e44ec4e62f1c5b4752fc3ef59bf0899e89aa2abb1c3eb897f4a.scope: Deactivated successfully.
Jan 30 23:53:39 np0005603435 systemd[1]: libpod-37b1c0291c1f6e44ec4e62f1c5b4752fc3ef59bf0899e89aa2abb1c3eb897f4a.scope: Consumed 1.165s CPU time.
Jan 30 23:53:39 np0005603435 podman[261729]: 2026-01-31 04:53:39.978906774 +0000 UTC m=+0.973186414 container died 37b1c0291c1f6e44ec4e62f1c5b4752fc3ef59bf0899e89aa2abb1c3eb897f4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:53:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:39.993 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c1bcdd63-cdbe-4b2d-a37a-f6e819b18674]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45b5ded5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:6d:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427436, 'reachable_time': 38632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261931, 'error': None, 'target': 'ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:40 np0005603435 systemd[1]: var-lib-containers-storage-overlay-49ec333bfd4112dc92bf72f65133daf2a93dbcccb2bb3c6cdf4e392317502a9e-merged.mount: Deactivated successfully.
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:40.009 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[a32b970a-4308-43e6-b00e-c7125f9dfaa4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:6d7b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427436, 'tstamp': 427436}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261940, 'error': None, 'target': 'ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:40 np0005603435 podman[261729]: 2026-01-31 04:53:40.018024024 +0000 UTC m=+1.012303634 container remove 37b1c0291c1f6e44ec4e62f1c5b4752fc3ef59bf0899e89aa2abb1c3eb897f4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:40.027 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e610be-7127-41d4-9767-4dce649d9725]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45b5ded5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:6d:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427436, 'reachable_time': 38632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261944, 'error': None, 'target': 'ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:40 np0005603435 systemd[1]: libpod-conmon-37b1c0291c1f6e44ec4e62f1c5b4752fc3ef59bf0899e89aa2abb1c3eb897f4a.scope: Deactivated successfully.
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:40.049 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ef6646-aa13-4135-8955-7442e7dbb267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.073 239942 DEBUG nova.network.neutron [req-387b594d-aa20-4985-ac99-ac9c0ead8f75 req-dd59605e-15a8-4da0-b7cc-f3c2f7f01a57 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Updated VIF entry in instance network info cache for port 0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.074 239942 DEBUG nova.network.neutron [req-387b594d-aa20-4985-ac99-ac9c0ead8f75 req-dd59605e-15a8-4da0-b7cc-f3c2f7f01a57 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Updating instance_info_cache with network_info: [{"id": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "address": "fa:16:3e:a9:94:43", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfbe40d-2b", "ovs_interfaceid": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:53:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.079 239942 DEBUG nova.compute.manager [req-7ce0afe2-251c-4610-9570-0a19467e9d54 req-efa47c60-e2ea-4eca-8769-db622e4fc378 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Received event network-vif-plugged-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.080 239942 DEBUG oslo_concurrency.lockutils [req-7ce0afe2-251c-4610-9570-0a19467e9d54 req-efa47c60-e2ea-4eca-8769-db622e4fc378 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.080 239942 DEBUG oslo_concurrency.lockutils [req-7ce0afe2-251c-4610-9570-0a19467e9d54 req-efa47c60-e2ea-4eca-8769-db622e4fc378 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.080 239942 DEBUG oslo_concurrency.lockutils [req-7ce0afe2-251c-4610-9570-0a19467e9d54 req-efa47c60-e2ea-4eca-8769-db622e4fc378 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.080 239942 DEBUG nova.compute.manager [req-7ce0afe2-251c-4610-9570-0a19467e9d54 req-efa47c60-e2ea-4eca-8769-db622e4fc378 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Processing event network-vif-plugged-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:53:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:53:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:53:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.095 239942 DEBUG oslo_concurrency.lockutils [req-387b594d-aa20-4985-ac99-ac9c0ead8f75 req-dd59605e-15a8-4da0-b7cc-f3c2f7f01a57 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-961014c5-246e-4bd6-b7e8-86d49599034a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:40.103 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[aed683ef-91b2-4767-972e-3f4f677aae2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:40.105 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45b5ded5-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:40.106 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:40.107 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45b5ded5-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.109 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:40 np0005603435 kernel: tap45b5ded5-50: entered promiscuous mode
Jan 30 23:53:40 np0005603435 NetworkManager[49097]: <info>  [1769835220.1116] manager: (tap45b5ded5-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:40.115 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45b5ded5-50, col_values=(('external_ids', {'iface-id': '3f9b28f1-1e76-45d9-9277-3ccd8b8d89cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.117 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.117 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:40 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:40Z|00144|binding|INFO|Releasing lport 3f9b28f1-1e76-45d9-9277-3ccd8b8d89cf from this chassis (sb_readonly=0)
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:40.119 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45b5ded5-5fe4-488c-aa97-cad6ca9b361e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45b5ded5-5fe4-488c-aa97-cad6ca9b361e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:40.123 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f39a2eb7-8d04-4656-bd03-81e77030d37d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:40.124 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-45b5ded5-5fe4-488c-aa97-cad6ca9b361e
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/45b5ded5-5fe4-488c-aa97-cad6ca9b361e.pid.haproxy
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 45b5ded5-5fe4-488c-aa97-cad6ca9b361e
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:53:40 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:40.127 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e', 'env', 'PROCESS_TAG=haproxy-45b5ded5-5fe4-488c-aa97-cad6ca9b361e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45b5ded5-5fe4-488c-aa97-cad6ca9b361e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.127 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:40 np0005603435 podman[262001]: 2026-01-31 04:53:40.427742273 +0000 UTC m=+0.054679093 container create df82f62e165f3cf7e45e9d7e249e7b7e29991d91c98f8277dc70376dcd6c572c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 30 23:53:40 np0005603435 systemd[1]: Started libpod-conmon-df82f62e165f3cf7e45e9d7e249e7b7e29991d91c98f8277dc70376dcd6c572c.scope.
Jan 30 23:53:40 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:53:40 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:53:40 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:53:40 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e615e0469bcc86dbffa8adc08fdbad2aa41e74ea2043a19a1dd2fd2665a32374/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:40 np0005603435 podman[262001]: 2026-01-31 04:53:40.488057594 +0000 UTC m=+0.114994504 container init df82f62e165f3cf7e45e9d7e249e7b7e29991d91c98f8277dc70376dcd6c572c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 30 23:53:40 np0005603435 podman[262001]: 2026-01-31 04:53:40.397097001 +0000 UTC m=+0.024033911 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:53:40 np0005603435 podman[262001]: 2026-01-31 04:53:40.494836721 +0000 UTC m=+0.121773571 container start df82f62e165f3cf7e45e9d7e249e7b7e29991d91c98f8277dc70376dcd6c572c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:53:40 np0005603435 neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e[262052]: [NOTICE]   (262061) : New worker (262063) forked
Jan 30 23:53:40 np0005603435 neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e[262052]: [NOTICE]   (262061) : Loading success.
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.571 239942 DEBUG nova.compute.manager [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.573 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835220.5708232, 961014c5-246e-4bd6-b7e8-86d49599034a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.573 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] VM Started (Lifecycle Event)#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.577 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.581 239942 INFO nova.virt.libvirt.driver [-] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Instance spawned successfully.#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.581 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.721 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.727 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.728 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.728 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.728 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.729 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.729 239942 DEBUG nova.virt.libvirt.driver [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.733 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.773 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.774 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835220.5724351, 961014c5-246e-4bd6-b7e8-86d49599034a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.775 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] VM Paused (Lifecycle Event)
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.802 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.807 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835220.5763698, 961014c5-246e-4bd6-b7e8-86d49599034a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.807 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] VM Resumed (Lifecycle Event)
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.811 239942 INFO nova.compute.manager [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Took 7.22 seconds to spawn the instance on the hypervisor.
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.812 239942 DEBUG nova.compute.manager [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.824 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.829 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.856 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.893 239942 INFO nova.compute.manager [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Took 8.30 seconds to build instance.
Jan 30 23:53:40 np0005603435 nova_compute[239938]: 2026-01-31 04:53:40.911 239942 DEBUG oslo_concurrency.lockutils [None req-4598a1e3-58f0-4f14-bc7a-64dda552ad4c d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.380s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:53:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 144 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 89 op/s
Jan 30 23:53:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Jan 30 23:53:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Jan 30 23:53:41 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Jan 30 23:53:42 np0005603435 nova_compute[239938]: 2026-01-31 04:53:42.155 239942 DEBUG nova.compute.manager [req-5571a418-aff4-4884-8cf1-a37e1c0bb264 req-59e02a59-9795-4098-bffc-0bfc185db5d6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Received event network-vif-plugged-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 30 23:53:42 np0005603435 nova_compute[239938]: 2026-01-31 04:53:42.156 239942 DEBUG oslo_concurrency.lockutils [req-5571a418-aff4-4884-8cf1-a37e1c0bb264 req-59e02a59-9795-4098-bffc-0bfc185db5d6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:53:42 np0005603435 nova_compute[239938]: 2026-01-31 04:53:42.156 239942 DEBUG oslo_concurrency.lockutils [req-5571a418-aff4-4884-8cf1-a37e1c0bb264 req-59e02a59-9795-4098-bffc-0bfc185db5d6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:53:42 np0005603435 nova_compute[239938]: 2026-01-31 04:53:42.157 239942 DEBUG oslo_concurrency.lockutils [req-5571a418-aff4-4884-8cf1-a37e1c0bb264 req-59e02a59-9795-4098-bffc-0bfc185db5d6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:53:42 np0005603435 nova_compute[239938]: 2026-01-31 04:53:42.157 239942 DEBUG nova.compute.manager [req-5571a418-aff4-4884-8cf1-a37e1c0bb264 req-59e02a59-9795-4098-bffc-0bfc185db5d6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] No waiting events found dispatching network-vif-plugged-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 30 23:53:42 np0005603435 nova_compute[239938]: 2026-01-31 04:53:42.157 239942 WARNING nova.compute.manager [req-5571a418-aff4-4884-8cf1-a37e1c0bb264 req-59e02a59-9795-4098-bffc-0bfc185db5d6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Received unexpected event network-vif-plugged-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 for instance with vm_state active and task_state None.
Jan 30 23:53:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:53:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 181 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.3 MiB/s wr, 155 op/s
Jan 30 23:53:43 np0005603435 nova_compute[239938]: 2026-01-31 04:53:43.573 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835208.5714035, 9a44a647-eae8-41f0-b96c-aa172ac4757a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:53:43 np0005603435 nova_compute[239938]: 2026-01-31 04:53:43.574 239942 INFO nova.compute.manager [-] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] VM Stopped (Lifecycle Event)
Jan 30 23:53:43 np0005603435 nova_compute[239938]: 2026-01-31 04:53:43.602 239942 DEBUG nova.compute.manager [None req-c3df376c-79e7-4c8a-acd1-4636c9181f48 - - - - - -] [instance: 9a44a647-eae8-41f0-b96c-aa172ac4757a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:53:43 np0005603435 nova_compute[239938]: 2026-01-31 04:53:43.924 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:53:44 np0005603435 nova_compute[239938]: 2026-01-31 04:53:44.276 239942 DEBUG nova.compute.manager [req-f80f25fb-c9fc-4da0-a3dd-ac52d308ce79 req-236d9468-af04-4d16-a9f3-166083d9662b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Received event network-changed-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 30 23:53:44 np0005603435 nova_compute[239938]: 2026-01-31 04:53:44.276 239942 DEBUG nova.compute.manager [req-f80f25fb-c9fc-4da0-a3dd-ac52d308ce79 req-236d9468-af04-4d16-a9f3-166083d9662b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Refreshing instance network info cache due to event network-changed-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 30 23:53:44 np0005603435 nova_compute[239938]: 2026-01-31 04:53:44.277 239942 DEBUG oslo_concurrency.lockutils [req-f80f25fb-c9fc-4da0-a3dd-ac52d308ce79 req-236d9468-af04-4d16-a9f3-166083d9662b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-961014c5-246e-4bd6-b7e8-86d49599034a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 30 23:53:44 np0005603435 nova_compute[239938]: 2026-01-31 04:53:44.278 239942 DEBUG oslo_concurrency.lockutils [req-f80f25fb-c9fc-4da0-a3dd-ac52d308ce79 req-236d9468-af04-4d16-a9f3-166083d9662b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-961014c5-246e-4bd6-b7e8-86d49599034a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 30 23:53:44 np0005603435 nova_compute[239938]: 2026-01-31 04:53:44.278 239942 DEBUG nova.network.neutron [req-f80f25fb-c9fc-4da0-a3dd-ac52d308ce79 req-236d9468-af04-4d16-a9f3-166083d9662b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Refreshing network info cache for port 0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 30 23:53:44 np0005603435 nova_compute[239938]: 2026-01-31 04:53:44.924 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:53:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 181 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.4 MiB/s wr, 159 op/s
Jan 30 23:53:45 np0005603435 nova_compute[239938]: 2026-01-31 04:53:45.604 239942 DEBUG oslo_concurrency.lockutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "3565cd51-2733-4486-a756-d28b4f47377e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:53:45 np0005603435 nova_compute[239938]: 2026-01-31 04:53:45.605 239942 DEBUG oslo_concurrency.lockutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:53:45 np0005603435 nova_compute[239938]: 2026-01-31 04:53:45.635 239942 DEBUG nova.compute.manager [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 30 23:53:45 np0005603435 nova_compute[239938]: 2026-01-31 04:53:45.705 239942 DEBUG oslo_concurrency.lockutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:53:45 np0005603435 nova_compute[239938]: 2026-01-31 04:53:45.706 239942 DEBUG oslo_concurrency.lockutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:53:45 np0005603435 nova_compute[239938]: 2026-01-31 04:53:45.714 239942 DEBUG nova.virt.hardware [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 30 23:53:45 np0005603435 nova_compute[239938]: 2026-01-31 04:53:45.715 239942 INFO nova.compute.claims [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Claim successful on node compute-0.ctlplane.example.com
Jan 30 23:53:45 np0005603435 nova_compute[239938]: 2026-01-31 04:53:45.848 239942 DEBUG oslo_concurrency.processutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:53:46 np0005603435 nova_compute[239938]: 2026-01-31 04:53:46.214 239942 DEBUG nova.network.neutron [req-f80f25fb-c9fc-4da0-a3dd-ac52d308ce79 req-236d9468-af04-4d16-a9f3-166083d9662b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Updated VIF entry in instance network info cache for port 0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 30 23:53:46 np0005603435 nova_compute[239938]: 2026-01-31 04:53:46.216 239942 DEBUG nova.network.neutron [req-f80f25fb-c9fc-4da0-a3dd-ac52d308ce79 req-236d9468-af04-4d16-a9f3-166083d9662b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Updating instance_info_cache with network_info: [{"id": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "address": "fa:16:3e:a9:94:43", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfbe40d-2b", "ovs_interfaceid": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 30 23:53:46 np0005603435 nova_compute[239938]: 2026-01-31 04:53:46.239 239942 DEBUG oslo_concurrency.lockutils [req-f80f25fb-c9fc-4da0-a3dd-ac52d308ce79 req-236d9468-af04-4d16-a9f3-166083d9662b c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-961014c5-246e-4bd6-b7e8-86d49599034a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 30 23:53:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:53:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3101693073' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:53:46 np0005603435 nova_compute[239938]: 2026-01-31 04:53:46.420 239942 DEBUG oslo_concurrency.processutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:53:46 np0005603435 nova_compute[239938]: 2026-01-31 04:53:46.427 239942 DEBUG nova.compute.provider_tree [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 30 23:53:46 np0005603435 nova_compute[239938]: 2026-01-31 04:53:46.443 239942 DEBUG nova.scheduler.client.report [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 30 23:53:46 np0005603435 nova_compute[239938]: 2026-01-31 04:53:46.464 239942 DEBUG oslo_concurrency.lockutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:53:46 np0005603435 nova_compute[239938]: 2026-01-31 04:53:46.465 239942 DEBUG nova.compute.manager [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 30 23:53:46 np0005603435 nova_compute[239938]: 2026-01-31 04:53:46.514 239942 DEBUG nova.compute.manager [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 30 23:53:46 np0005603435 nova_compute[239938]: 2026-01-31 04:53:46.515 239942 DEBUG nova.network.neutron [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 30 23:53:46 np0005603435 nova_compute[239938]: 2026-01-31 04:53:46.543 239942 INFO nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 30 23:53:46 np0005603435 nova_compute[239938]: 2026-01-31 04:53:46.566 239942 DEBUG nova.compute.manager [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 30 23:53:46 np0005603435 nova_compute[239938]: 2026-01-31 04:53:46.612 239942 INFO nova.virt.block_device [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Booting with volume snapshot 312185cb-dd5d-44d3-86f3-9535a6a86a75 at /dev/vda
Jan 30 23:53:47 np0005603435 nova_compute[239938]: 2026-01-31 04:53:47.024 239942 DEBUG nova.policy [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e10f13b98624406985dec6a5dcc391c7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 30 23:53:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 208 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.9 MiB/s wr, 159 op/s
Jan 30 23:53:47 np0005603435 nova_compute[239938]: 2026-01-31 04:53:47.589 239942 DEBUG nova.network.neutron [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Successfully created port: 44ce2302-14a4-4b06-b787-868c0ecda641 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 30 23:53:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:53:48 np0005603435 nova_compute[239938]: 2026-01-31 04:53:48.251 239942 DEBUG nova.network.neutron [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Successfully updated port: 44ce2302-14a4-4b06-b787-868c0ecda641 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 30 23:53:48 np0005603435 nova_compute[239938]: 2026-01-31 04:53:48.285 239942 DEBUG oslo_concurrency.lockutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "refresh_cache-3565cd51-2733-4486-a756-d28b4f47377e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 30 23:53:48 np0005603435 nova_compute[239938]: 2026-01-31 04:53:48.286 239942 DEBUG oslo_concurrency.lockutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquired lock "refresh_cache-3565cd51-2733-4486-a756-d28b4f47377e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 30 23:53:48 np0005603435 nova_compute[239938]: 2026-01-31 04:53:48.287 239942 DEBUG nova.network.neutron [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 30 23:53:48 np0005603435 nova_compute[239938]: 2026-01-31 04:53:48.355 239942 DEBUG nova.compute.manager [req-571fcbdd-cf5e-49d3-a39e-452063f607e9 req-e6f5f38b-f4ec-48db-adb0-0b7cead6d0ad c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Received event network-changed-44ce2302-14a4-4b06-b787-868c0ecda641 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 30 23:53:48 np0005603435 nova_compute[239938]: 2026-01-31 04:53:48.356 239942 DEBUG nova.compute.manager [req-571fcbdd-cf5e-49d3-a39e-452063f607e9 req-e6f5f38b-f4ec-48db-adb0-0b7cead6d0ad c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Refreshing instance network info cache due to event network-changed-44ce2302-14a4-4b06-b787-868c0ecda641. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 30 23:53:48 np0005603435 nova_compute[239938]: 2026-01-31 04:53:48.356 239942 DEBUG oslo_concurrency.lockutils [req-571fcbdd-cf5e-49d3-a39e-452063f607e9 req-e6f5f38b-f4ec-48db-adb0-0b7cead6d0ad c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-3565cd51-2733-4486-a756-d28b4f47377e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 30 23:53:48 np0005603435 nova_compute[239938]: 2026-01-31 04:53:48.435 239942 DEBUG nova.network.neutron [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 30 23:53:48 np0005603435 nova_compute[239938]: 2026-01-31 04:53:48.927 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:53:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 208 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.9 MiB/s wr, 159 op/s
Jan 30 23:53:49 np0005603435 nova_compute[239938]: 2026-01-31 04:53:49.927 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.478 239942 DEBUG nova.network.neutron [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Updating instance_info_cache with network_info: [{"id": "44ce2302-14a4-4b06-b787-868c0ecda641", "address": "fa:16:3e:90:15:00", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ce2302-14", "ovs_interfaceid": "44ce2302-14a4-4b06-b787-868c0ecda641", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.505 239942 DEBUG oslo_concurrency.lockutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Releasing lock "refresh_cache-3565cd51-2733-4486-a756-d28b4f47377e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.505 239942 DEBUG nova.compute.manager [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Instance network_info: |[{"id": "44ce2302-14a4-4b06-b787-868c0ecda641", "address": "fa:16:3e:90:15:00", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ce2302-14", "ovs_interfaceid": "44ce2302-14a4-4b06-b787-868c0ecda641", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.506 239942 DEBUG oslo_concurrency.lockutils [req-571fcbdd-cf5e-49d3-a39e-452063f607e9 req-e6f5f38b-f4ec-48db-adb0-0b7cead6d0ad c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-3565cd51-2733-4486-a756-d28b4f47377e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.507 239942 DEBUG nova.network.neutron [req-571fcbdd-cf5e-49d3-a39e-452063f607e9 req-e6f5f38b-f4ec-48db-adb0-0b7cead6d0ad c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Refreshing network info cache for port 44ce2302-14a4-4b06-b787-868c0ecda641 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.575 239942 DEBUG os_brick.utils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.576 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.589 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.589 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff3d1b4-5a27-4589-b399-48cb16403390]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.592 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.602 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.602 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[9946e969-5dfb-40d4-979a-d7e2688a3553]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.604 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.614 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.615 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d11085-114b-491f-ae42-9ba7013bda21]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.617 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[31ef0128-fe7a-467a-a8bd-fe5d31fb6597]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.618 239942 DEBUG oslo_concurrency.processutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.642 239942 DEBUG oslo_concurrency.processutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.645 239942 DEBUG os_brick.initiator.connectors.lightos [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.646 239942 DEBUG os_brick.initiator.connectors.lightos [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.646 239942 DEBUG os_brick.initiator.connectors.lightos [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.647 239942 DEBUG os_brick.utils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:53:50 np0005603435 nova_compute[239938]: 2026-01-31 04:53:50.647 239942 DEBUG nova.virt.block_device [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Updating existing volume attachment record: 829e9cb4-49fd-4a43-bb1d-f3bb3c491939 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:53:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:53:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2581589072' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:53:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 258 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 9.3 MiB/s wr, 182 op/s
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.615 239942 DEBUG oslo_concurrency.lockutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.617 239942 DEBUG oslo_concurrency.lockutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.637 239942 DEBUG nova.compute.manager [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.698 239942 DEBUG nova.compute.manager [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.701 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.702 239942 INFO nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Creating image(s)#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.703 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.703 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Ensure instance console log exists: /var/lib/nova/instances/3565cd51-2733-4486-a756-d28b4f47377e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.704 239942 DEBUG oslo_concurrency.lockutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.705 239942 DEBUG oslo_concurrency.lockutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.706 239942 DEBUG oslo_concurrency.lockutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.710 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Start _get_guest_xml network_info=[{"id": "44ce2302-14a4-4b06-b787-868c0ecda641", "address": "fa:16:3e:90:15:00", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ce2302-14", "ovs_interfaceid": "44ce2302-14a4-4b06-b787-868c0ecda641", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': True, 'attachment_id': '829e9cb4-49fd-4a43-bb1d-f3bb3c491939', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b4cf56fa-2adc-4c8b-983a-1e0ead94f401', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b4cf56fa-2adc-4c8b-983a-1e0ead94f401', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '3565cd51-2733-4486-a756-d28b4f47377e', 'attached_at': '', 'detached_at': '', 'volume_id': 'b4cf56fa-2adc-4c8b-983a-1e0ead94f401', 'serial': 'b4cf56fa-2adc-4c8b-983a-1e0ead94f401'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.721 239942 WARNING nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.726 239942 DEBUG nova.virt.libvirt.host [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.727 239942 DEBUG nova.virt.libvirt.host [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.731 239942 DEBUG nova.virt.libvirt.host [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.731 239942 DEBUG nova.virt.libvirt.host [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.732 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.732 239942 DEBUG nova.virt.hardware [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.732 239942 DEBUG nova.virt.hardware [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.733 239942 DEBUG nova.virt.hardware [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.733 239942 DEBUG nova.virt.hardware [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.733 239942 DEBUG nova.virt.hardware [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.733 239942 DEBUG nova.virt.hardware [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.734 239942 DEBUG nova.virt.hardware [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.734 239942 DEBUG nova.virt.hardware [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.734 239942 DEBUG nova.virt.hardware [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.735 239942 DEBUG nova.virt.hardware [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.735 239942 DEBUG nova.virt.hardware [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.760 239942 DEBUG nova.storage.rbd_utils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image 3565cd51-2733-4486-a756-d28b4f47377e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.764 239942 DEBUG oslo_concurrency.processutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.782 239942 DEBUG oslo_concurrency.lockutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.783 239942 DEBUG oslo_concurrency.lockutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.791 239942 DEBUG nova.virt.hardware [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.791 239942 INFO nova.compute.claims [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:53:51 np0005603435 nova_compute[239938]: 2026-01-31 04:53:51.922 239942 DEBUG oslo_concurrency.processutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:53:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2192306142' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.293 239942 DEBUG oslo_concurrency.processutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.322 239942 DEBUG nova.virt.libvirt.vif [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:53:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-94658018',display_name='tempest-TestVolumeBootPattern-server-94658018',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-94658018',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-nx1aaliu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:53:46Z,user_data=None,user_id='e10f13b98624406985dec6a5dcc391c7',uuid=3565cd51-2733-4486-a756-d28b4f47377e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44ce2302-14a4-4b06-b787-868c0ecda641", "address": "fa:16:3e:90:15:00", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ce2302-14", "ovs_interfaceid": "44ce2302-14a4-4b06-b787-868c0ecda641", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.322 239942 DEBUG nova.network.os_vif_util [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "44ce2302-14a4-4b06-b787-868c0ecda641", "address": "fa:16:3e:90:15:00", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ce2302-14", "ovs_interfaceid": "44ce2302-14a4-4b06-b787-868c0ecda641", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.324 239942 DEBUG nova.network.os_vif_util [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:15:00,bridge_name='br-int',has_traffic_filtering=True,id=44ce2302-14a4-4b06-b787-868c0ecda641,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ce2302-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.326 239942 DEBUG nova.objects.instance [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 3565cd51-2733-4486-a756-d28b4f47377e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.335 239942 DEBUG nova.network.neutron [req-571fcbdd-cf5e-49d3-a39e-452063f607e9 req-e6f5f38b-f4ec-48db-adb0-0b7cead6d0ad c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Updated VIF entry in instance network info cache for port 44ce2302-14a4-4b06-b787-868c0ecda641. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.336 239942 DEBUG nova.network.neutron [req-571fcbdd-cf5e-49d3-a39e-452063f607e9 req-e6f5f38b-f4ec-48db-adb0-0b7cead6d0ad c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Updating instance_info_cache with network_info: [{"id": "44ce2302-14a4-4b06-b787-868c0ecda641", "address": "fa:16:3e:90:15:00", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ce2302-14", "ovs_interfaceid": "44ce2302-14a4-4b06-b787-868c0ecda641", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.342 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  <uuid>3565cd51-2733-4486-a756-d28b4f47377e</uuid>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  <name>instance-0000000f</name>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <nova:name>tempest-TestVolumeBootPattern-server-94658018</nova:name>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:53:51</nova:creationTime>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:        <nova:user uuid="e10f13b98624406985dec6a5dcc391c7">tempest-TestVolumeBootPattern-1782423025-project-member</nova:user>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:        <nova:project uuid="e332802dd6cf49c59f8ed38e70addb0e">tempest-TestVolumeBootPattern-1782423025</nova:project>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:        <nova:port uuid="44ce2302-14a4-4b06-b787-868c0ecda641">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <entry name="serial">3565cd51-2733-4486-a756-d28b4f47377e</entry>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <entry name="uuid">3565cd51-2733-4486-a756-d28b4f47377e</entry>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/3565cd51-2733-4486-a756-d28b4f47377e_disk.config">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="volumes/volume-b4cf56fa-2adc-4c8b-983a-1e0ead94f401">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <serial>b4cf56fa-2adc-4c8b-983a-1e0ead94f401</serial>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:90:15:00"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <target dev="tap44ce2302-14"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/3565cd51-2733-4486-a756-d28b4f47377e/console.log" append="off"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:53:52 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:53:52 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:53:52 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:53:52 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.351 239942 DEBUG nova.compute.manager [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Preparing to wait for external event network-vif-plugged-44ce2302-14a4-4b06-b787-868c0ecda641 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.351 239942 DEBUG oslo_concurrency.lockutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "3565cd51-2733-4486-a756-d28b4f47377e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.352 239942 DEBUG oslo_concurrency.lockutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.352 239942 DEBUG oslo_concurrency.lockutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.354 239942 DEBUG nova.virt.libvirt.vif [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:53:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-94658018',display_name='tempest-TestVolumeBootPattern-server-94658018',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-94658018',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-nx1aaliu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:53:46Z,user_data=None,user_id='e10f13b98624406985dec6a5dcc391c7',uuid=3565cd51-2733-4486-a756-d28b4f47377e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44ce2302-14a4-4b06-b787-868c0ecda641", "address": "fa:16:3e:90:15:00", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ce2302-14", "ovs_interfaceid": "44ce2302-14a4-4b06-b787-868c0ecda641", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.355 239942 DEBUG nova.network.os_vif_util [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "44ce2302-14a4-4b06-b787-868c0ecda641", "address": "fa:16:3e:90:15:00", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ce2302-14", "ovs_interfaceid": "44ce2302-14a4-4b06-b787-868c0ecda641", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.356 239942 DEBUG nova.network.os_vif_util [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:15:00,bridge_name='br-int',has_traffic_filtering=True,id=44ce2302-14a4-4b06-b787-868c0ecda641,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ce2302-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.357 239942 DEBUG os_vif [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:15:00,bridge_name='br-int',has_traffic_filtering=True,id=44ce2302-14a4-4b06-b787-868c0ecda641,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ce2302-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.358 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.359 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.360 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.362 239942 DEBUG oslo_concurrency.lockutils [req-571fcbdd-cf5e-49d3-a39e-452063f607e9 req-e6f5f38b-f4ec-48db-adb0-0b7cead6d0ad c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-3565cd51-2733-4486-a756-d28b4f47377e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.365 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.366 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44ce2302-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.367 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap44ce2302-14, col_values=(('external_ids', {'iface-id': '44ce2302-14a4-4b06-b787-868c0ecda641', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:15:00', 'vm-uuid': '3565cd51-2733-4486-a756-d28b4f47377e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.369 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:52 np0005603435 NetworkManager[49097]: <info>  [1769835232.3709] manager: (tap44ce2302-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.374 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.376 239942 INFO os_vif [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:15:00,bridge_name='br-int',has_traffic_filtering=True,id=44ce2302-14a4-4b06-b787-868c0ecda641,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ce2302-14')#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.430 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.430 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.431 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No VIF found with MAC fa:16:3e:90:15:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.431 239942 INFO nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Using config drive#033[00m
Jan 30 23:53:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:53:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3883648772' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.452 239942 DEBUG nova.storage.rbd_utils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image 3565cd51-2733-4486-a756-d28b4f47377e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.465 239942 DEBUG oslo_concurrency.processutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.480 239942 DEBUG nova.compute.provider_tree [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.495 239942 DEBUG nova.scheduler.client.report [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.514 239942 DEBUG oslo_concurrency.lockutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.515 239942 DEBUG nova.compute.manager [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.566 239942 DEBUG nova.compute.manager [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.567 239942 DEBUG nova.network.neutron [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.594 239942 INFO nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.618 239942 DEBUG nova.compute.manager [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.666 239942 INFO nova.virt.block_device [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Booting with volume 4f228222-15a8-4d83-9c16-585b710e0685 at /dev/vda#033[00m
Jan 30 23:53:52 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:52Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:94:43 10.100.0.10
Jan 30 23:53:52 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:52Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:94:43 10.100.0.10
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.777 239942 DEBUG os_brick.utils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.778 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.785 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.785 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[a5db3f8c-0e67-4064-8b27-f81407e19e28]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.786 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.791 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.791 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[7fdc7930-7241-4087-9269-08cf3da70c55]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.792 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.797 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.797 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0389dc-b6d6-4d4a-bc6e-c1adaddde0a7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.799 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[910edd91-031e-4cff-a402-da8ead7134d0]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.799 239942 DEBUG oslo_concurrency.processutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.816 239942 DEBUG oslo_concurrency.processutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "nvme version" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.818 239942 DEBUG os_brick.initiator.connectors.lightos [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.818 239942 DEBUG os_brick.initiator.connectors.lightos [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.818 239942 DEBUG os_brick.initiator.connectors.lightos [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.819 239942 DEBUG os_brick.utils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] <== get_connector_properties: return (41ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:53:52 np0005603435 nova_compute[239938]: 2026-01-31 04:53:52.819 239942 DEBUG nova.virt.block_device [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Updating existing volume attachment record: 7392abf3-cd1e-4c3e-8679-8f16819d3c85 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:53:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:53:53 np0005603435 nova_compute[239938]: 2026-01-31 04:53:53.330 239942 INFO nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Creating config drive at /var/lib/nova/instances/3565cd51-2733-4486-a756-d28b4f47377e/disk.config#033[00m
Jan 30 23:53:53 np0005603435 nova_compute[239938]: 2026-01-31 04:53:53.336 239942 DEBUG oslo_concurrency.processutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3565cd51-2733-4486-a756-d28b4f47377e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp5i5046tv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:53 np0005603435 nova_compute[239938]: 2026-01-31 04:53:53.368 239942 DEBUG nova.policy [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '27f1a6fb472c4c5fa2286d0fa48dca34', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9b39f0e168b54a4b8f976894d21361e6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:53:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 314 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 11 MiB/s wr, 174 op/s
Jan 30 23:53:53 np0005603435 nova_compute[239938]: 2026-01-31 04:53:53.464 239942 DEBUG oslo_concurrency.processutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3565cd51-2733-4486-a756-d28b4f47377e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp5i5046tv" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:53 np0005603435 nova_compute[239938]: 2026-01-31 04:53:53.495 239942 DEBUG nova.storage.rbd_utils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image 3565cd51-2733-4486-a756-d28b4f47377e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:53:53 np0005603435 nova_compute[239938]: 2026-01-31 04:53:53.501 239942 DEBUG oslo_concurrency.processutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3565cd51-2733-4486-a756-d28b4f47377e/disk.config 3565cd51-2733-4486-a756-d28b4f47377e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:53 np0005603435 nova_compute[239938]: 2026-01-31 04:53:53.653 239942 DEBUG oslo_concurrency.processutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3565cd51-2733-4486-a756-d28b4f47377e/disk.config 3565cd51-2733-4486-a756-d28b4f47377e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:53 np0005603435 nova_compute[239938]: 2026-01-31 04:53:53.654 239942 INFO nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Deleting local config drive /var/lib/nova/instances/3565cd51-2733-4486-a756-d28b4f47377e/disk.config because it was imported into RBD.#033[00m
Jan 30 23:53:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:53:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3042051970' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:53:53 np0005603435 kernel: tap44ce2302-14: entered promiscuous mode
Jan 30 23:53:53 np0005603435 NetworkManager[49097]: <info>  [1769835233.7118] manager: (tap44ce2302-14): new Tun device (/org/freedesktop/NetworkManager/Devices/84)
Jan 30 23:53:53 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:53Z|00145|binding|INFO|Claiming lport 44ce2302-14a4-4b06-b787-868c0ecda641 for this chassis.
Jan 30 23:53:53 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:53Z|00146|binding|INFO|44ce2302-14a4-4b06-b787-868c0ecda641: Claiming fa:16:3e:90:15:00 10.100.0.12
Jan 30 23:53:53 np0005603435 nova_compute[239938]: 2026-01-31 04:53:53.713 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.721 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:15:00 10.100.0.12'], port_security=['fa:16:3e:90:15:00 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3565cd51-2733-4486-a756-d28b4f47377e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e4b6ff09-e0ac-4b5c-a1ae-e4cd0ac951bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02525b3d-5afa-441f-ab06-d5abe31dc4af, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=44ce2302-14a4-4b06-b787-868c0ecda641) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.724 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 44ce2302-14a4-4b06-b787-868c0ecda641 in datapath 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 bound to our chassis#033[00m
Jan 30 23:53:53 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:53Z|00147|binding|INFO|Setting lport 44ce2302-14a4-4b06-b787-868c0ecda641 ovn-installed in OVS
Jan 30 23:53:53 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:53Z|00148|binding|INFO|Setting lport 44ce2302-14a4-4b06-b787-868c0ecda641 up in Southbound
Jan 30 23:53:53 np0005603435 nova_compute[239938]: 2026-01-31 04:53:53.726 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.727 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3#033[00m
Jan 30 23:53:53 np0005603435 nova_compute[239938]: 2026-01-31 04:53:53.730 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.740 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[8abb4b7d-a16d-43f5-a644-164f0f83fc9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.741 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5b0cf2db-21 in ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.745 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5b0cf2db-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.745 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[ab766082-edb1-4abe-90ab-dbd6b3131b25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.747 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[24f0a8e3-c93a-4135-8d1b-85e548161c7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:53 np0005603435 systemd-machined[208030]: New machine qemu-15-instance-0000000f.
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.761 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[2396c625-32e3-46b4-a959-285fe0bb69d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:53 np0005603435 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.777 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[8c6f5ca7-4919-48bc-aef2-e6daa83bb8bc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:53 np0005603435 systemd-udevd[262250]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:53:53 np0005603435 NetworkManager[49097]: <info>  [1769835233.8044] device (tap44ce2302-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:53:53 np0005603435 NetworkManager[49097]: <info>  [1769835233.8051] device (tap44ce2302-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.812 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[20b897dd-512a-4354-9357-e1b38b78be03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:53 np0005603435 systemd-udevd[262254]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.819 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6913085a-480b-4cc6-9a3b-0c859ccdfc89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:53 np0005603435 NetworkManager[49097]: <info>  [1769835233.8207] manager: (tap5b0cf2db-20): new Veth device (/org/freedesktop/NetworkManager/Devices/85)
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.852 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[4645fbf2-4ab0-4d11-97e2-5f70737b17eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.857 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[7a751165-c78b-4a37-8355-47bf5096ab36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:53 np0005603435 NetworkManager[49097]: <info>  [1769835233.8773] device (tap5b0cf2db-20): carrier: link connected
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.882 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[ea810ea8-e5c9-4a7a-bbb1-beb97eaea548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.897 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[87415e7d-3bed-46fd-8002-7a473b4e3ab5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b0cf2db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:f7:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428827, 'reachable_time': 34489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262280, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.911 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3fb9c8-718f-4f54-a572-479777d41393]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:f719'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428827, 'tstamp': 428827}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262281, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.930 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[78e73b88-2325-443b-810c-a8d64a8f3ba0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b0cf2db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:f7:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428827, 'reachable_time': 34489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262282, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:53.958 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[61a13184-8e30-4e8e-87ea-03542915beb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:53 np0005603435 nova_compute[239938]: 2026-01-31 04:53:53.958 239942 DEBUG nova.compute.manager [req-876210c6-6dad-4611-9917-77f262ea89a5 req-27328bcd-429d-4608-97ab-6637c21f386f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Received event network-vif-plugged-44ce2302-14a4-4b06-b787-868c0ecda641 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:53:53 np0005603435 nova_compute[239938]: 2026-01-31 04:53:53.959 239942 DEBUG oslo_concurrency.lockutils [req-876210c6-6dad-4611-9917-77f262ea89a5 req-27328bcd-429d-4608-97ab-6637c21f386f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "3565cd51-2733-4486-a756-d28b4f47377e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:53 np0005603435 nova_compute[239938]: 2026-01-31 04:53:53.959 239942 DEBUG oslo_concurrency.lockutils [req-876210c6-6dad-4611-9917-77f262ea89a5 req-27328bcd-429d-4608-97ab-6637c21f386f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:53 np0005603435 nova_compute[239938]: 2026-01-31 04:53:53.960 239942 DEBUG oslo_concurrency.lockutils [req-876210c6-6dad-4611-9917-77f262ea89a5 req-27328bcd-429d-4608-97ab-6637c21f386f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:53 np0005603435 nova_compute[239938]: 2026-01-31 04:53:53.960 239942 DEBUG nova.compute.manager [req-876210c6-6dad-4611-9917-77f262ea89a5 req-27328bcd-429d-4608-97ab-6637c21f386f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Processing event network-vif-plugged-44ce2302-14a4-4b06-b787-868c0ecda641 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:54.015 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[04e1be75-447f-400b-a3d4-c1d83c20ac40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:54.016 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b0cf2db-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:54.017 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:54.018 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b0cf2db-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:54 np0005603435 kernel: tap5b0cf2db-20: entered promiscuous mode
Jan 30 23:53:54 np0005603435 NetworkManager[49097]: <info>  [1769835234.0212] manager: (tap5b0cf2db-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:54.023 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b0cf2db-20, col_values=(('external_ids', {'iface-id': '07e657c3-16d2-4095-9f39-32a275cb472e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:54 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:54Z|00149|binding|INFO|Releasing lport 07e657c3-16d2-4095-9f39-32a275cb472e from this chassis (sb_readonly=0)
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:54.036 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:54.037 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[28d61d09-5ae0-4013-9789-4bb6c9ac4c06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:54.038 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.pid.haproxy
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.037 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:54.039 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'env', 'PROCESS_TAG=haproxy-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.110 239942 DEBUG nova.compute.manager [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.114 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.115 239942 INFO nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Creating image(s)#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.116 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.117 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Ensure instance console log exists: /var/lib/nova/instances/2437d98a-1c5d-4451-bf32-cb4bb2d82a82/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.117 239942 DEBUG oslo_concurrency.lockutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.118 239942 DEBUG oslo_concurrency.lockutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.118 239942 DEBUG oslo_concurrency.lockutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.192 239942 DEBUG nova.network.neutron [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Successfully created port: a032608c-fd47-442f-a668-0d122437d8c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.232 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835234.2322657, 3565cd51-2733-4486-a756-d28b4f47377e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.233 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] VM Started (Lifecycle Event)#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.235 239942 DEBUG nova.compute.manager [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.238 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.242 239942 INFO nova.virt.libvirt.driver [-] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Instance spawned successfully.#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.242 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.248 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.251 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.263 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.263 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.263 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.264 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.264 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.264 239942 DEBUG nova.virt.libvirt.driver [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.269 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.269 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835234.233318, 3565cd51-2733-4486-a756-d28b4f47377e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.269 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.295 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.297 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835234.2378263, 3565cd51-2733-4486-a756-d28b4f47377e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.297 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.319 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.321 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:53:54 np0005603435 podman[262356]: 2026-01-31 04:53:54.479107969 +0000 UTC m=+0.066347550 container create bdb0c00e574ea52027989b8927d7c2f394bcd33c67619e6a799aa05ab12fe92b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 30 23:53:54 np0005603435 systemd[1]: Started libpod-conmon-bdb0c00e574ea52027989b8927d7c2f394bcd33c67619e6a799aa05ab12fe92b.scope.
Jan 30 23:53:54 np0005603435 podman[262356]: 2026-01-31 04:53:54.44612927 +0000 UTC m=+0.033368901 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:53:54 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:53:54 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0157a68a78c37c26f8f1b726b37e19e43f9d7055187a48b612cdfed9d6c38d9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:54 np0005603435 podman[262356]: 2026-01-31 04:53:54.584024305 +0000 UTC m=+0.171263926 container init bdb0c00e574ea52027989b8927d7c2f394bcd33c67619e6a799aa05ab12fe92b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:53:54 np0005603435 podman[262356]: 2026-01-31 04:53:54.591078588 +0000 UTC m=+0.178318159 container start bdb0c00e574ea52027989b8927d7c2f394bcd33c67619e6a799aa05ab12fe92b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:53:54 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[262373]: [NOTICE]   (262377) : New worker (262379) forked
Jan 30 23:53:54 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[262373]: [NOTICE]   (262377) : Loading success.
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.823 239942 INFO nova.compute.manager [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Took 3.12 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.824 239942 DEBUG nova.compute.manager [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.847 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.921 239942 INFO nova.compute.manager [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Took 9.24 seconds to build instance.#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.930 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:54 np0005603435 nova_compute[239938]: 2026-01-31 04:53:54.952 239942 DEBUG oslo_concurrency.lockutils [None req-330c1e92-0383-407d-acfb-f94b7c8996d2 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 323 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 11 MiB/s wr, 151 op/s
Jan 30 23:53:55 np0005603435 nova_compute[239938]: 2026-01-31 04:53:55.602 239942 DEBUG nova.network.neutron [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Successfully updated port: a032608c-fd47-442f-a668-0d122437d8c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:53:55 np0005603435 nova_compute[239938]: 2026-01-31 04:53:55.617 239942 DEBUG oslo_concurrency.lockutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "refresh_cache-2437d98a-1c5d-4451-bf32-cb4bb2d82a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:53:55 np0005603435 nova_compute[239938]: 2026-01-31 04:53:55.617 239942 DEBUG oslo_concurrency.lockutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquired lock "refresh_cache-2437d98a-1c5d-4451-bf32-cb4bb2d82a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:53:55 np0005603435 nova_compute[239938]: 2026-01-31 04:53:55.618 239942 DEBUG nova.network.neutron [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:53:55 np0005603435 nova_compute[239938]: 2026-01-31 04:53:55.685 239942 DEBUG nova.compute.manager [req-413f590e-b064-4335-bf55-1efeb73bafb4 req-1fd2f103-51d1-459a-9b68-d216b7aa160f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Received event network-changed-a032608c-fd47-442f-a668-0d122437d8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:53:55 np0005603435 nova_compute[239938]: 2026-01-31 04:53:55.686 239942 DEBUG nova.compute.manager [req-413f590e-b064-4335-bf55-1efeb73bafb4 req-1fd2f103-51d1-459a-9b68-d216b7aa160f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Refreshing instance network info cache due to event network-changed-a032608c-fd47-442f-a668-0d122437d8c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:53:55 np0005603435 nova_compute[239938]: 2026-01-31 04:53:55.686 239942 DEBUG oslo_concurrency.lockutils [req-413f590e-b064-4335-bf55-1efeb73bafb4 req-1fd2f103-51d1-459a-9b68-d216b7aa160f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-2437d98a-1c5d-4451-bf32-cb4bb2d82a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:53:55 np0005603435 nova_compute[239938]: 2026-01-31 04:53:55.746 239942 DEBUG nova.network.neutron [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:53:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:55.919 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:55.919 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:55.920 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.041 239942 DEBUG nova.compute.manager [req-4b485396-0b42-4025-9955-cbdfea9747e2 req-b1d76966-5c93-42a7-b54f-eda91286ba2a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Received event network-vif-plugged-44ce2302-14a4-4b06-b787-868c0ecda641 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.041 239942 DEBUG oslo_concurrency.lockutils [req-4b485396-0b42-4025-9955-cbdfea9747e2 req-b1d76966-5c93-42a7-b54f-eda91286ba2a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "3565cd51-2733-4486-a756-d28b4f47377e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.042 239942 DEBUG oslo_concurrency.lockutils [req-4b485396-0b42-4025-9955-cbdfea9747e2 req-b1d76966-5c93-42a7-b54f-eda91286ba2a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.042 239942 DEBUG oslo_concurrency.lockutils [req-4b485396-0b42-4025-9955-cbdfea9747e2 req-b1d76966-5c93-42a7-b54f-eda91286ba2a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.043 239942 DEBUG nova.compute.manager [req-4b485396-0b42-4025-9955-cbdfea9747e2 req-b1d76966-5c93-42a7-b54f-eda91286ba2a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] No waiting events found dispatching network-vif-plugged-44ce2302-14a4-4b06-b787-868c0ecda641 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.043 239942 WARNING nova.compute.manager [req-4b485396-0b42-4025-9955-cbdfea9747e2 req-b1d76966-5c93-42a7-b54f-eda91286ba2a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Received unexpected event network-vif-plugged-44ce2302-14a4-4b06-b787-868c0ecda641 for instance with vm_state active and task_state None.#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.734 239942 DEBUG nova.network.neutron [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Updating instance_info_cache with network_info: [{"id": "a032608c-fd47-442f-a668-0d122437d8c8", "address": "fa:16:3e:c8:62:e7", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa032608c-fd", "ovs_interfaceid": "a032608c-fd47-442f-a668-0d122437d8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.754 239942 DEBUG oslo_concurrency.lockutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Releasing lock "refresh_cache-2437d98a-1c5d-4451-bf32-cb4bb2d82a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.755 239942 DEBUG nova.compute.manager [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Instance network_info: |[{"id": "a032608c-fd47-442f-a668-0d122437d8c8", "address": "fa:16:3e:c8:62:e7", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa032608c-fd", "ovs_interfaceid": "a032608c-fd47-442f-a668-0d122437d8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.755 239942 DEBUG oslo_concurrency.lockutils [req-413f590e-b064-4335-bf55-1efeb73bafb4 req-1fd2f103-51d1-459a-9b68-d216b7aa160f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-2437d98a-1c5d-4451-bf32-cb4bb2d82a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.756 239942 DEBUG nova.network.neutron [req-413f590e-b064-4335-bf55-1efeb73bafb4 req-1fd2f103-51d1-459a-9b68-d216b7aa160f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Refreshing network info cache for port a032608c-fd47-442f-a668-0d122437d8c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.762 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Start _get_guest_xml network_info=[{"id": "a032608c-fd47-442f-a668-0d122437d8c8", "address": "fa:16:3e:c8:62:e7", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa032608c-fd", "ovs_interfaceid": "a032608c-fd47-442f-a668-0d122437d8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'attachment_id': '7392abf3-cd1e-4c3e-8679-8f16819d3c85', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4f228222-15a8-4d83-9c16-585b710e0685', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4f228222-15a8-4d83-9c16-585b710e0685', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '2437d98a-1c5d-4451-bf32-cb4bb2d82a82', 'attached_at': '', 'detached_at': '', 'volume_id': '4f228222-15a8-4d83-9c16-585b710e0685', 'serial': '4f228222-15a8-4d83-9c16-585b710e0685'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.766 239942 WARNING nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.777 239942 DEBUG nova.virt.libvirt.host [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.778 239942 DEBUG nova.virt.libvirt.host [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.784 239942 DEBUG nova.virt.libvirt.host [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.784 239942 DEBUG nova.virt.libvirt.host [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.785 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.785 239942 DEBUG nova.virt.hardware [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.786 239942 DEBUG nova.virt.hardware [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.787 239942 DEBUG nova.virt.hardware [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.787 239942 DEBUG nova.virt.hardware [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.787 239942 DEBUG nova.virt.hardware [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.788 239942 DEBUG nova.virt.hardware [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.788 239942 DEBUG nova.virt.hardware [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.789 239942 DEBUG nova.virt.hardware [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.789 239942 DEBUG nova.virt.hardware [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.789 239942 DEBUG nova.virt.hardware [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.790 239942 DEBUG nova.virt.hardware [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.822 239942 DEBUG nova.storage.rbd_utils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] rbd image 2437d98a-1c5d-4451-bf32-cb4bb2d82a82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.831 239942 DEBUG oslo_concurrency.processutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.881 239942 DEBUG oslo_concurrency.lockutils [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "3565cd51-2733-4486-a756-d28b4f47377e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.881 239942 DEBUG oslo_concurrency.lockutils [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.882 239942 DEBUG oslo_concurrency.lockutils [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "3565cd51-2733-4486-a756-d28b4f47377e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.882 239942 DEBUG oslo_concurrency.lockutils [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.883 239942 DEBUG oslo_concurrency.lockutils [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.885 239942 INFO nova.compute.manager [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Terminating instance#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.887 239942 DEBUG nova.compute.manager [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:53:56 np0005603435 kernel: tap44ce2302-14 (unregistering): left promiscuous mode
Jan 30 23:53:56 np0005603435 NetworkManager[49097]: <info>  [1769835236.9318] device (tap44ce2302-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:53:56 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:56Z|00150|binding|INFO|Releasing lport 44ce2302-14a4-4b06-b787-868c0ecda641 from this chassis (sb_readonly=0)
Jan 30 23:53:56 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:56Z|00151|binding|INFO|Setting lport 44ce2302-14a4-4b06-b787-868c0ecda641 down in Southbound
Jan 30 23:53:56 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:56Z|00152|binding|INFO|Removing iface tap44ce2302-14 ovn-installed in OVS
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.945 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:56 np0005603435 nova_compute[239938]: 2026-01-31 04:53:56.952 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:56.952 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:15:00 10.100.0.12'], port_security=['fa:16:3e:90:15:00 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3565cd51-2733-4486-a756-d28b4f47377e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e4b6ff09-e0ac-4b5c-a1ae-e4cd0ac951bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02525b3d-5afa-441f-ab06-d5abe31dc4af, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=44ce2302-14a4-4b06-b787-868c0ecda641) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:53:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:56.958 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 44ce2302-14a4-4b06-b787-868c0ecda641 in datapath 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 unbound from our chassis#033[00m
Jan 30 23:53:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:56.963 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:53:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:56.964 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[846f6b0e-682f-4c6f-baea-24c79ce0a690]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:56 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:56.965 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 namespace which is not needed anymore#033[00m
Jan 30 23:53:56 np0005603435 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Jan 30 23:53:56 np0005603435 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 3.359s CPU time.
Jan 30 23:53:56 np0005603435 systemd-machined[208030]: Machine qemu-15-instance-0000000f terminated.
Jan 30 23:53:57 np0005603435 podman[262408]: 2026-01-31 04:53:57.032307173 +0000 UTC m=+0.077196956 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 30 23:53:57 np0005603435 podman[262411]: 2026-01-31 04:53:57.072831938 +0000 UTC m=+0.109543821 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:53:57 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[262373]: [NOTICE]   (262377) : haproxy version is 2.8.14-c23fe91
Jan 30 23:53:57 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[262373]: [NOTICE]   (262377) : path to executable is /usr/sbin/haproxy
Jan 30 23:53:57 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[262373]: [WARNING]  (262377) : Exiting Master process...
Jan 30 23:53:57 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[262373]: [ALERT]    (262377) : Current worker (262379) exited with code 143 (Terminated)
Jan 30 23:53:57 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[262373]: [WARNING]  (262377) : All workers exited. Exiting... (0)
Jan 30 23:53:57 np0005603435 systemd[1]: libpod-bdb0c00e574ea52027989b8927d7c2f394bcd33c67619e6a799aa05ab12fe92b.scope: Deactivated successfully.
Jan 30 23:53:57 np0005603435 podman[262486]: 2026-01-31 04:53:57.08594482 +0000 UTC m=+0.045515649 container died bdb0c00e574ea52027989b8927d7c2f394bcd33c67619e6a799aa05ab12fe92b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 30 23:53:57 np0005603435 kernel: tap44ce2302-14: entered promiscuous mode
Jan 30 23:53:57 np0005603435 NetworkManager[49097]: <info>  [1769835237.1047] manager: (tap44ce2302-14): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.105 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:57 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:57Z|00153|binding|INFO|Claiming lport 44ce2302-14a4-4b06-b787-868c0ecda641 for this chassis.
Jan 30 23:53:57 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:57Z|00154|binding|INFO|44ce2302-14a4-4b06-b787-868c0ecda641: Claiming fa:16:3e:90:15:00 10.100.0.12
Jan 30 23:53:57 np0005603435 kernel: tap44ce2302-14 (unregistering): left promiscuous mode
Jan 30 23:53:57 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bdb0c00e574ea52027989b8927d7c2f394bcd33c67619e6a799aa05ab12fe92b-userdata-shm.mount: Deactivated successfully.
Jan 30 23:53:57 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0157a68a78c37c26f8f1b726b37e19e43f9d7055187a48b612cdfed9d6c38d9e-merged.mount: Deactivated successfully.
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.113 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:15:00 10.100.0.12'], port_security=['fa:16:3e:90:15:00 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3565cd51-2733-4486-a756-d28b4f47377e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e4b6ff09-e0ac-4b5c-a1ae-e4cd0ac951bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02525b3d-5afa-441f-ab06-d5abe31dc4af, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=44ce2302-14a4-4b06-b787-868c0ecda641) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.126 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:57 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:57Z|00155|binding|INFO|Setting lport 44ce2302-14a4-4b06-b787-868c0ecda641 ovn-installed in OVS
Jan 30 23:53:57 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:57Z|00156|binding|INFO|Setting lport 44ce2302-14a4-4b06-b787-868c0ecda641 up in Southbound
Jan 30 23:53:57 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:57Z|00157|binding|INFO|Releasing lport 44ce2302-14a4-4b06-b787-868c0ecda641 from this chassis (sb_readonly=1)
Jan 30 23:53:57 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:57Z|00158|if_status|INFO|Dropped 2 log messages in last 92 seconds (most recently, 92 seconds ago) due to excessive rate
Jan 30 23:53:57 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:57Z|00159|if_status|INFO|Not setting lport 44ce2302-14a4-4b06-b787-868c0ecda641 down as sb is readonly
Jan 30 23:53:57 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:57Z|00160|binding|INFO|Removing iface tap44ce2302-14 ovn-installed in OVS
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.130 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.130 239942 INFO nova.virt.libvirt.driver [-] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Instance destroyed successfully.#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.131 239942 DEBUG nova.objects.instance [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lazy-loading 'resources' on Instance uuid 3565cd51-2733-4486-a756-d28b4f47377e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:53:57 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:57Z|00161|binding|INFO|Releasing lport 44ce2302-14a4-4b06-b787-868c0ecda641 from this chassis (sb_readonly=0)
Jan 30 23:53:57 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:57Z|00162|binding|INFO|Setting lport 44ce2302-14a4-4b06-b787-868c0ecda641 down in Southbound
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.133 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.138 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:15:00 10.100.0.12'], port_security=['fa:16:3e:90:15:00 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3565cd51-2733-4486-a756-d28b4f47377e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e4b6ff09-e0ac-4b5c-a1ae-e4cd0ac951bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02525b3d-5afa-441f-ab06-d5abe31dc4af, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=44ce2302-14a4-4b06-b787-868c0ecda641) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:53:57 np0005603435 podman[262486]: 2026-01-31 04:53:57.139783752 +0000 UTC m=+0.099354591 container cleanup bdb0c00e574ea52027989b8927d7c2f394bcd33c67619e6a799aa05ab12fe92b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.144 239942 DEBUG nova.virt.libvirt.vif [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:53:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-94658018',display_name='tempest-TestVolumeBootPattern-server-94658018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-94658018',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:53:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-nx1aaliu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_us
er_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:53:54Z,user_data=None,user_id='e10f13b98624406985dec6a5dcc391c7',uuid=3565cd51-2733-4486-a756-d28b4f47377e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "44ce2302-14a4-4b06-b787-868c0ecda641", "address": "fa:16:3e:90:15:00", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ce2302-14", "ovs_interfaceid": "44ce2302-14a4-4b06-b787-868c0ecda641", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.144 239942 DEBUG nova.network.os_vif_util [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "44ce2302-14a4-4b06-b787-868c0ecda641", "address": "fa:16:3e:90:15:00", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44ce2302-14", "ovs_interfaceid": "44ce2302-14a4-4b06-b787-868c0ecda641", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.145 239942 DEBUG nova.network.os_vif_util [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:15:00,bridge_name='br-int',has_traffic_filtering=True,id=44ce2302-14a4-4b06-b787-868c0ecda641,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ce2302-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.146 239942 DEBUG os_vif [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:15:00,bridge_name='br-int',has_traffic_filtering=True,id=44ce2302-14a4-4b06-b787-868c0ecda641,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ce2302-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.148 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.148 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44ce2302-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.150 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.154 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:53:57 np0005603435 systemd[1]: libpod-conmon-bdb0c00e574ea52027989b8927d7c2f394bcd33c67619e6a799aa05ab12fe92b.scope: Deactivated successfully.
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.156 239942 INFO os_vif [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:15:00,bridge_name='br-int',has_traffic_filtering=True,id=44ce2302-14a4-4b06-b787-868c0ecda641,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44ce2302-14')#033[00m
Jan 30 23:53:57 np0005603435 podman[262522]: 2026-01-31 04:53:57.210835756 +0000 UTC m=+0.050767667 container remove bdb0c00e574ea52027989b8927d7c2f394bcd33c67619e6a799aa05ab12fe92b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.214 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[8f623ae7-3ce3-4da0-8ab3-4df61c1b263b]: (4, ('Sat Jan 31 04:53:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 (bdb0c00e574ea52027989b8927d7c2f394bcd33c67619e6a799aa05ab12fe92b)\nbdb0c00e574ea52027989b8927d7c2f394bcd33c67619e6a799aa05ab12fe92b\nSat Jan 31 04:53:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 (bdb0c00e574ea52027989b8927d7c2f394bcd33c67619e6a799aa05ab12fe92b)\nbdb0c00e574ea52027989b8927d7c2f394bcd33c67619e6a799aa05ab12fe92b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.216 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[919c752c-6c87-465e-bf2e-83a8ef740dcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.217 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b0cf2db-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.219 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:57 np0005603435 kernel: tap5b0cf2db-20: left promiscuous mode
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.226 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.228 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[209ee4bb-d5db-4b65-b16d-d8799479d046]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.243 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2af14d13-8f20-42cd-8da2-b2cc4fc33b54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.244 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3469b5-9567-4075-bdfe-ff6e80559844]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.257 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[db3189ac-5e61-4354-9767-593b9a208618]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428820, 'reachable_time': 22874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262552, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:57 np0005603435 systemd[1]: run-netns-ovnmeta\x2d5b0cf2db\x2d2e35\x2d41fa\x2d9783\x2d30f0fe6ea7a3.mount: Deactivated successfully.
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.261 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.261 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[46a3ba07-5680-4b0d-9080-a64420d1dcea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.261 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 44ce2302-14a4-4b06-b787-868c0ecda641 in datapath 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 unbound from our chassis#033[00m
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.263 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.264 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[643803a1-840e-4b46-85b8-4ec5941ebbc5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.264 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 44ce2302-14a4-4b06-b787-868c0ecda641 in datapath 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 unbound from our chassis#033[00m
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.266 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:53:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:57.266 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[e974c008-96e0-4aa4-aeae-3e94b1524788]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.290 239942 INFO nova.virt.libvirt.driver [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Deleting instance files /var/lib/nova/instances/3565cd51-2733-4486-a756-d28b4f47377e_del#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.290 239942 INFO nova.virt.libvirt.driver [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Deletion of /var/lib/nova/instances/3565cd51-2733-4486-a756-d28b4f47377e_del complete#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.339 239942 INFO nova.compute.manager [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Took 0.45 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.340 239942 DEBUG oslo.service.loopingcall [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.340 239942 DEBUG nova.compute.manager [-] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.340 239942 DEBUG nova.network.neutron [-] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:53:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 327 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 11 MiB/s wr, 194 op/s
Jan 30 23:53:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:53:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1441050438' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.487 239942 DEBUG oslo_concurrency.processutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.629 239942 DEBUG os_brick.encryptors [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Using volume encryption metadata '{'encryption_key_id': '1ed50af6-b97b-4be5-b2f5-5093462e5d3c', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4f228222-15a8-4d83-9c16-585b710e0685', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4f228222-15a8-4d83-9c16-585b710e0685', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '2437d98a-1c5d-4451-bf32-cb4bb2d82a82', 'attached_at': '', 'detached_at': '', 'volume_id': '4f228222-15a8-4d83-9c16-585b710e0685', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.631 239942 DEBUG barbicanclient.client [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.647 239942 DEBUG barbicanclient.v1.secrets [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/1ed50af6-b97b-4be5-b2f5-5093462e5d3c get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.648 239942 INFO barbicanclient.base [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1ed50af6-b97b-4be5-b2f5-5093462e5d3c#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.679 239942 DEBUG barbicanclient.client [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.679 239942 INFO barbicanclient.base [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1ed50af6-b97b-4be5-b2f5-5093462e5d3c#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.698 239942 DEBUG barbicanclient.client [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.699 239942 INFO barbicanclient.base [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1ed50af6-b97b-4be5-b2f5-5093462e5d3c#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.726 239942 DEBUG barbicanclient.client [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.726 239942 INFO barbicanclient.base [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1ed50af6-b97b-4be5-b2f5-5093462e5d3c#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.753 239942 DEBUG barbicanclient.client [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.754 239942 INFO barbicanclient.base [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1ed50af6-b97b-4be5-b2f5-5093462e5d3c#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.778 239942 DEBUG barbicanclient.client [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.779 239942 INFO barbicanclient.base [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1ed50af6-b97b-4be5-b2f5-5093462e5d3c#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.812 239942 DEBUG barbicanclient.client [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.813 239942 INFO barbicanclient.base [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1ed50af6-b97b-4be5-b2f5-5093462e5d3c#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.841 239942 DEBUG barbicanclient.client [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.842 239942 INFO barbicanclient.base [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1ed50af6-b97b-4be5-b2f5-5093462e5d3c#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.865 239942 DEBUG barbicanclient.client [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.866 239942 INFO barbicanclient.base [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1ed50af6-b97b-4be5-b2f5-5093462e5d3c#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.895 239942 DEBUG barbicanclient.client [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.895 239942 INFO barbicanclient.base [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1ed50af6-b97b-4be5-b2f5-5093462e5d3c#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.952 239942 DEBUG barbicanclient.client [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.953 239942 INFO barbicanclient.base [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1ed50af6-b97b-4be5-b2f5-5093462e5d3c#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.985 239942 DEBUG barbicanclient.client [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:57 np0005603435 nova_compute[239938]: 2026-01-31 04:53:57.986 239942 INFO barbicanclient.base [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1ed50af6-b97b-4be5-b2f5-5093462e5d3c#033[00m
Jan 30 23:53:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.007 239942 DEBUG barbicanclient.client [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.008 239942 INFO barbicanclient.base [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1ed50af6-b97b-4be5-b2f5-5093462e5d3c#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.027 239942 DEBUG barbicanclient.client [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.028 239942 INFO barbicanclient.base [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1ed50af6-b97b-4be5-b2f5-5093462e5d3c#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.044 239942 DEBUG nova.network.neutron [-] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.055 239942 DEBUG barbicanclient.client [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.056 239942 INFO barbicanclient.base [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1ed50af6-b97b-4be5-b2f5-5093462e5d3c#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.065 239942 INFO nova.compute.manager [-] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Took 0.73 seconds to deallocate network for instance.#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.079 239942 DEBUG barbicanclient.client [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.080 239942 DEBUG nova.virt.libvirt.host [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  <usage type="volume">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <volume>4f228222-15a8-4d83-9c16-585b710e0685</volume>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  </usage>
Jan 30 23:53:58 np0005603435 nova_compute[239938]: </secret>
Jan 30 23:53:58 np0005603435 nova_compute[239938]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.111 239942 DEBUG nova.virt.libvirt.vif [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-177511905',display_name='tempest-TransferEncryptedVolumeTest-server-177511905',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-177511905',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB828SO4KCiS/c6FYV17F5UX+BLYIRAc4CyTZA4fXDNG/eieZI8ChuIejzpTuF2CfgKMQEbMYMZVWf9xnEOSXNVsZsXIi11a3wsxGw0mmNb26j9vmggnToYyQthSze7emg==',key_name='tempest-TransferEncryptedVolumeTest-938095670',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b39f0e168b54a4b8f976894d21361e6',ramdisk_id='',reservation_id='r-sq77q9js',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-483286292',owner_user_name='tempest-TransferEncryptedVolumeTest-483286292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:53:52Z,user_data=None,user_id='27f1a6fb472c4c5fa2286d0fa48dca34',uuid=2437d98a-1c5d-4451-bf32-cb4bb2d82a82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a032608c-fd47-442f-a668-0d122437d8c8", "address": "fa:16:3e:c8:62:e7", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa032608c-fd", "ovs_interfaceid": "a032608c-fd47-442f-a668-0d122437d8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.112 239942 DEBUG nova.network.os_vif_util [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converting VIF {"id": "a032608c-fd47-442f-a668-0d122437d8c8", "address": "fa:16:3e:c8:62:e7", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa032608c-fd", "ovs_interfaceid": "a032608c-fd47-442f-a668-0d122437d8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.113 239942 DEBUG nova.network.os_vif_util [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:62:e7,bridge_name='br-int',has_traffic_filtering=True,id=a032608c-fd47-442f-a668-0d122437d8c8,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa032608c-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.116 239942 DEBUG nova.objects.instance [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2437d98a-1c5d-4451-bf32-cb4bb2d82a82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.133 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  <uuid>2437d98a-1c5d-4451-bf32-cb4bb2d82a82</uuid>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  <name>instance-00000010</name>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-177511905</nova:name>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:53:56</nova:creationTime>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:        <nova:user uuid="27f1a6fb472c4c5fa2286d0fa48dca34">tempest-TransferEncryptedVolumeTest-483286292-project-member</nova:user>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:        <nova:project uuid="9b39f0e168b54a4b8f976894d21361e6">tempest-TransferEncryptedVolumeTest-483286292</nova:project>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:        <nova:port uuid="a032608c-fd47-442f-a668-0d122437d8c8">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <entry name="serial">2437d98a-1c5d-4451-bf32-cb4bb2d82a82</entry>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <entry name="uuid">2437d98a-1c5d-4451-bf32-cb4bb2d82a82</entry>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/2437d98a-1c5d-4451-bf32-cb4bb2d82a82_disk.config">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="volumes/volume-4f228222-15a8-4d83-9c16-585b710e0685">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <serial>4f228222-15a8-4d83-9c16-585b710e0685</serial>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <encryption format="luks">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:        <secret type="passphrase" uuid="f11980a1-80fc-4f31-a750-f636e48250e0"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      </encryption>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:c8:62:e7"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <target dev="tapa032608c-fd"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/2437d98a-1c5d-4451-bf32-cb4bb2d82a82/console.log" append="off"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:53:58 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:53:58 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:53:58 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:53:58 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.134 239942 DEBUG nova.compute.manager [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Preparing to wait for external event network-vif-plugged-a032608c-fd47-442f-a668-0d122437d8c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.134 239942 DEBUG oslo_concurrency.lockutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.135 239942 DEBUG oslo_concurrency.lockutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.135 239942 DEBUG oslo_concurrency.lockutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.136 239942 DEBUG nova.virt.libvirt.vif [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-177511905',display_name='tempest-TransferEncryptedVolumeTest-server-177511905',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-177511905',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB828SO4KCiS/c6FYV17F5UX+BLYIRAc4CyTZA4fXDNG/eieZI8ChuIejzpTuF2CfgKMQEbMYMZVWf9xnEOSXNVsZsXIi11a3wsxGw0mmNb26j9vmggnToYyQthSze7emg==',key_name='tempest-TransferEncryptedVolumeTest-938095670',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b39f0e168b54a4b8f976894d21361e6',ramdisk_id='',reservation_id='r-sq77q9js',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-483286292',owner_user_name='tempest-TransferEncryptedVolumeTest-483286292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:53:52Z,user_data=None,user_id='27f1a6fb472c4c5fa2286d0fa48dca34',uuid=2437d98a-1c5d-4451-bf32-cb4bb2d82a82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a032608c-fd47-442f-a668-0d122437d8c8", "address": "fa:16:3e:c8:62:e7", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa032608c-fd", "ovs_interfaceid": "a032608c-fd47-442f-a668-0d122437d8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.137 239942 DEBUG nova.network.os_vif_util [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converting VIF {"id": "a032608c-fd47-442f-a668-0d122437d8c8", "address": "fa:16:3e:c8:62:e7", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa032608c-fd", "ovs_interfaceid": "a032608c-fd47-442f-a668-0d122437d8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.138 239942 DEBUG nova.network.os_vif_util [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:62:e7,bridge_name='br-int',has_traffic_filtering=True,id=a032608c-fd47-442f-a668-0d122437d8c8,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa032608c-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.139 239942 DEBUG os_vif [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:62:e7,bridge_name='br-int',has_traffic_filtering=True,id=a032608c-fd47-442f-a668-0d122437d8c8,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa032608c-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.139 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.140 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.140 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.144 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.145 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa032608c-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.145 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa032608c-fd, col_values=(('external_ids', {'iface-id': 'a032608c-fd47-442f-a668-0d122437d8c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:62:e7', 'vm-uuid': '2437d98a-1c5d-4451-bf32-cb4bb2d82a82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.148 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:58 np0005603435 NetworkManager[49097]: <info>  [1769835238.1507] manager: (tapa032608c-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.151 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.154 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.156 239942 INFO os_vif [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:62:e7,bridge_name='br-int',has_traffic_filtering=True,id=a032608c-fd47-442f-a668-0d122437d8c8,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa032608c-fd')#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.177 239942 DEBUG nova.compute.manager [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Received event network-vif-unplugged-44ce2302-14a4-4b06-b787-868c0ecda641 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.178 239942 DEBUG oslo_concurrency.lockutils [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "3565cd51-2733-4486-a756-d28b4f47377e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.178 239942 DEBUG oslo_concurrency.lockutils [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.179 239942 DEBUG oslo_concurrency.lockutils [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.179 239942 DEBUG nova.compute.manager [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] No waiting events found dispatching network-vif-unplugged-44ce2302-14a4-4b06-b787-868c0ecda641 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.180 239942 DEBUG nova.compute.manager [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Received event network-vif-unplugged-44ce2302-14a4-4b06-b787-868c0ecda641 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.180 239942 DEBUG nova.compute.manager [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Received event network-vif-plugged-44ce2302-14a4-4b06-b787-868c0ecda641 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.180 239942 DEBUG oslo_concurrency.lockutils [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "3565cd51-2733-4486-a756-d28b4f47377e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.181 239942 DEBUG oslo_concurrency.lockutils [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.181 239942 DEBUG oslo_concurrency.lockutils [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.181 239942 DEBUG nova.compute.manager [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] No waiting events found dispatching network-vif-plugged-44ce2302-14a4-4b06-b787-868c0ecda641 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.182 239942 WARNING nova.compute.manager [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Received unexpected event network-vif-plugged-44ce2302-14a4-4b06-b787-868c0ecda641 for instance with vm_state active and task_state deleting.#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.182 239942 DEBUG nova.compute.manager [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Received event network-vif-plugged-44ce2302-14a4-4b06-b787-868c0ecda641 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.183 239942 DEBUG oslo_concurrency.lockutils [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "3565cd51-2733-4486-a756-d28b4f47377e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.183 239942 DEBUG oslo_concurrency.lockutils [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.183 239942 DEBUG oslo_concurrency.lockutils [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.183 239942 DEBUG nova.compute.manager [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] No waiting events found dispatching network-vif-plugged-44ce2302-14a4-4b06-b787-868c0ecda641 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.184 239942 WARNING nova.compute.manager [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Received unexpected event network-vif-plugged-44ce2302-14a4-4b06-b787-868c0ecda641 for instance with vm_state active and task_state deleting.#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.184 239942 DEBUG nova.compute.manager [req-ee14b7bc-ad82-412d-8a0f-824eb9160580 req-0adaf960-6210-4fb8-b27d-ef8135e316ff c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Received event network-vif-deleted-44ce2302-14a4-4b06-b787-868c0ecda641 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.224 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.224 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.225 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] No VIF found with MAC fa:16:3e:c8:62:e7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.225 239942 INFO nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Using config drive#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.249 239942 DEBUG nova.storage.rbd_utils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] rbd image 2437d98a-1c5d-4451-bf32-cb4bb2d82a82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.292 239942 INFO nova.compute.manager [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Took 0.23 seconds to detach 1 volumes for instance.#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.294 239942 DEBUG nova.compute.manager [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Deleting volume: b4cf56fa-2adc-4c8b-983a-1e0ead94f401 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.466 239942 DEBUG nova.network.neutron [req-413f590e-b064-4335-bf55-1efeb73bafb4 req-1fd2f103-51d1-459a-9b68-d216b7aa160f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Updated VIF entry in instance network info cache for port a032608c-fd47-442f-a668-0d122437d8c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.466 239942 DEBUG nova.network.neutron [req-413f590e-b064-4335-bf55-1efeb73bafb4 req-1fd2f103-51d1-459a-9b68-d216b7aa160f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Updating instance_info_cache with network_info: [{"id": "a032608c-fd47-442f-a668-0d122437d8c8", "address": "fa:16:3e:c8:62:e7", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa032608c-fd", "ovs_interfaceid": "a032608c-fd47-442f-a668-0d122437d8c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.484 239942 DEBUG oslo_concurrency.lockutils [req-413f590e-b064-4335-bf55-1efeb73bafb4 req-1fd2f103-51d1-459a-9b68-d216b7aa160f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-2437d98a-1c5d-4451-bf32-cb4bb2d82a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.725 239942 DEBUG oslo_concurrency.lockutils [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.725 239942 DEBUG oslo_concurrency.lockutils [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.804 239942 INFO nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Creating config drive at /var/lib/nova/instances/2437d98a-1c5d-4451-bf32-cb4bb2d82a82/disk.config#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.811 239942 DEBUG oslo_concurrency.processutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2437d98a-1c5d-4451-bf32-cb4bb2d82a82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp57py_5es execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.870 239942 DEBUG oslo_concurrency.processutils [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.943 239942 DEBUG oslo_concurrency.processutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2437d98a-1c5d-4451-bf32-cb4bb2d82a82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp57py_5es" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.967 239942 DEBUG nova.storage.rbd_utils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] rbd image 2437d98a-1c5d-4451-bf32-cb4bb2d82a82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:53:58 np0005603435 nova_compute[239938]: 2026-01-31 04:53:58.971 239942 DEBUG oslo_concurrency.processutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2437d98a-1c5d-4451-bf32-cb4bb2d82a82/disk.config 2437d98a-1c5d-4451-bf32-cb4bb2d82a82_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.127 239942 DEBUG oslo_concurrency.processutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2437d98a-1c5d-4451-bf32-cb4bb2d82a82/disk.config 2437d98a-1c5d-4451-bf32-cb4bb2d82a82_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.128 239942 INFO nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Deleting local config drive /var/lib/nova/instances/2437d98a-1c5d-4451-bf32-cb4bb2d82a82/disk.config because it was imported into RBD.#033[00m
Jan 30 23:53:59 np0005603435 NetworkManager[49097]: <info>  [1769835239.1730] manager: (tapa032608c-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Jan 30 23:53:59 np0005603435 kernel: tapa032608c-fd: entered promiscuous mode
Jan 30 23:53:59 np0005603435 systemd-udevd[262430]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:53:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:53:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1202003892' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:53:59 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:59Z|00163|binding|INFO|Claiming lport a032608c-fd47-442f-a668-0d122437d8c8 for this chassis.
Jan 30 23:53:59 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:59Z|00164|binding|INFO|a032608c-fd47-442f-a668-0d122437d8c8: Claiming fa:16:3e:c8:62:e7 10.100.0.4
Jan 30 23:53:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.177 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1202003892' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.184 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:62:e7 10.100.0.4'], port_security=['fa:16:3e:c8:62:e7 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2437d98a-1c5d-4451-bf32-cb4bb2d82a82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a10d9666-b672-4619-83b7-22dc781b5b5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b39f0e168b54a4b8f976894d21361e6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ff571068-2221-49e0-84fe-8c4b85bf5ac6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21f14c68-4084-427c-b05e-592b1db029c6, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=a032608c-fd47-442f-a668-0d122437d8c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.185 156017 INFO neutron.agent.ovn.metadata.agent [-] Port a032608c-fd47-442f-a668-0d122437d8c8 in datapath a10d9666-b672-4619-83b7-22dc781b5b5b bound to our chassis#033[00m
Jan 30 23:53:59 np0005603435 NetworkManager[49097]: <info>  [1769835239.1863] device (tapa032608c-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.187 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a10d9666-b672-4619-83b7-22dc781b5b5b#033[00m
Jan 30 23:53:59 np0005603435 NetworkManager[49097]: <info>  [1769835239.1876] device (tapa032608c-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:53:59 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:59Z|00165|binding|INFO|Setting lport a032608c-fd47-442f-a668-0d122437d8c8 ovn-installed in OVS
Jan 30 23:53:59 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:59Z|00166|binding|INFO|Setting lport a032608c-fd47-442f-a668-0d122437d8c8 up in Southbound
Jan 30 23:53:59 np0005603435 systemd-machined[208030]: New machine qemu-16-instance-00000010.
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.198 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.200 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.203 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8264de-37b2-4bf5-92c4-70deee2fb978]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.203 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa10d9666-b1 in ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.205 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa10d9666-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.205 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[d02d05ac-9aab-4725-93eb-91539f03b7e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.207 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[382c4b2d-8b93-45bc-80e9-b5ff763e2ccd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.218 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[ba1829ba-b09b-408a-a568-ddea5d030471]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.239 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[45ae3a35-7a0b-41d1-a2fd-300510af058e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.278 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[6273dacc-f40f-40c0-a325-ff5b6378f16b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 NetworkManager[49097]: <info>  [1769835239.2849] manager: (tapa10d9666-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.283 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[94dadc83-b4ab-4a6f-9542-0d517d52f6b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.325 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[f52e90eb-1c13-4109-8a17-df4790f3926e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.328 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[a8872f3c-5806-4f8a-bf2e-ab4268b13719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.346 239942 DEBUG oslo_concurrency.lockutils [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "961014c5-246e-4bd6-b7e8-86d49599034a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.347 239942 DEBUG oslo_concurrency.lockutils [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:59 np0005603435 NetworkManager[49097]: <info>  [1769835239.3524] device (tapa10d9666-b0): carrier: link connected
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.358 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[91a28914-0482-4644-8ede-07060309a157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.365 239942 DEBUG nova.objects.instance [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lazy-loading 'flavor' on Instance uuid 961014c5-246e-4bd6-b7e8-86d49599034a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.370 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[38947e52-6b44-43a6-9600-ac05e7ea846f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa10d9666-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:c0:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429375, 'reachable_time': 28948, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262680, 'error': None, 'target': 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.383 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4c92bf-e251-44c5-9a10-3b964d209ba1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:c0da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429375, 'tstamp': 429375}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262681, 'error': None, 'target': 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 327 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 9.2 MiB/s wr, 172 op/s
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.394 239942 INFO nova.virt.libvirt.driver [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Ignoring supplied device name: /dev/vdb#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.398 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[99b60191-9422-4d83-ad22-8a16f373bb34]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa10d9666-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:c0:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429375, 'reachable_time': 28948, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262682, 'error': None, 'target': 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.410 239942 DEBUG oslo_concurrency.lockutils [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:53:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3596057027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.429 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9de4ffbd-f6b3-4935-8cf8-e4f10ca77407]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.437 239942 DEBUG oslo_concurrency.processutils [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.444 239942 DEBUG nova.compute.provider_tree [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.461 239942 DEBUG nova.scheduler.client.report [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.484 239942 DEBUG oslo_concurrency.lockutils [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.484 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[740c9c5a-c81b-4066-bd3a-4696bfddbf89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.486 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa10d9666-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.487 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.487 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa10d9666-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.490 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:59 np0005603435 kernel: tapa10d9666-b0: entered promiscuous mode
Jan 30 23:53:59 np0005603435 NetworkManager[49097]: <info>  [1769835239.4912] manager: (tapa10d9666-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.493 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.495 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa10d9666-b0, col_values=(('external_ids', {'iface-id': 'b5040674-bbd1-4dc9-b2e1-14712cb60315'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.497 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:59 np0005603435 ovn_controller[145670]: 2026-01-31T04:53:59Z|00167|binding|INFO|Releasing lport b5040674-bbd1-4dc9-b2e1-14712cb60315 from this chassis (sb_readonly=0)
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.498 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.499 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a10d9666-b672-4619-83b7-22dc781b5b5b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a10d9666-b672-4619-83b7-22dc781b5b5b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.500 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[bf04f37e-5e16-45db-b938-af60c8e08d8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.501 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-a10d9666-b672-4619-83b7-22dc781b5b5b
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/a10d9666-b672-4619-83b7-22dc781b5b5b.pid.haproxy
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID a10d9666-b672-4619-83b7-22dc781b5b5b
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:53:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:53:59.502 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'env', 'PROCESS_TAG=haproxy-a10d9666-b672-4619-83b7-22dc781b5b5b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a10d9666-b672-4619-83b7-22dc781b5b5b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.504 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.511 239942 INFO nova.scheduler.client.report [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Deleted allocations for instance 3565cd51-2733-4486-a756-d28b4f47377e#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.577 239942 DEBUG oslo_concurrency.lockutils [None req-491fcd9e-9ec1-49f5-b8ba-08b79fc09490 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "3565cd51-2733-4486-a756-d28b4f47377e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.610 239942 DEBUG oslo_concurrency.lockutils [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "961014c5-246e-4bd6-b7e8-86d49599034a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.610 239942 DEBUG oslo_concurrency.lockutils [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.611 239942 INFO nova.compute.manager [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Attaching volume 77225370-5d50-49c5-9bd6-9de4b58fd2ca to /dev/vdb#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.737 239942 DEBUG os_brick.utils [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.738 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.750 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.750 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[fcb47db3-323c-4093-a1d9-c4c9f603465a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.751 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.759 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.759 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[712f9b75-8e06-4dc3-aecc-283800277ddf]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.760 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.769 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.770 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[64f07e96-cae5-4436-bfaf-906ea1ffc87e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.771 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[5bee6ec2-859e-49b7-9b08-e06fdbe7759f]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.772 239942 DEBUG oslo_concurrency.processutils [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.792 239942 DEBUG oslo_concurrency.processutils [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.795 239942 DEBUG os_brick.initiator.connectors.lightos [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.796 239942 DEBUG os_brick.initiator.connectors.lightos [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.796 239942 DEBUG os_brick.initiator.connectors.lightos [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.797 239942 DEBUG os_brick.utils [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] <== get_connector_properties: return (58ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.797 239942 DEBUG nova.virt.block_device [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Updating existing volume attachment record: 98d97da3-5aed-4def-acc0-065d9fd03398 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:53:59 np0005603435 podman[262759]: 2026-01-31 04:53:59.845505881 +0000 UTC m=+0.043861168 container create 5d4f24d04a2b16684ea9d74927251ff605544445addadefc6e49abdf68865e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:53:59 np0005603435 systemd[1]: Started libpod-conmon-5d4f24d04a2b16684ea9d74927251ff605544445addadefc6e49abdf68865e34.scope.
Jan 30 23:53:59 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:53:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/504ba492119f03416ca09172c6e477e20842594499962ad5933b1b6a722ab49f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:53:59 np0005603435 podman[262759]: 2026-01-31 04:53:59.825329215 +0000 UTC m=+0.023684512 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:53:59 np0005603435 podman[262759]: 2026-01-31 04:53:59.927396021 +0000 UTC m=+0.125751318 container init 5d4f24d04a2b16684ea9d74927251ff605544445addadefc6e49abdf68865e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:53:59 np0005603435 nova_compute[239938]: 2026-01-31 04:53:59.931 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:53:59 np0005603435 podman[262759]: 2026-01-31 04:53:59.937505769 +0000 UTC m=+0.135861046 container start 5d4f24d04a2b16684ea9d74927251ff605544445addadefc6e49abdf68865e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:53:59 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[262775]: [NOTICE]   (262779) : New worker (262781) forked
Jan 30 23:53:59 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[262775]: [NOTICE]   (262779) : Loading success.
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.323 239942 DEBUG nova.compute.manager [req-88b47a21-1997-467e-bd0b-3800c8bb964e req-a97316c6-388e-4247-abd7-70785b0166ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Received event network-vif-plugged-a032608c-fd47-442f-a668-0d122437d8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.324 239942 DEBUG oslo_concurrency.lockutils [req-88b47a21-1997-467e-bd0b-3800c8bb964e req-a97316c6-388e-4247-abd7-70785b0166ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.324 239942 DEBUG oslo_concurrency.lockutils [req-88b47a21-1997-467e-bd0b-3800c8bb964e req-a97316c6-388e-4247-abd7-70785b0166ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.325 239942 DEBUG oslo_concurrency.lockutils [req-88b47a21-1997-467e-bd0b-3800c8bb964e req-a97316c6-388e-4247-abd7-70785b0166ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.325 239942 DEBUG nova.compute.manager [req-88b47a21-1997-467e-bd0b-3800c8bb964e req-a97316c6-388e-4247-abd7-70785b0166ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Processing event network-vif-plugged-a032608c-fd47-442f-a668-0d122437d8c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.326 239942 DEBUG nova.compute.manager [req-88b47a21-1997-467e-bd0b-3800c8bb964e req-a97316c6-388e-4247-abd7-70785b0166ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Received event network-vif-plugged-a032608c-fd47-442f-a668-0d122437d8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.326 239942 DEBUG oslo_concurrency.lockutils [req-88b47a21-1997-467e-bd0b-3800c8bb964e req-a97316c6-388e-4247-abd7-70785b0166ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.327 239942 DEBUG oslo_concurrency.lockutils [req-88b47a21-1997-467e-bd0b-3800c8bb964e req-a97316c6-388e-4247-abd7-70785b0166ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.327 239942 DEBUG oslo_concurrency.lockutils [req-88b47a21-1997-467e-bd0b-3800c8bb964e req-a97316c6-388e-4247-abd7-70785b0166ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.327 239942 DEBUG nova.compute.manager [req-88b47a21-1997-467e-bd0b-3800c8bb964e req-a97316c6-388e-4247-abd7-70785b0166ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] No waiting events found dispatching network-vif-plugged-a032608c-fd47-442f-a668-0d122437d8c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.328 239942 WARNING nova.compute.manager [req-88b47a21-1997-467e-bd0b-3800c8bb964e req-a97316c6-388e-4247-abd7-70785b0166ea c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Received unexpected event network-vif-plugged-a032608c-fd47-442f-a668-0d122437d8c8 for instance with vm_state building and task_state spawning.#033[00m
Jan 30 23:54:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:54:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1492389387' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.796 239942 DEBUG nova.objects.instance [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lazy-loading 'flavor' on Instance uuid 961014c5-246e-4bd6-b7e8-86d49599034a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.816 239942 DEBUG nova.virt.libvirt.driver [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Attempting to attach volume 77225370-5d50-49c5-9bd6-9de4b58fd2ca with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.820 239942 DEBUG nova.virt.libvirt.guest [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] attach device xml: <disk type="network" device="disk">
Jan 30 23:54:00 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:54:00 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-77225370-5d50-49c5-9bd6-9de4b58fd2ca">
Jan 30 23:54:00 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:54:00 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:54:00 np0005603435 nova_compute[239938]:  <auth username="openstack">
Jan 30 23:54:00 np0005603435 nova_compute[239938]:    <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:54:00 np0005603435 nova_compute[239938]:  </auth>
Jan 30 23:54:00 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:54:00 np0005603435 nova_compute[239938]:  <serial>77225370-5d50-49c5-9bd6-9de4b58fd2ca</serial>
Jan 30 23:54:00 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:54:00 np0005603435 nova_compute[239938]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.958 239942 DEBUG nova.virt.libvirt.driver [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.958 239942 DEBUG nova.virt.libvirt.driver [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.959 239942 DEBUG nova.virt.libvirt.driver [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:54:00 np0005603435 nova_compute[239938]: 2026-01-31 04:54:00.959 239942 DEBUG nova.virt.libvirt.driver [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] No VIF found with MAC fa:16:3e:a9:94:43, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:54:01 np0005603435 nova_compute[239938]: 2026-01-31 04:54:01.208 239942 DEBUG oslo_concurrency.lockutils [None req-39f00d4c-fef5-4403-8a79-1ca51d6ace64 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 327 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 9.2 MiB/s wr, 190 op/s
Jan 30 23:54:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Jan 30 23:54:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Jan 30 23:54:01 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.034 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835242.033029, 2437d98a-1c5d-4451-bf32-cb4bb2d82a82 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.034 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] VM Started (Lifecycle Event)#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.037 239942 DEBUG nova.compute.manager [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.041 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.046 239942 INFO nova.virt.libvirt.driver [-] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Instance spawned successfully.#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.046 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.058 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.067 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.074 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.074 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.075 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.075 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.076 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.076 239942 DEBUG nova.virt.libvirt.driver [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.087 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.087 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835242.033193, 2437d98a-1c5d-4451-bf32-cb4bb2d82a82 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.088 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.118 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.122 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835242.0403492, 2437d98a-1c5d-4451-bf32-cb4bb2d82a82 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.122 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.134 239942 INFO nova.compute.manager [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Took 8.02 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.135 239942 DEBUG nova.compute.manager [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.144 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.147 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.174 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.209 239942 INFO nova.compute.manager [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Took 10.52 seconds to build instance.#033[00m
Jan 30 23:54:02 np0005603435 nova_compute[239938]: 2026-01-31 04:54:02.227 239942 DEBUG oslo_concurrency.lockutils [None req-4a3f9212-7f1b-465c-a3e3-9e45d08a401a 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Jan 30 23:54:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Jan 30 23:54:02 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Jan 30 23:54:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:54:03 np0005603435 nova_compute[239938]: 2026-01-31 04:54:03.150 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 327 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 101 KiB/s wr, 199 op/s
Jan 30 23:54:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:54:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2718182' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:54:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:54:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2718182' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:54:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Jan 30 23:54:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Jan 30 23:54:03 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Jan 30 23:54:04 np0005603435 nova_compute[239938]: 2026-01-31 04:54:04.964 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 318 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 50 KiB/s wr, 183 op/s
Jan 30 23:54:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Jan 30 23:54:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Jan 30 23:54:05 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Jan 30 23:54:05 np0005603435 nova_compute[239938]: 2026-01-31 04:54:05.697 239942 DEBUG nova.compute.manager [req-cfd5487f-abb0-406b-85c1-16386a50afb6 req-34e880b7-c2ce-45a2-a183-d62ee32b29e4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Received event network-changed-a032608c-fd47-442f-a668-0d122437d8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:05 np0005603435 nova_compute[239938]: 2026-01-31 04:54:05.698 239942 DEBUG nova.compute.manager [req-cfd5487f-abb0-406b-85c1-16386a50afb6 req-34e880b7-c2ce-45a2-a183-d62ee32b29e4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Refreshing instance network info cache due to event network-changed-a032608c-fd47-442f-a668-0d122437d8c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:54:05 np0005603435 nova_compute[239938]: 2026-01-31 04:54:05.698 239942 DEBUG oslo_concurrency.lockutils [req-cfd5487f-abb0-406b-85c1-16386a50afb6 req-34e880b7-c2ce-45a2-a183-d62ee32b29e4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-2437d98a-1c5d-4451-bf32-cb4bb2d82a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:54:05 np0005603435 nova_compute[239938]: 2026-01-31 04:54:05.698 239942 DEBUG oslo_concurrency.lockutils [req-cfd5487f-abb0-406b-85c1-16386a50afb6 req-34e880b7-c2ce-45a2-a183-d62ee32b29e4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-2437d98a-1c5d-4451-bf32-cb4bb2d82a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:54:05 np0005603435 nova_compute[239938]: 2026-01-31 04:54:05.698 239942 DEBUG nova.network.neutron [req-cfd5487f-abb0-406b-85c1-16386a50afb6 req-34e880b7-c2ce-45a2-a183-d62ee32b29e4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Refreshing network info cache for port a032608c-fd47-442f-a668-0d122437d8c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:54:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:54:06
Jan 30 23:54:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:54:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:54:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'vms', '.rgw.root', 'default.rgw.meta', '.mgr']
Jan 30 23:54:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:54:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Jan 30 23:54:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Jan 30 23:54:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Jan 30 23:54:06 np0005603435 nova_compute[239938]: 2026-01-31 04:54:06.843 239942 DEBUG nova.network.neutron [req-cfd5487f-abb0-406b-85c1-16386a50afb6 req-34e880b7-c2ce-45a2-a183-d62ee32b29e4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Updated VIF entry in instance network info cache for port a032608c-fd47-442f-a668-0d122437d8c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:54:06 np0005603435 nova_compute[239938]: 2026-01-31 04:54:06.843 239942 DEBUG nova.network.neutron [req-cfd5487f-abb0-406b-85c1-16386a50afb6 req-34e880b7-c2ce-45a2-a183-d62ee32b29e4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Updating instance_info_cache with network_info: [{"id": "a032608c-fd47-442f-a668-0d122437d8c8", "address": "fa:16:3e:c8:62:e7", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa032608c-fd", "ovs_interfaceid": "a032608c-fd47-442f-a668-0d122437d8c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:54:06 np0005603435 nova_compute[239938]: 2026-01-31 04:54:06.863 239942 DEBUG oslo_concurrency.lockutils [req-cfd5487f-abb0-406b-85c1-16386a50afb6 req-34e880b7-c2ce-45a2-a183-d62ee32b29e4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-2437d98a-1c5d-4451-bf32-cb4bb2d82a82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:54:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:54:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:54:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:54:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:54:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:54:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:54:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 281 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 6.7 MiB/s rd, 17 KiB/s wr, 316 op/s
Jan 30 23:54:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Jan 30 23:54:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Jan 30 23:54:07 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Jan 30 23:54:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:54:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:54:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:54:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:54:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:54:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:54:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Jan 30 23:54:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Jan 30 23:54:08 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Jan 30 23:54:08 np0005603435 nova_compute[239938]: 2026-01-31 04:54:08.152 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:54:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:54:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:54:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:54:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:54:08 np0005603435 nova_compute[239938]: 2026-01-31 04:54:08.825 239942 DEBUG oslo_concurrency.lockutils [None req-f609f25e-a691-4b20-974f-4bb21a84826d d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "961014c5-246e-4bd6-b7e8-86d49599034a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:08 np0005603435 nova_compute[239938]: 2026-01-31 04:54:08.826 239942 DEBUG oslo_concurrency.lockutils [None req-f609f25e-a691-4b20-974f-4bb21a84826d d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:08 np0005603435 nova_compute[239938]: 2026-01-31 04:54:08.841 239942 INFO nova.compute.manager [None req-f609f25e-a691-4b20-974f-4bb21a84826d d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Detaching volume 77225370-5d50-49c5-9bd6-9de4b58fd2ca#033[00m
Jan 30 23:54:08 np0005603435 nova_compute[239938]: 2026-01-31 04:54:08.932 239942 DEBUG oslo_concurrency.lockutils [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "961014c5-246e-4bd6-b7e8-86d49599034a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.018 239942 INFO nova.virt.block_device [None req-f609f25e-a691-4b20-974f-4bb21a84826d d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Attempting to driver detach volume 77225370-5d50-49c5-9bd6-9de4b58fd2ca from mountpoint /dev/vdb#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.029 239942 DEBUG nova.virt.libvirt.driver [None req-f609f25e-a691-4b20-974f-4bb21a84826d d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Attempting to detach device vdb from instance 961014c5-246e-4bd6-b7e8-86d49599034a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.030 239942 DEBUG nova.virt.libvirt.guest [None req-f609f25e-a691-4b20-974f-4bb21a84826d d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:54:09 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:54:09 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-77225370-5d50-49c5-9bd6-9de4b58fd2ca">
Jan 30 23:54:09 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:54:09 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:54:09 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:54:09 np0005603435 nova_compute[239938]:  <serial>77225370-5d50-49c5-9bd6-9de4b58fd2ca</serial>
Jan 30 23:54:09 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:54:09 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:54:09 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.040 239942 INFO nova.virt.libvirt.driver [None req-f609f25e-a691-4b20-974f-4bb21a84826d d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Successfully detached device vdb from instance 961014c5-246e-4bd6-b7e8-86d49599034a from the persistent domain config.#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.041 239942 DEBUG nova.virt.libvirt.driver [None req-f609f25e-a691-4b20-974f-4bb21a84826d d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 961014c5-246e-4bd6-b7e8-86d49599034a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.041 239942 DEBUG nova.virt.libvirt.guest [None req-f609f25e-a691-4b20-974f-4bb21a84826d d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:54:09 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:54:09 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-77225370-5d50-49c5-9bd6-9de4b58fd2ca">
Jan 30 23:54:09 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:54:09 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:54:09 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:54:09 np0005603435 nova_compute[239938]:  <serial>77225370-5d50-49c5-9bd6-9de4b58fd2ca</serial>
Jan 30 23:54:09 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:54:09 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:54:09 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.147 239942 DEBUG nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Received event <DeviceRemovedEvent: 1769835249.1468039, 961014c5-246e-4bd6-b7e8-86d49599034a => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.149 239942 DEBUG nova.virt.libvirt.driver [None req-f609f25e-a691-4b20-974f-4bb21a84826d d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 961014c5-246e-4bd6-b7e8-86d49599034a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.152 239942 INFO nova.virt.libvirt.driver [None req-f609f25e-a691-4b20-974f-4bb21a84826d d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Successfully detached device vdb from instance 961014c5-246e-4bd6-b7e8-86d49599034a from the live domain config.#033[00m
Jan 30 23:54:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 281 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 20 KiB/s wr, 284 op/s
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.405 239942 DEBUG nova.objects.instance [None req-f609f25e-a691-4b20-974f-4bb21a84826d d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lazy-loading 'flavor' on Instance uuid 961014c5-246e-4bd6-b7e8-86d49599034a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.468 239942 DEBUG oslo_concurrency.lockutils [None req-f609f25e-a691-4b20-974f-4bb21a84826d d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.469 239942 DEBUG oslo_concurrency.lockutils [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.470 239942 DEBUG oslo_concurrency.lockutils [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.471 239942 DEBUG oslo_concurrency.lockutils [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.471 239942 DEBUG oslo_concurrency.lockutils [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.473 239942 INFO nova.compute.manager [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Terminating instance#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.476 239942 DEBUG nova.compute.manager [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:54:09 np0005603435 kernel: tap0dfbe40d-2b (unregistering): left promiscuous mode
Jan 30 23:54:09 np0005603435 NetworkManager[49097]: <info>  [1769835249.5701] device (tap0dfbe40d-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:54:09 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:09Z|00168|binding|INFO|Releasing lport 0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 from this chassis (sb_readonly=0)
Jan 30 23:54:09 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:09Z|00169|binding|INFO|Setting lport 0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 down in Southbound
Jan 30 23:54:09 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:09Z|00170|binding|INFO|Removing iface tap0dfbe40d-2b ovn-installed in OVS
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.584 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.586 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:09 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:09.593 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:94:43 10.100.0.10'], port_security=['fa:16:3e:a9:94:43 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '961014c5-246e-4bd6-b7e8-86d49599034a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45b5ded5-5fe4-488c-aa97-cad6ca9b361e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5ce1f57546045d891de80fbaff2512b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd79a6d46-298c-47b1-928a-16b62ca8df21', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa479721-2329-4784-af95-25b103421212, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:54:09 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:09.596 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 in datapath 45b5ded5-5fe4-488c-aa97-cad6ca9b361e unbound from our chassis#033[00m
Jan 30 23:54:09 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:09.601 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45b5ded5-5fe4-488c-aa97-cad6ca9b361e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:54:09 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:09.603 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[5443e25f-bf2f-402d-bf71-2bdc3662c545]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:09 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:09.604 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e namespace which is not needed anymore#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.606 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:09 np0005603435 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Jan 30 23:54:09 np0005603435 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 12.698s CPU time.
Jan 30 23:54:09 np0005603435 systemd-machined[208030]: Machine qemu-14-instance-0000000e terminated.
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.716 239942 INFO nova.virt.libvirt.driver [-] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Instance destroyed successfully.#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.717 239942 DEBUG nova.objects.instance [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lazy-loading 'resources' on Instance uuid 961014c5-246e-4bd6-b7e8-86d49599034a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.741 239942 DEBUG nova.virt.libvirt.vif [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:53:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1283462290',display_name='tempest-VolumesSnapshotTestJSON-instance-1283462290',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1283462290',id=14,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOlEf2eEu1YgJYAKQ2o/udbNnsFo6lie3hHqiLJVuWBRQsmg3oD8c6k+QIGqtXaYo4wrW2uri+A3vSiljyf1HCUwxZlS+9pWO3GBxlWISzNrJl1vnewd8jiRr9epbAuQOQ==',key_name='tempest-keypair-1251668977',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:53:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f5ce1f57546045d891de80fbaff2512b',ramdisk_id='',reservation_id='r-07m0n3f4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-541584434',owner_user_name='tempest-VolumesSnapshotTestJSON-541584434-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:53:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3612e26aca645d895f083e0d58dfd69',uuid=961014c5-246e-4bd6-b7e8-86d49599034a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "address": "fa:16:3e:a9:94:43", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfbe40d-2b", "ovs_interfaceid": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.742 239942 DEBUG nova.network.os_vif_util [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Converting VIF {"id": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "address": "fa:16:3e:a9:94:43", "network": {"id": "45b5ded5-5fe4-488c-aa97-cad6ca9b361e", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-149842489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ce1f57546045d891de80fbaff2512b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dfbe40d-2b", "ovs_interfaceid": "0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.744 239942 DEBUG nova.network.os_vif_util [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a9:94:43,bridge_name='br-int',has_traffic_filtering=True,id=0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7,network=Network(45b5ded5-5fe4-488c-aa97-cad6ca9b361e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dfbe40d-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.745 239942 DEBUG os_vif [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:94:43,bridge_name='br-int',has_traffic_filtering=True,id=0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7,network=Network(45b5ded5-5fe4-488c-aa97-cad6ca9b361e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dfbe40d-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.749 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.749 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0dfbe40d-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.754 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.758 239942 DEBUG nova.compute.manager [req-39b20dae-64f8-454e-8e70-a76fb1490430 req-0071ec66-7175-459c-83b3-676a6a845430 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Received event network-vif-unplugged-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.759 239942 DEBUG oslo_concurrency.lockutils [req-39b20dae-64f8-454e-8e70-a76fb1490430 req-0071ec66-7175-459c-83b3-676a6a845430 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.759 239942 DEBUG oslo_concurrency.lockutils [req-39b20dae-64f8-454e-8e70-a76fb1490430 req-0071ec66-7175-459c-83b3-676a6a845430 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:09 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.759 239942 DEBUG oslo_concurrency.lockutils [req-39b20dae-64f8-454e-8e70-a76fb1490430 req-0071ec66-7175-459c-83b3-676a6a845430 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.759 239942 DEBUG nova.compute.manager [req-39b20dae-64f8-454e-8e70-a76fb1490430 req-0071ec66-7175-459c-83b3-676a6a845430 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] No waiting events found dispatching network-vif-unplugged-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.760 239942 DEBUG nova.compute.manager [req-39b20dae-64f8-454e-8e70-a76fb1490430 req-0071ec66-7175-459c-83b3-676a6a845430 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Received event network-vif-unplugged-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.760 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:54:09 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.764 239942 INFO os_vif [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:94:43,bridge_name='br-int',has_traffic_filtering=True,id=0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7,network=Network(45b5ded5-5fe4-488c-aa97-cad6ca9b361e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dfbe40d-2b')#033[00m
Jan 30 23:54:09 np0005603435 neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e[262052]: [NOTICE]   (262061) : haproxy version is 2.8.14-c23fe91
Jan 30 23:54:09 np0005603435 neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e[262052]: [NOTICE]   (262061) : path to executable is /usr/sbin/haproxy
Jan 30 23:54:09 np0005603435 neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e[262052]: [WARNING]  (262061) : Exiting Master process...
Jan 30 23:54:09 np0005603435 neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e[262052]: [ALERT]    (262061) : Current worker (262063) exited with code 143 (Terminated)
Jan 30 23:54:09 np0005603435 neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e[262052]: [WARNING]  (262061) : All workers exited. Exiting... (0)
Jan 30 23:54:09 np0005603435 systemd[1]: libpod-df82f62e165f3cf7e45e9d7e249e7b7e29991d91c98f8277dc70376dcd6c572c.scope: Deactivated successfully.
Jan 30 23:54:09 np0005603435 podman[262845]: 2026-01-31 04:54:09.886355894 +0000 UTC m=+0.176455113 container died df82f62e165f3cf7e45e9d7e249e7b7e29991d91c98f8277dc70376dcd6c572c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 30 23:54:09 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df82f62e165f3cf7e45e9d7e249e7b7e29991d91c98f8277dc70376dcd6c572c-userdata-shm.mount: Deactivated successfully.
Jan 30 23:54:09 np0005603435 systemd[1]: var-lib-containers-storage-overlay-e615e0469bcc86dbffa8adc08fdbad2aa41e74ea2043a19a1dd2fd2665a32374-merged.mount: Deactivated successfully.
Jan 30 23:54:09 np0005603435 podman[262845]: 2026-01-31 04:54:09.933793869 +0000 UTC m=+0.223893078 container cleanup df82f62e165f3cf7e45e9d7e249e7b7e29991d91c98f8277dc70376dcd6c572c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:54:09 np0005603435 systemd[1]: libpod-conmon-df82f62e165f3cf7e45e9d7e249e7b7e29991d91c98f8277dc70376dcd6c572c.scope: Deactivated successfully.
Jan 30 23:54:09 np0005603435 nova_compute[239938]: 2026-01-31 04:54:09.966 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:10 np0005603435 podman[262901]: 2026-01-31 04:54:10.015833863 +0000 UTC m=+0.056378795 container remove df82f62e165f3cf7e45e9d7e249e7b7e29991d91c98f8277dc70376dcd6c572c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 30 23:54:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:10.025 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c39ee9a3-ccff-4781-9760-430aa8d6a0e5]: (4, ('Sat Jan 31 04:54:09 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e (df82f62e165f3cf7e45e9d7e249e7b7e29991d91c98f8277dc70376dcd6c572c)\ndf82f62e165f3cf7e45e9d7e249e7b7e29991d91c98f8277dc70376dcd6c572c\nSat Jan 31 04:54:09 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e (df82f62e165f3cf7e45e9d7e249e7b7e29991d91c98f8277dc70376dcd6c572c)\ndf82f62e165f3cf7e45e9d7e249e7b7e29991d91c98f8277dc70376dcd6c572c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:10.027 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[30359099-bdf3-4920-a154-4d63e823de3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:10.029 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45b5ded5-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:10 np0005603435 kernel: tap45b5ded5-50: left promiscuous mode
Jan 30 23:54:10 np0005603435 nova_compute[239938]: 2026-01-31 04:54:10.030 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:10 np0005603435 nova_compute[239938]: 2026-01-31 04:54:10.041 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:10.044 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[58a74505-6f90-4d12-8cea-93c5ea5be168]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:10.058 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[e632753c-032d-417e-96cf-c5fbdf7a9783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:10.059 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[e546cd3a-1fda-4f70-9241-876cd5300af3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:10.071 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ff1a08-8012-459d-8277-a001aa1f4bc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427429, 'reachable_time': 29104, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262917, 'error': None, 'target': 'ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:10.073 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45b5ded5-5fe4-488c-aa97-cad6ca9b361e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:54:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:10.073 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[462ef04a-b570-45af-9663-404408727060]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:10 np0005603435 systemd[1]: run-netns-ovnmeta\x2d45b5ded5\x2d5fe4\x2d488c\x2daa97\x2dcad6ca9b361e.mount: Deactivated successfully.
Jan 30 23:54:10 np0005603435 nova_compute[239938]: 2026-01-31 04:54:10.128 239942 INFO nova.virt.libvirt.driver [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Deleting instance files /var/lib/nova/instances/961014c5-246e-4bd6-b7e8-86d49599034a_del#033[00m
Jan 30 23:54:10 np0005603435 nova_compute[239938]: 2026-01-31 04:54:10.129 239942 INFO nova.virt.libvirt.driver [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Deletion of /var/lib/nova/instances/961014c5-246e-4bd6-b7e8-86d49599034a_del complete#033[00m
Jan 30 23:54:10 np0005603435 nova_compute[239938]: 2026-01-31 04:54:10.211 239942 INFO nova.compute.manager [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:54:10 np0005603435 nova_compute[239938]: 2026-01-31 04:54:10.211 239942 DEBUG oslo.service.loopingcall [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:54:10 np0005603435 nova_compute[239938]: 2026-01-31 04:54:10.212 239942 DEBUG nova.compute.manager [-] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:54:10 np0005603435 nova_compute[239938]: 2026-01-31 04:54:10.212 239942 DEBUG nova.network.neutron [-] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:54:10 np0005603435 nova_compute[239938]: 2026-01-31 04:54:10.911 239942 DEBUG nova.network.neutron [-] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:54:10 np0005603435 nova_compute[239938]: 2026-01-31 04:54:10.932 239942 INFO nova.compute.manager [-] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Took 0.72 seconds to deallocate network for instance.#033[00m
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.068 239942 WARNING nova.volume.cinder [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Attachment 98d97da3-5aed-4def-acc0-065d9fd03398 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = 98d97da3-5aed-4def-acc0-065d9fd03398. (HTTP 404) (Request-ID: req-a3dd51dd-c6df-4e1f-8591-83a73c739a2c)#033[00m
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.069 239942 INFO nova.compute.manager [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Took 0.14 seconds to detach 1 volumes for instance.#033[00m
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.121 239942 DEBUG oslo_concurrency.lockutils [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.122 239942 DEBUG oslo_concurrency.lockutils [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.198 239942 DEBUG oslo_concurrency.processutils [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 255 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 15 KiB/s wr, 242 op/s
Jan 30 23:54:11 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 30 23:54:11 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 30 23:54:11 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 30 23:54:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:54:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1025598344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.837 239942 DEBUG oslo_concurrency.processutils [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.842 239942 DEBUG nova.compute.provider_tree [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.846 239942 DEBUG nova.compute.manager [req-340df5fe-dc40-469c-b7b0-866307d6f2b9 req-71742862-769c-46c8-81d3-ff94aea6245a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Received event network-vif-plugged-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.847 239942 DEBUG oslo_concurrency.lockutils [req-340df5fe-dc40-469c-b7b0-866307d6f2b9 req-71742862-769c-46c8-81d3-ff94aea6245a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.847 239942 DEBUG oslo_concurrency.lockutils [req-340df5fe-dc40-469c-b7b0-866307d6f2b9 req-71742862-769c-46c8-81d3-ff94aea6245a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.847 239942 DEBUG oslo_concurrency.lockutils [req-340df5fe-dc40-469c-b7b0-866307d6f2b9 req-71742862-769c-46c8-81d3-ff94aea6245a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.847 239942 DEBUG nova.compute.manager [req-340df5fe-dc40-469c-b7b0-866307d6f2b9 req-71742862-769c-46c8-81d3-ff94aea6245a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] No waiting events found dispatching network-vif-plugged-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.848 239942 WARNING nova.compute.manager [req-340df5fe-dc40-469c-b7b0-866307d6f2b9 req-71742862-769c-46c8-81d3-ff94aea6245a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Received unexpected event network-vif-plugged-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.848 239942 DEBUG nova.compute.manager [req-340df5fe-dc40-469c-b7b0-866307d6f2b9 req-71742862-769c-46c8-81d3-ff94aea6245a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Received event network-vif-deleted-0dfbe40d-2b0a-48b3-b8e5-b9a6d9ec7dc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.857 239942 DEBUG nova.scheduler.client.report [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.875 239942 DEBUG oslo_concurrency.lockutils [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:11 np0005603435 nova_compute[239938]: 2026-01-31 04:54:11.925 239942 INFO nova.scheduler.client.report [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Deleted allocations for instance 961014c5-246e-4bd6-b7e8-86d49599034a#033[00m
Jan 30 23:54:12 np0005603435 nova_compute[239938]: 2026-01-31 04:54:12.001 239942 DEBUG oslo_concurrency.lockutils [None req-2e30fc46-3a5f-4c84-aac5-e06fedd17808 d3612e26aca645d895f083e0d58dfd69 f5ce1f57546045d891de80fbaff2512b - - default default] Lock "961014c5-246e-4bd6-b7e8-86d49599034a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:12 np0005603435 nova_compute[239938]: 2026-01-31 04:54:12.121 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835237.1205482, 3565cd51-2733-4486-a756-d28b4f47377e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:54:12 np0005603435 nova_compute[239938]: 2026-01-31 04:54:12.121 239942 INFO nova.compute.manager [-] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:54:12 np0005603435 nova_compute[239938]: 2026-01-31 04:54:12.144 239942 DEBUG nova.compute.manager [None req-5878cec7-b0a8-4edb-8e96-5bef81297a42 - - - - - -] [instance: 3565cd51-2733-4486-a756-d28b4f47377e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:54:12 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:12Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c8:62:e7 10.100.0.4
Jan 30 23:54:12 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:12Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c8:62:e7 10.100.0.4
Jan 30 23:54:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:54:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Jan 30 23:54:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Jan 30 23:54:13 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Jan 30 23:54:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 229 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.5 MiB/s wr, 161 op/s
Jan 30 23:54:14 np0005603435 nova_compute[239938]: 2026-01-31 04:54:14.753 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:14 np0005603435 nova_compute[239938]: 2026-01-31 04:54:14.968 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Jan 30 23:54:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Jan 30 23:54:15 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Jan 30 23:54:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 261 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.1 MiB/s wr, 212 op/s
Jan 30 23:54:15 np0005603435 nova_compute[239938]: 2026-01-31 04:54:15.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:54:16 np0005603435 nova_compute[239938]: 2026-01-31 04:54:16.735 239942 DEBUG oslo_concurrency.lockutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "a7e679f6-843b-49b7-8455-d5ed363e1b37" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:16 np0005603435 nova_compute[239938]: 2026-01-31 04:54:16.735 239942 DEBUG oslo_concurrency.lockutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "a7e679f6-843b-49b7-8455-d5ed363e1b37" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:16 np0005603435 nova_compute[239938]: 2026-01-31 04:54:16.760 239942 DEBUG nova.compute.manager [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:54:16 np0005603435 nova_compute[239938]: 2026-01-31 04:54:16.830 239942 DEBUG oslo_concurrency.lockutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:16 np0005603435 nova_compute[239938]: 2026-01-31 04:54:16.831 239942 DEBUG oslo_concurrency.lockutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:16 np0005603435 nova_compute[239938]: 2026-01-31 04:54:16.840 239942 DEBUG nova.virt.hardware [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:54:16 np0005603435 nova_compute[239938]: 2026-01-31 04:54:16.841 239942 INFO nova.compute.claims [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:54:16 np0005603435 nova_compute[239938]: 2026-01-31 04:54:16.977 239942 DEBUG oslo_concurrency.processutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.912199300209697e-06 of space, bias 1.0, pg target 0.0023736597900629094 quantized to 32 (current 32)
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0027405545915982574 of space, bias 1.0, pg target 0.8221663774794772 quantized to 32 (current 32)
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.342924703896228e-07 of space, bias 1.0, pg target 0.00019028774111688683 quantized to 32 (current 32)
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665364834466949 of space, bias 1.0, pg target 0.19996094503400846 quantized to 32 (current 32)
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.863790975229003e-07 of space, bias 4.0, pg target 0.0008236549170274804 quantized to 16 (current 16)
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:54:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 317 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 11 MiB/s wr, 253 op/s
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:54:17.510312) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835257510385, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2576, "num_deletes": 514, "total_data_size": 3459558, "memory_usage": 3510736, "flush_reason": "Manual Compaction"}
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4078177082' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.585 239942 DEBUG oslo_concurrency.processutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.590 239942 DEBUG nova.compute.provider_tree [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835257592800, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3155892, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26426, "largest_seqno": 29001, "table_properties": {"data_size": 3144793, "index_size": 6697, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 27528, "raw_average_key_size": 20, "raw_value_size": 3120203, "raw_average_value_size": 2365, "num_data_blocks": 293, "num_entries": 1319, "num_filter_entries": 1319, "num_deletions": 514, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769835087, "oldest_key_time": 1769835087, "file_creation_time": 1769835257, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 82569 microseconds, and 7284 cpu microseconds.
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.609 239942 DEBUG nova.scheduler.client.report [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:54:17.592880) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3155892 bytes OK
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:54:17.592913) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:54:17.617087) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:54:17.617132) EVENT_LOG_v1 {"time_micros": 1769835257617122, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:54:17.617160) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3447564, prev total WAL file size 3447564, number of live WAL files 2.
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:54:17.618195) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3081KB)], [59(10MB)]
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835257618449, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 14024704, "oldest_snapshot_seqno": -1}
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.635 239942 DEBUG oslo_concurrency.lockutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.635 239942 DEBUG nova.compute.manager [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.696 239942 DEBUG nova.compute.manager [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.697 239942 DEBUG nova.network.neutron [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.732 239942 INFO nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.754 239942 DEBUG nova.compute.manager [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5761 keys, 9165965 bytes, temperature: kUnknown
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835257780835, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9165965, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9123625, "index_size": 26806, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 144013, "raw_average_key_size": 24, "raw_value_size": 9016164, "raw_average_value_size": 1565, "num_data_blocks": 1087, "num_entries": 5761, "num_filter_entries": 5761, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769835257, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:54:17.781512) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9165965 bytes
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:54:17.782868) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 86.3 rd, 56.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.4 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(7.3) write-amplify(2.9) OK, records in: 6780, records dropped: 1019 output_compression: NoCompression
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:54:17.782905) EVENT_LOG_v1 {"time_micros": 1769835257782884, "job": 32, "event": "compaction_finished", "compaction_time_micros": 162464, "compaction_time_cpu_micros": 29391, "output_level": 6, "num_output_files": 1, "total_output_size": 9165965, "num_input_records": 6780, "num_output_records": 5761, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835257784010, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835257785948, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:54:17.618068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:54:17.786213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:54:17.786253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:54:17.786258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:54:17.786263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:54:17 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:54:17.786267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.801 239942 INFO nova.virt.block_device [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Booting with volume f9e8fb71-b06e-4c8d-914d-ae02de4b66fb at /dev/vda#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.883 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.955 239942 DEBUG nova.policy [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e10f13b98624406985dec6a5dcc391c7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.966 239942 DEBUG os_brick.utils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.967 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.975 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.975 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b72e8b-e361-4a02-89cd-8cebf7d3ce2f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.977 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.981 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.982 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[bbae5bde-8840-4c4a-95b3-ad251f4a7856]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.983 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.990 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.990 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[f387aebb-93a7-43f1-93e8-795fbed3e672]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.991 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[14f74c9d-ea57-463c-bf46-e42378f92272]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:17 np0005603435 nova_compute[239938]: 2026-01-31 04:54:17.992 239942 DEBUG oslo_concurrency.processutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:54:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Jan 30 23:54:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Jan 30 23:54:18 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Jan 30 23:54:18 np0005603435 nova_compute[239938]: 2026-01-31 04:54:18.014 239942 DEBUG oslo_concurrency.processutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:18 np0005603435 nova_compute[239938]: 2026-01-31 04:54:18.020 239942 DEBUG os_brick.initiator.connectors.lightos [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:54:18 np0005603435 nova_compute[239938]: 2026-01-31 04:54:18.021 239942 DEBUG os_brick.initiator.connectors.lightos [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:54:18 np0005603435 nova_compute[239938]: 2026-01-31 04:54:18.021 239942 DEBUG os_brick.initiator.connectors.lightos [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:54:18 np0005603435 nova_compute[239938]: 2026-01-31 04:54:18.022 239942 DEBUG os_brick.utils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] <== get_connector_properties: return (56ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:54:18 np0005603435 nova_compute[239938]: 2026-01-31 04:54:18.023 239942 DEBUG nova.virt.block_device [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Updating existing volume attachment record: 038b6d59-60ce-43b0-95ac-974453fbd75f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:54:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:54:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3815426769' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:54:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:54:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3815426769' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:54:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:54:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2421694955' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:54:18 np0005603435 nova_compute[239938]: 2026-01-31 04:54:18.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:54:18 np0005603435 nova_compute[239938]: 2026-01-31 04:54:18.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:54:18 np0005603435 nova_compute[239938]: 2026-01-31 04:54:18.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:54:18 np0005603435 nova_compute[239938]: 2026-01-31 04:54:18.964 239942 DEBUG nova.network.neutron [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Successfully created port: 9bfb8d4f-c12b-4a91-950a-4519f14d6508 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.152 239942 DEBUG nova.compute.manager [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.154 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.154 239942 INFO nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Creating image(s)#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.154 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.155 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Ensure instance console log exists: /var/lib/nova/instances/a7e679f6-843b-49b7-8455-d5ed363e1b37/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.155 239942 DEBUG oslo_concurrency.lockutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.155 239942 DEBUG oslo_concurrency.lockutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.155 239942 DEBUG oslo_concurrency.lockutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 317 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 889 KiB/s rd, 11 MiB/s wr, 166 op/s
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.622 239942 DEBUG nova.network.neutron [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Successfully updated port: 9bfb8d4f-c12b-4a91-950a-4519f14d6508 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.636 239942 DEBUG oslo_concurrency.lockutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "refresh_cache-a7e679f6-843b-49b7-8455-d5ed363e1b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.637 239942 DEBUG oslo_concurrency.lockutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquired lock "refresh_cache-a7e679f6-843b-49b7-8455-d5ed363e1b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.637 239942 DEBUG nova.network.neutron [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.697 239942 DEBUG nova.compute.manager [req-0e51a6f1-0325-4022-99e9-63b15b0ed8b6 req-f65e9ca2-381e-4f8b-976d-c2cb4c00bb4c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Received event network-changed-9bfb8d4f-c12b-4a91-950a-4519f14d6508 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.698 239942 DEBUG nova.compute.manager [req-0e51a6f1-0325-4022-99e9-63b15b0ed8b6 req-f65e9ca2-381e-4f8b-976d-c2cb4c00bb4c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Refreshing instance network info cache due to event network-changed-9bfb8d4f-c12b-4a91-950a-4519f14d6508. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.699 239942 DEBUG oslo_concurrency.lockutils [req-0e51a6f1-0325-4022-99e9-63b15b0ed8b6 req-f65e9ca2-381e-4f8b-976d-c2cb4c00bb4c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-a7e679f6-843b-49b7-8455-d5ed363e1b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.766 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.884 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.904 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:54:19 np0005603435 nova_compute[239938]: 2026-01-31 04:54:19.970 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:20 np0005603435 nova_compute[239938]: 2026-01-31 04:54:20.110 239942 DEBUG nova.network.neutron [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:54:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Jan 30 23:54:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Jan 30 23:54:20 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Jan 30 23:54:20 np0005603435 nova_compute[239938]: 2026-01-31 04:54:20.972 239942 DEBUG nova.network.neutron [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Updating instance_info_cache with network_info: [{"id": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "address": "fa:16:3e:c0:7f:92", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bfb8d4f-c1", "ovs_interfaceid": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:54:20 np0005603435 nova_compute[239938]: 2026-01-31 04:54:20.998 239942 DEBUG oslo_concurrency.lockutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Releasing lock "refresh_cache-a7e679f6-843b-49b7-8455-d5ed363e1b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:54:20 np0005603435 nova_compute[239938]: 2026-01-31 04:54:20.999 239942 DEBUG nova.compute.manager [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Instance network_info: |[{"id": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "address": "fa:16:3e:c0:7f:92", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bfb8d4f-c1", "ovs_interfaceid": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:54:20 np0005603435 nova_compute[239938]: 2026-01-31 04:54:20.999 239942 DEBUG oslo_concurrency.lockutils [req-0e51a6f1-0325-4022-99e9-63b15b0ed8b6 req-f65e9ca2-381e-4f8b-976d-c2cb4c00bb4c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-a7e679f6-843b-49b7-8455-d5ed363e1b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:54:20 np0005603435 nova_compute[239938]: 2026-01-31 04:54:20.999 239942 DEBUG nova.network.neutron [req-0e51a6f1-0325-4022-99e9-63b15b0ed8b6 req-f65e9ca2-381e-4f8b-976d-c2cb4c00bb4c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Refreshing network info cache for port 9bfb8d4f-c12b-4a91-950a-4519f14d6508 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.002 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Start _get_guest_xml network_info=[{"id": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "address": "fa:16:3e:c0:7f:92", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bfb8d4f-c1", "ovs_interfaceid": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': True, 'attachment_id': '038b6d59-60ce-43b0-95ac-974453fbd75f', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f9e8fb71-b06e-4c8d-914d-ae02de4b66fb', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f9e8fb71-b06e-4c8d-914d-ae02de4b66fb', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a7e679f6-843b-49b7-8455-d5ed363e1b37', 'attached_at': '', 'detached_at': '', 'volume_id': 'f9e8fb71-b06e-4c8d-914d-ae02de4b66fb', 'serial': 'f9e8fb71-b06e-4c8d-914d-ae02de4b66fb'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.008 239942 WARNING nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.015 239942 DEBUG nova.virt.libvirt.host [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.015 239942 DEBUG nova.virt.libvirt.host [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.020 239942 DEBUG nova.virt.libvirt.host [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.021 239942 DEBUG nova.virt.libvirt.host [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.022 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.022 239942 DEBUG nova.virt.hardware [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.022 239942 DEBUG nova.virt.hardware [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.022 239942 DEBUG nova.virt.hardware [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.023 239942 DEBUG nova.virt.hardware [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.023 239942 DEBUG nova.virt.hardware [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.023 239942 DEBUG nova.virt.hardware [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.023 239942 DEBUG nova.virt.hardware [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.024 239942 DEBUG nova.virt.hardware [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.024 239942 DEBUG nova.virt.hardware [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.024 239942 DEBUG nova.virt.hardware [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.024 239942 DEBUG nova.virt.hardware [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.049 239942 DEBUG nova.storage.rbd_utils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image a7e679f6-843b-49b7-8455-d5ed363e1b37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.053 239942 DEBUG oslo_concurrency.processutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 317 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 909 KiB/s rd, 11 MiB/s wr, 171 op/s
Jan 30 23:54:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:54:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/218883894' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:54:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:54:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/218883894' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:54:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:54:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/947710717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.613 239942 DEBUG oslo_concurrency.processutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.616 239942 DEBUG oslo_concurrency.lockutils [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.617 239942 DEBUG oslo_concurrency.lockutils [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.617 239942 DEBUG oslo_concurrency.lockutils [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.617 239942 DEBUG oslo_concurrency.lockutils [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.617 239942 DEBUG oslo_concurrency.lockutils [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.619 239942 INFO nova.compute.manager [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Terminating instance#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.620 239942 DEBUG nova.compute.manager [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.639 239942 DEBUG nova.virt.libvirt.vif [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:54:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-834746693',display_name='tempest-TestVolumeBootPattern-volume-backed-server-834746693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-834746693',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKAFEDOLl5nmr38YCtZKugulPS1xzLW2VjEPQiweluSJcGVnuwSvDq1lDFjz/tr8fZOa+Jq6UErMuT+akiSqjrhbBgKwkIqglp//7KbJDiOMQLMS6MMZFzd797gJsRRj3Q==',key_name='tempest-keypair-112238935',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-sbezkyal',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:54:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e10f13b98624406985dec6a5dcc391c7',uuid=a7e679f6-843b-49b7-8455-d5ed363e1b37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "address": "fa:16:3e:c0:7f:92", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bfb8d4f-c1", "ovs_interfaceid": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.639 239942 DEBUG nova.network.os_vif_util [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "address": "fa:16:3e:c0:7f:92", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bfb8d4f-c1", "ovs_interfaceid": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.640 239942 DEBUG nova.network.os_vif_util [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:7f:92,bridge_name='br-int',has_traffic_filtering=True,id=9bfb8d4f-c12b-4a91-950a-4519f14d6508,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bfb8d4f-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.641 239942 DEBUG nova.objects.instance [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lazy-loading 'pci_devices' on Instance uuid a7e679f6-843b-49b7-8455-d5ed363e1b37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.653 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  <uuid>a7e679f6-843b-49b7-8455-d5ed363e1b37</uuid>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  <name>instance-00000011</name>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-834746693</nova:name>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:54:21</nova:creationTime>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:54:21 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:        <nova:user uuid="e10f13b98624406985dec6a5dcc391c7">tempest-TestVolumeBootPattern-1782423025-project-member</nova:user>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:        <nova:project uuid="e332802dd6cf49c59f8ed38e70addb0e">tempest-TestVolumeBootPattern-1782423025</nova:project>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:        <nova:port uuid="9bfb8d4f-c12b-4a91-950a-4519f14d6508">
Jan 30 23:54:21 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <entry name="serial">a7e679f6-843b-49b7-8455-d5ed363e1b37</entry>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <entry name="uuid">a7e679f6-843b-49b7-8455-d5ed363e1b37</entry>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/a7e679f6-843b-49b7-8455-d5ed363e1b37_disk.config">
Jan 30 23:54:21 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:54:21 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:54:21 np0005603435 kernel: tapa032608c-fd (unregistering): left promiscuous mode
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="volumes/volume-f9e8fb71-b06e-4c8d-914d-ae02de4b66fb">
Jan 30 23:54:21 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:54:21 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <serial>f9e8fb71-b06e-4c8d-914d-ae02de4b66fb</serial>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:c0:7f:92"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <target dev="tap9bfb8d4f-c1"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/a7e679f6-843b-49b7-8455-d5ed363e1b37/console.log" append="off"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:54:21 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:54:21 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:54:21 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:54:21 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.654 239942 DEBUG nova.compute.manager [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Preparing to wait for external event network-vif-plugged-9bfb8d4f-c12b-4a91-950a-4519f14d6508 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.655 239942 DEBUG oslo_concurrency.lockutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.655 239942 DEBUG oslo_concurrency.lockutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.655 239942 DEBUG oslo_concurrency.lockutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.656 239942 DEBUG nova.virt.libvirt.vif [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:54:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-834746693',display_name='tempest-TestVolumeBootPattern-volume-backed-server-834746693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-834746693',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKAFEDOLl5nmr38YCtZKugulPS1xzLW2VjEPQiweluSJcGVnuwSvDq1lDFjz/tr8fZOa+Jq6UErMuT+akiSqjrhbBgKwkIqglp//7KbJDiOMQLMS6MMZFzd797gJsRRj3Q==',key_name='tempest-keypair-112238935',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-sbezkyal',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:54:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e10f13b98624406985dec6a5dcc391c7',uuid=a7e679f6-843b-49b7-8455-d5ed363e1b37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "address": "fa:16:3e:c0:7f:92", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bfb8d4f-c1", "ovs_interfaceid": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.656 239942 DEBUG nova.network.os_vif_util [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "address": "fa:16:3e:c0:7f:92", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bfb8d4f-c1", "ovs_interfaceid": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.657 239942 DEBUG nova.network.os_vif_util [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:7f:92,bridge_name='br-int',has_traffic_filtering=True,id=9bfb8d4f-c12b-4a91-950a-4519f14d6508,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bfb8d4f-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.657 239942 DEBUG os_vif [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:7f:92,bridge_name='br-int',has_traffic_filtering=True,id=9bfb8d4f-c12b-4a91-950a-4519f14d6508,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bfb8d4f-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.658 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.659 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.659 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:54:21 np0005603435 NetworkManager[49097]: <info>  [1769835261.6660] device (tapa032608c-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.667 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.668 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bfb8d4f-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.668 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9bfb8d4f-c1, col_values=(('external_ids', {'iface-id': '9bfb8d4f-c12b-4a91-950a-4519f14d6508', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:7f:92', 'vm-uuid': 'a7e679f6-843b-49b7-8455-d5ed363e1b37'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.669 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.672 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:21 np0005603435 NetworkManager[49097]: <info>  [1769835261.6738] manager: (tap9bfb8d4f-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.675 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:54:21 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:21Z|00171|binding|INFO|Releasing lport a032608c-fd47-442f-a668-0d122437d8c8 from this chassis (sb_readonly=0)
Jan 30 23:54:21 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:21Z|00172|binding|INFO|Setting lport a032608c-fd47-442f-a668-0d122437d8c8 down in Southbound
Jan 30 23:54:21 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:21Z|00173|binding|INFO|Removing iface tapa032608c-fd ovn-installed in OVS
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.680 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.684 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.686 239942 INFO os_vif [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:7f:92,bridge_name='br-int',has_traffic_filtering=True,id=9bfb8d4f-c12b-4a91-950a-4519f14d6508,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bfb8d4f-c1')#033[00m
Jan 30 23:54:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:21.689 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:62:e7 10.100.0.4'], port_security=['fa:16:3e:c8:62:e7 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2437d98a-1c5d-4451-bf32-cb4bb2d82a82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a10d9666-b672-4619-83b7-22dc781b5b5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b39f0e168b54a4b8f976894d21361e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ff571068-2221-49e0-84fe-8c4b85bf5ac6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21f14c68-4084-427c-b05e-592b1db029c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=a032608c-fd47-442f-a668-0d122437d8c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:54:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:21.690 156017 INFO neutron.agent.ovn.metadata.agent [-] Port a032608c-fd47-442f-a668-0d122437d8c8 in datapath a10d9666-b672-4619-83b7-22dc781b5b5b unbound from our chassis#033[00m
Jan 30 23:54:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:21.692 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a10d9666-b672-4619-83b7-22dc781b5b5b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:54:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:21.693 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[0fb0e888-a06a-41db-b729-76674d2707ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:21.694 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b namespace which is not needed anymore#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.694 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:21 np0005603435 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Jan 30 23:54:21 np0005603435 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 14.070s CPU time.
Jan 30 23:54:21 np0005603435 systemd-machined[208030]: Machine qemu-16-instance-00000010 terminated.
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.753 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.754 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.754 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No VIF found with MAC fa:16:3e:c0:7f:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.755 239942 INFO nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Using config drive#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.778 239942 DEBUG nova.storage.rbd_utils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image a7e679f6-843b-49b7-8455-d5ed363e1b37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:54:21 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[262775]: [NOTICE]   (262779) : haproxy version is 2.8.14-c23fe91
Jan 30 23:54:21 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[262775]: [NOTICE]   (262779) : path to executable is /usr/sbin/haproxy
Jan 30 23:54:21 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[262775]: [WARNING]  (262779) : Exiting Master process...
Jan 30 23:54:21 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[262775]: [ALERT]    (262779) : Current worker (262781) exited with code 143 (Terminated)
Jan 30 23:54:21 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[262775]: [WARNING]  (262779) : All workers exited. Exiting... (0)
Jan 30 23:54:21 np0005603435 systemd[1]: libpod-5d4f24d04a2b16684ea9d74927251ff605544445addadefc6e49abdf68865e34.scope: Deactivated successfully.
Jan 30 23:54:21 np0005603435 podman[263050]: 2026-01-31 04:54:21.828090635 +0000 UTC m=+0.046009671 container died 5d4f24d04a2b16684ea9d74927251ff605544445addadefc6e49abdf68865e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.853 239942 INFO nova.virt.libvirt.driver [-] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Instance destroyed successfully.#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.855 239942 DEBUG nova.objects.instance [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lazy-loading 'resources' on Instance uuid 2437d98a-1c5d-4451-bf32-cb4bb2d82a82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:54:21 np0005603435 systemd[1]: var-lib-containers-storage-overlay-504ba492119f03416ca09172c6e477e20842594499962ad5933b1b6a722ab49f-merged.mount: Deactivated successfully.
Jan 30 23:54:21 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d4f24d04a2b16684ea9d74927251ff605544445addadefc6e49abdf68865e34-userdata-shm.mount: Deactivated successfully.
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.877 239942 DEBUG nova.virt.libvirt.vif [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-177511905',display_name='tempest-TransferEncryptedVolumeTest-server-177511905',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-177511905',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB828SO4KCiS/c6FYV17F5UX+BLYIRAc4CyTZA4fXDNG/eieZI8ChuIejzpTuF2CfgKMQEbMYMZVWf9xnEOSXNVsZsXIi11a3wsxGw0mmNb26j9vmggnToYyQthSze7emg==',key_name='tempest-TransferEncryptedVolumeTest-938095670',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:54:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9b39f0e168b54a4b8f976894d21361e6',ramdisk_id='',reservation_id='r-sq77q9js',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-483286292',owner_user_name='tempest-TransferEncryptedVolumeTest-483286292-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:54:02Z,user_data=None,user_id='27f1a6fb472c4c5fa2286d0fa48dca34',uuid=2437d98a-1c5d-4451-bf32-cb4bb2d82a82,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a032608c-fd47-442f-a668-0d122437d8c8", "address": "fa:16:3e:c8:62:e7", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa032608c-fd", "ovs_interfaceid": "a032608c-fd47-442f-a668-0d122437d8c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:54:21 np0005603435 podman[263050]: 2026-01-31 04:54:21.878920473 +0000 UTC m=+0.096839539 container cleanup 5d4f24d04a2b16684ea9d74927251ff605544445addadefc6e49abdf68865e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.878 239942 DEBUG nova.network.os_vif_util [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converting VIF {"id": "a032608c-fd47-442f-a668-0d122437d8c8", "address": "fa:16:3e:c8:62:e7", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa032608c-fd", "ovs_interfaceid": "a032608c-fd47-442f-a668-0d122437d8c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.879 239942 DEBUG nova.network.os_vif_util [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c8:62:e7,bridge_name='br-int',has_traffic_filtering=True,id=a032608c-fd47-442f-a668-0d122437d8c8,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa032608c-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.880 239942 DEBUG os_vif [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:62:e7,bridge_name='br-int',has_traffic_filtering=True,id=a032608c-fd47-442f-a668-0d122437d8c8,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa032608c-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.882 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.883 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa032608c-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.884 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:21 np0005603435 systemd[1]: libpod-conmon-5d4f24d04a2b16684ea9d74927251ff605544445addadefc6e49abdf68865e34.scope: Deactivated successfully.
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.888 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.891 239942 INFO os_vif [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:62:e7,bridge_name='br-int',has_traffic_filtering=True,id=a032608c-fd47-442f-a668-0d122437d8c8,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa032608c-fd')#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.923 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.924 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.924 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.925 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.926 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:21 np0005603435 podman[263093]: 2026-01-31 04:54:21.938395983 +0000 UTC m=+0.041246124 container remove 5d4f24d04a2b16684ea9d74927251ff605544445addadefc6e49abdf68865e34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 30 23:54:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:21.943 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b9bef89b-5c83-4887-a956-d6d0d6585d6c]: (4, ('Sat Jan 31 04:54:21 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b (5d4f24d04a2b16684ea9d74927251ff605544445addadefc6e49abdf68865e34)\n5d4f24d04a2b16684ea9d74927251ff605544445addadefc6e49abdf68865e34\nSat Jan 31 04:54:21 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b (5d4f24d04a2b16684ea9d74927251ff605544445addadefc6e49abdf68865e34)\n5d4f24d04a2b16684ea9d74927251ff605544445addadefc6e49abdf68865e34\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:21.945 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f2f767-f919-4674-b9b9-0d2398f8b2da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:21.946 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa10d9666-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.949 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:21 np0005603435 kernel: tapa10d9666-b0: left promiscuous mode
Jan 30 23:54:21 np0005603435 nova_compute[239938]: 2026-01-31 04:54:21.960 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:21.963 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[3095136d-918c-40af-988c-b8e9d8772171]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:21.979 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[425741d8-3249-45df-bfa2-2f5d298863f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:21.981 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[3eede6c6-8824-4b4a-85b7-433ef5ebe15a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:21.996 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[652b4709-3bbe-4390-8906-1cb27455cbab]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429366, 'reachable_time': 33411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263131, 'error': None, 'target': 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:22 np0005603435 systemd[1]: run-netns-ovnmeta\x2da10d9666\x2db672\x2d4619\x2d83b7\x2d22dc781b5b5b.mount: Deactivated successfully.
Jan 30 23:54:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:22.002 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:54:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:22.002 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[9ecd231e-02cf-4bda-b2f1-da3a490d9550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.074 239942 INFO nova.virt.libvirt.driver [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Deleting instance files /var/lib/nova/instances/2437d98a-1c5d-4451-bf32-cb4bb2d82a82_del#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.076 239942 INFO nova.virt.libvirt.driver [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Deletion of /var/lib/nova/instances/2437d98a-1c5d-4451-bf32-cb4bb2d82a82_del complete#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.197 239942 INFO nova.compute.manager [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Took 0.58 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.198 239942 DEBUG oslo.service.loopingcall [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.198 239942 DEBUG nova.compute.manager [-] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.198 239942 DEBUG nova.network.neutron [-] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:54:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:54:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454768077' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.422 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.489 239942 DEBUG nova.compute.manager [req-30440a21-1bfe-479e-8a33-afe89c33bbba req-bf95a865-4bed-4562-90dd-e1f92dca52c7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Received event network-vif-unplugged-a032608c-fd47-442f-a668-0d122437d8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.490 239942 DEBUG oslo_concurrency.lockutils [req-30440a21-1bfe-479e-8a33-afe89c33bbba req-bf95a865-4bed-4562-90dd-e1f92dca52c7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.490 239942 DEBUG oslo_concurrency.lockutils [req-30440a21-1bfe-479e-8a33-afe89c33bbba req-bf95a865-4bed-4562-90dd-e1f92dca52c7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.491 239942 DEBUG oslo_concurrency.lockutils [req-30440a21-1bfe-479e-8a33-afe89c33bbba req-bf95a865-4bed-4562-90dd-e1f92dca52c7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.491 239942 DEBUG nova.compute.manager [req-30440a21-1bfe-479e-8a33-afe89c33bbba req-bf95a865-4bed-4562-90dd-e1f92dca52c7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] No waiting events found dispatching network-vif-unplugged-a032608c-fd47-442f-a668-0d122437d8c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.491 239942 DEBUG nova.compute.manager [req-30440a21-1bfe-479e-8a33-afe89c33bbba req-bf95a865-4bed-4562-90dd-e1f92dca52c7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Received event network-vif-unplugged-a032608c-fd47-442f-a668-0d122437d8c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.493 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.493 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.533 239942 INFO nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Creating config drive at /var/lib/nova/instances/a7e679f6-843b-49b7-8455-d5ed363e1b37/disk.config#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.541 239942 DEBUG oslo_concurrency.processutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a7e679f6-843b-49b7-8455-d5ed363e1b37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzsupu6ru execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:22.596 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:54:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:22.597 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.597 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.634 239942 DEBUG nova.network.neutron [req-0e51a6f1-0325-4022-99e9-63b15b0ed8b6 req-f65e9ca2-381e-4f8b-976d-c2cb4c00bb4c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Updated VIF entry in instance network info cache for port 9bfb8d4f-c12b-4a91-950a-4519f14d6508. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.636 239942 DEBUG nova.network.neutron [req-0e51a6f1-0325-4022-99e9-63b15b0ed8b6 req-f65e9ca2-381e-4f8b-976d-c2cb4c00bb4c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Updating instance_info_cache with network_info: [{"id": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "address": "fa:16:3e:c0:7f:92", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bfb8d4f-c1", "ovs_interfaceid": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.651 239942 DEBUG oslo_concurrency.lockutils [req-0e51a6f1-0325-4022-99e9-63b15b0ed8b6 req-f65e9ca2-381e-4f8b-976d-c2cb4c00bb4c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-a7e679f6-843b-49b7-8455-d5ed363e1b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.659 239942 DEBUG oslo_concurrency.processutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a7e679f6-843b-49b7-8455-d5ed363e1b37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzsupu6ru" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.691 239942 DEBUG nova.storage.rbd_utils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image a7e679f6-843b-49b7-8455-d5ed363e1b37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.695 239942 DEBUG oslo_concurrency.processutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a7e679f6-843b-49b7-8455-d5ed363e1b37/disk.config a7e679f6-843b-49b7-8455-d5ed363e1b37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.783 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.785 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4423MB free_disk=59.98780508711934GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.786 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.786 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.853 239942 DEBUG oslo_concurrency.processutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a7e679f6-843b-49b7-8455-d5ed363e1b37/disk.config a7e679f6-843b-49b7-8455-d5ed363e1b37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.854 239942 INFO nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Deleting local config drive /var/lib/nova/instances/a7e679f6-843b-49b7-8455-d5ed363e1b37/disk.config because it was imported into RBD.#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.862 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance 2437d98a-1c5d-4451-bf32-cb4bb2d82a82 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.863 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance a7e679f6-843b-49b7-8455-d5ed363e1b37 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.864 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.864 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:54:22 np0005603435 kernel: tap9bfb8d4f-c1: entered promiscuous mode
Jan 30 23:54:22 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:22Z|00174|binding|INFO|Claiming lport 9bfb8d4f-c12b-4a91-950a-4519f14d6508 for this chassis.
Jan 30 23:54:22 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:22Z|00175|binding|INFO|9bfb8d4f-c12b-4a91-950a-4519f14d6508: Claiming fa:16:3e:c0:7f:92 10.100.0.5
Jan 30 23:54:22 np0005603435 NetworkManager[49097]: <info>  [1769835262.9067] manager: (tap9bfb8d4f-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.906 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:22 np0005603435 systemd-udevd[263014]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:54:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:22.915 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:7f:92 10.100.0.5'], port_security=['fa:16:3e:c0:7f:92 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a7e679f6-843b-49b7-8455-d5ed363e1b37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2f8cd9ed-4d8b-4b1c-bbb9-b9d75bc8e46f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02525b3d-5afa-441f-ab06-d5abe31dc4af, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=9bfb8d4f-c12b-4a91-950a-4519f14d6508) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:54:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:22.917 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 9bfb8d4f-c12b-4a91-950a-4519f14d6508 in datapath 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 bound to our chassis#033[00m
Jan 30 23:54:22 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:22Z|00176|binding|INFO|Setting lport 9bfb8d4f-c12b-4a91-950a-4519f14d6508 ovn-installed in OVS
Jan 30 23:54:22 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:22Z|00177|binding|INFO|Setting lport 9bfb8d4f-c12b-4a91-950a-4519f14d6508 up in Southbound
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.921 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:22.922 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3#033[00m
Jan 30 23:54:22 np0005603435 NetworkManager[49097]: <info>  [1769835262.9268] device (tap9bfb8d4f-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:54:22 np0005603435 NetworkManager[49097]: <info>  [1769835262.9278] device (tap9bfb8d4f-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:54:22 np0005603435 nova_compute[239938]: 2026-01-31 04:54:22.935 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:22 np0005603435 systemd-machined[208030]: New machine qemu-17-instance-00000011.
Jan 30 23:54:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:22.942 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[1691c397-0931-413c-beb2-ed9fc1c31bc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:22.943 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5b0cf2db-21 in ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:54:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:22.945 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5b0cf2db-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:54:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:22.945 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[57890c9f-067a-4de2-b433-971cfc1f059c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:22.946 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ff0551-3342-4440-998e-2a4c93963bf2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:22 np0005603435 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Jan 30 23:54:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:22.957 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[04ff6b4e-a7fb-493c-8fdc-e99f3ca491be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:22.980 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[3bfbbc51-ac07-4122-9da6-9561f85686af]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.008 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[e58d4ede-984f-4c1d-9f0b-77d94eea4d8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.014 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6606bfcb-f86c-4621-811f-e12cadbd37de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:23 np0005603435 NetworkManager[49097]: <info>  [1769835263.0164] manager: (tap5b0cf2db-20): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.054 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[642a92a7-1d60-454f-9885-b6060ad01f4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.058 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a90fe8-b1f2-410c-a236-1bb2de76e8e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:23 np0005603435 NetworkManager[49097]: <info>  [1769835263.0788] device (tap5b0cf2db-20): carrier: link connected
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.083 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[5570d9ac-925a-473d-a304-1486d81428c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.095 239942 DEBUG nova.network.neutron [-] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.097 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e55d91-6e33-4467-8a77-4982200380b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b0cf2db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:f7:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431748, 'reachable_time': 30176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263258, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.112 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[daad1e63-66e5-4890-a2e9-876f441e11b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:f719'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431748, 'tstamp': 431748}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263259, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.124 239942 INFO nova.compute.manager [-] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Took 0.93 seconds to deallocate network for instance.#033[00m
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.124 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[63d774f5-e708-46a7-a0d7-a59ba5f6e3c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b0cf2db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:f7:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431748, 'reachable_time': 30176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263260, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.150 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6560db4b-eb23-418c-8c67-d5ff3eba4038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.189 239942 DEBUG nova.compute.manager [req-a14783c7-cdd5-441d-a4ee-77c084fc4693 req-5c35f520-04b2-4814-a9a4-c779af732a30 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Received event network-vif-plugged-9bfb8d4f-c12b-4a91-950a-4519f14d6508 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.190 239942 DEBUG oslo_concurrency.lockutils [req-a14783c7-cdd5-441d-a4ee-77c084fc4693 req-5c35f520-04b2-4814-a9a4-c779af732a30 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.190 239942 DEBUG oslo_concurrency.lockutils [req-a14783c7-cdd5-441d-a4ee-77c084fc4693 req-5c35f520-04b2-4814-a9a4-c779af732a30 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.191 239942 DEBUG oslo_concurrency.lockutils [req-a14783c7-cdd5-441d-a4ee-77c084fc4693 req-5c35f520-04b2-4814-a9a4-c779af732a30 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.191 239942 DEBUG nova.compute.manager [req-a14783c7-cdd5-441d-a4ee-77c084fc4693 req-5c35f520-04b2-4814-a9a4-c779af732a30 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Processing event network-vif-plugged-9bfb8d4f-c12b-4a91-950a-4519f14d6508 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.206 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[38bfe0f0-a8a6-4874-9d8a-8bbde4c3bdeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.207 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b0cf2db-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.208 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.208 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b0cf2db-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.209 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:23 np0005603435 NetworkManager[49097]: <info>  [1769835263.2108] manager: (tap5b0cf2db-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Jan 30 23:54:23 np0005603435 kernel: tap5b0cf2db-20: entered promiscuous mode
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.212 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.213 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b0cf2db-20, col_values=(('external_ids', {'iface-id': '07e657c3-16d2-4095-9f39-32a275cb472e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.214 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:23 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:23Z|00178|binding|INFO|Releasing lport 07e657c3-16d2-4095-9f39-32a275cb472e from this chassis (sb_readonly=0)
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.226 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.227 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.228 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[16ec52e3-0fe1-45f0-9f20-7dc063604c9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.229 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.pid.haproxy
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.229 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'env', 'PROCESS_TAG=haproxy-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.281 239942 INFO nova.compute.manager [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Took 0.16 seconds to detach 1 volumes for instance.#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.348 239942 DEBUG oslo_concurrency.lockutils [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 317 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 171 KiB/s rd, 4.8 MiB/s wr, 123 op/s
Jan 30 23:54:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:54:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4192151379' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.490 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.492 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835263.4916403, a7e679f6-843b-49b7-8455-d5ed363e1b37 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.493 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] VM Started (Lifecycle Event)#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.495 239942 DEBUG nova.compute.manager [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.501 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.503 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.509 239942 INFO nova.virt.libvirt.driver [-] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Instance spawned successfully.#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.510 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.512 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.515 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.521 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.537 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.538 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.538 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.539 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.539 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.540 239942 DEBUG nova.virt.libvirt.driver [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.545 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.545 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835263.493317, a7e679f6-843b-49b7-8455-d5ed363e1b37 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.546 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.547 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.548 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.549 239942 DEBUG oslo_concurrency.lockutils [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.572 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.580 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835263.499476, a7e679f6-843b-49b7-8455-d5ed363e1b37 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.580 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.597 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:54:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:23.599 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.600 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.603 239942 INFO nova.compute.manager [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Took 4.45 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.604 239942 DEBUG nova.compute.manager [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:54:23 np0005603435 podman[263336]: 2026-01-31 04:54:23.612778551 +0000 UTC m=+0.058770724 container create 692581372fcefb6e691a1cd9aebc58c7eab6902ae8793254777362fd43afb80b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.615 239942 DEBUG oslo_concurrency.processutils [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.633 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:54:23 np0005603435 systemd[1]: Started libpod-conmon-692581372fcefb6e691a1cd9aebc58c7eab6902ae8793254777362fd43afb80b.scope.
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.676 239942 INFO nova.compute.manager [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Took 6.86 seconds to build instance.#033[00m
Jan 30 23:54:23 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:54:23 np0005603435 podman[263336]: 2026-01-31 04:54:23.586284241 +0000 UTC m=+0.032276464 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:54:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7c147f6ba0a2a7a163f41364eabddbee9802aa731d2110994f6c996f82bb35/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:54:23 np0005603435 podman[263336]: 2026-01-31 04:54:23.6954269 +0000 UTC m=+0.141419073 container init 692581372fcefb6e691a1cd9aebc58c7eab6902ae8793254777362fd43afb80b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:54:23 np0005603435 nova_compute[239938]: 2026-01-31 04:54:23.698 239942 DEBUG oslo_concurrency.lockutils [None req-8b679f43-f288-4e30-9d25-2ad7219460bc e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "a7e679f6-843b-49b7-8455-d5ed363e1b37" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:23 np0005603435 podman[263336]: 2026-01-31 04:54:23.701392357 +0000 UTC m=+0.147384530 container start 692581372fcefb6e691a1cd9aebc58c7eab6902ae8793254777362fd43afb80b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 30 23:54:23 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[263353]: [NOTICE]   (263357) : New worker (263360) forked
Jan 30 23:54:23 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[263353]: [NOTICE]   (263357) : Loading success.
Jan 30 23:54:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:54:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1289701339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.172 239942 DEBUG oslo_concurrency.processutils [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.179 239942 DEBUG nova.compute.provider_tree [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.211 239942 DEBUG nova.scheduler.client.report [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.235 239942 DEBUG oslo_concurrency.lockutils [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.266 239942 INFO nova.scheduler.client.report [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Deleted allocations for instance 2437d98a-1c5d-4451-bf32-cb4bb2d82a82#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.368 239942 DEBUG oslo_concurrency.lockutils [None req-97567c85-a8a8-4141-9e4d-c9b2bbb31f55 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.548 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.549 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.549 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:54:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Jan 30 23:54:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.592 239942 DEBUG nova.compute.manager [req-1812d582-f3c4-472c-b71d-f95a238443e2 req-b2b6f7fb-7dff-4958-8ab7-e2158ff7a7b3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Received event network-vif-plugged-a032608c-fd47-442f-a668-0d122437d8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.593 239942 DEBUG oslo_concurrency.lockutils [req-1812d582-f3c4-472c-b71d-f95a238443e2 req-b2b6f7fb-7dff-4958-8ab7-e2158ff7a7b3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.594 239942 DEBUG oslo_concurrency.lockutils [req-1812d582-f3c4-472c-b71d-f95a238443e2 req-b2b6f7fb-7dff-4958-8ab7-e2158ff7a7b3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:24 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.595 239942 DEBUG oslo_concurrency.lockutils [req-1812d582-f3c4-472c-b71d-f95a238443e2 req-b2b6f7fb-7dff-4958-8ab7-e2158ff7a7b3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2437d98a-1c5d-4451-bf32-cb4bb2d82a82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.596 239942 DEBUG nova.compute.manager [req-1812d582-f3c4-472c-b71d-f95a238443e2 req-b2b6f7fb-7dff-4958-8ab7-e2158ff7a7b3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] No waiting events found dispatching network-vif-plugged-a032608c-fd47-442f-a668-0d122437d8c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.598 239942 WARNING nova.compute.manager [req-1812d582-f3c4-472c-b71d-f95a238443e2 req-b2b6f7fb-7dff-4958-8ab7-e2158ff7a7b3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Received unexpected event network-vif-plugged-a032608c-fd47-442f-a668-0d122437d8c8 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.598 239942 DEBUG nova.compute.manager [req-1812d582-f3c4-472c-b71d-f95a238443e2 req-b2b6f7fb-7dff-4958-8ab7-e2158ff7a7b3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Received event network-vif-deleted-a032608c-fd47-442f-a668-0d122437d8c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.715 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835249.7139556, 961014c5-246e-4bd6-b7e8-86d49599034a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.716 239942 INFO nova.compute.manager [-] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.735 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "refresh_cache-a7e679f6-843b-49b7-8455-d5ed363e1b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.736 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquired lock "refresh_cache-a7e679f6-843b-49b7-8455-d5ed363e1b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.737 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.737 239942 DEBUG nova.objects.instance [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a7e679f6-843b-49b7-8455-d5ed363e1b37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.741 239942 DEBUG nova.compute.manager [None req-0f2ef351-eb14-42e7-9424-e1bb9718a698 - - - - - -] [instance: 961014c5-246e-4bd6-b7e8-86d49599034a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:54:24 np0005603435 nova_compute[239938]: 2026-01-31 04:54:24.972 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:25 np0005603435 nova_compute[239938]: 2026-01-31 04:54:25.315 239942 DEBUG nova.compute.manager [req-83a11144-0fa8-4b5d-aa02-78b7d86bb22f req-5e016f60-adea-48da-bcc9-f3ad5b09cbc0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Received event network-vif-plugged-9bfb8d4f-c12b-4a91-950a-4519f14d6508 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:25 np0005603435 nova_compute[239938]: 2026-01-31 04:54:25.316 239942 DEBUG oslo_concurrency.lockutils [req-83a11144-0fa8-4b5d-aa02-78b7d86bb22f req-5e016f60-adea-48da-bcc9-f3ad5b09cbc0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:25 np0005603435 nova_compute[239938]: 2026-01-31 04:54:25.317 239942 DEBUG oslo_concurrency.lockutils [req-83a11144-0fa8-4b5d-aa02-78b7d86bb22f req-5e016f60-adea-48da-bcc9-f3ad5b09cbc0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:25 np0005603435 nova_compute[239938]: 2026-01-31 04:54:25.318 239942 DEBUG oslo_concurrency.lockutils [req-83a11144-0fa8-4b5d-aa02-78b7d86bb22f req-5e016f60-adea-48da-bcc9-f3ad5b09cbc0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:25 np0005603435 nova_compute[239938]: 2026-01-31 04:54:25.318 239942 DEBUG nova.compute.manager [req-83a11144-0fa8-4b5d-aa02-78b7d86bb22f req-5e016f60-adea-48da-bcc9-f3ad5b09cbc0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] No waiting events found dispatching network-vif-plugged-9bfb8d4f-c12b-4a91-950a-4519f14d6508 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:54:25 np0005603435 nova_compute[239938]: 2026-01-31 04:54:25.319 239942 WARNING nova.compute.manager [req-83a11144-0fa8-4b5d-aa02-78b7d86bb22f req-5e016f60-adea-48da-bcc9-f3ad5b09cbc0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Received unexpected event network-vif-plugged-9bfb8d4f-c12b-4a91-950a-4519f14d6508 for instance with vm_state active and task_state None.#033[00m
Jan 30 23:54:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 317 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 62 KiB/s wr, 135 op/s
Jan 30 23:54:25 np0005603435 nova_compute[239938]: 2026-01-31 04:54:25.982 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Updating instance_info_cache with network_info: [{"id": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "address": "fa:16:3e:c0:7f:92", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bfb8d4f-c1", "ovs_interfaceid": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:54:25 np0005603435 nova_compute[239938]: 2026-01-31 04:54:25.997 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Releasing lock "refresh_cache-a7e679f6-843b-49b7-8455-d5ed363e1b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:54:25 np0005603435 nova_compute[239938]: 2026-01-31 04:54:25.998 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 30 23:54:25 np0005603435 nova_compute[239938]: 2026-01-31 04:54:25.998 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:54:25 np0005603435 nova_compute[239938]: 2026-01-31 04:54:25.999 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:54:26 np0005603435 nova_compute[239938]: 2026-01-31 04:54:26.933 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 317 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 60 KiB/s wr, 231 op/s
Jan 30 23:54:27 np0005603435 nova_compute[239938]: 2026-01-31 04:54:27.408 239942 DEBUG nova.compute.manager [req-e3c4e7e3-68b3-4f9e-952e-4684127623b2 req-512ad6c0-ac91-48d4-9ff2-88d3cefda12d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Received event network-changed-9bfb8d4f-c12b-4a91-950a-4519f14d6508 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:27 np0005603435 nova_compute[239938]: 2026-01-31 04:54:27.408 239942 DEBUG nova.compute.manager [req-e3c4e7e3-68b3-4f9e-952e-4684127623b2 req-512ad6c0-ac91-48d4-9ff2-88d3cefda12d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Refreshing instance network info cache due to event network-changed-9bfb8d4f-c12b-4a91-950a-4519f14d6508. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:54:27 np0005603435 nova_compute[239938]: 2026-01-31 04:54:27.408 239942 DEBUG oslo_concurrency.lockutils [req-e3c4e7e3-68b3-4f9e-952e-4684127623b2 req-512ad6c0-ac91-48d4-9ff2-88d3cefda12d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-a7e679f6-843b-49b7-8455-d5ed363e1b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:54:27 np0005603435 nova_compute[239938]: 2026-01-31 04:54:27.408 239942 DEBUG oslo_concurrency.lockutils [req-e3c4e7e3-68b3-4f9e-952e-4684127623b2 req-512ad6c0-ac91-48d4-9ff2-88d3cefda12d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-a7e679f6-843b-49b7-8455-d5ed363e1b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:54:27 np0005603435 nova_compute[239938]: 2026-01-31 04:54:27.408 239942 DEBUG nova.network.neutron [req-e3c4e7e3-68b3-4f9e-952e-4684127623b2 req-512ad6c0-ac91-48d4-9ff2-88d3cefda12d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Refreshing network info cache for port 9bfb8d4f-c12b-4a91-950a-4519f14d6508 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:54:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:54:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2996735845' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:54:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:54:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2996735845' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:54:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:54:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Jan 30 23:54:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Jan 30 23:54:28 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Jan 30 23:54:28 np0005603435 podman[263389]: 2026-01-31 04:54:28.114598426 +0000 UTC m=+0.068804660 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 30 23:54:28 np0005603435 podman[263390]: 2026-01-31 04:54:28.153342247 +0000 UTC m=+0.103152733 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:54:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Jan 30 23:54:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Jan 30 23:54:29 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Jan 30 23:54:29 np0005603435 nova_compute[239938]: 2026-01-31 04:54:29.327 239942 DEBUG nova.network.neutron [req-e3c4e7e3-68b3-4f9e-952e-4684127623b2 req-512ad6c0-ac91-48d4-9ff2-88d3cefda12d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Updated VIF entry in instance network info cache for port 9bfb8d4f-c12b-4a91-950a-4519f14d6508. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:54:29 np0005603435 nova_compute[239938]: 2026-01-31 04:54:29.328 239942 DEBUG nova.network.neutron [req-e3c4e7e3-68b3-4f9e-952e-4684127623b2 req-512ad6c0-ac91-48d4-9ff2-88d3cefda12d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Updating instance_info_cache with network_info: [{"id": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "address": "fa:16:3e:c0:7f:92", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bfb8d4f-c1", "ovs_interfaceid": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:54:29 np0005603435 nova_compute[239938]: 2026-01-31 04:54:29.341 239942 DEBUG oslo_concurrency.lockutils [req-e3c4e7e3-68b3-4f9e-952e-4684127623b2 req-512ad6c0-ac91-48d4-9ff2-88d3cefda12d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-a7e679f6-843b-49b7-8455-d5ed363e1b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:54:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 317 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 59 KiB/s wr, 220 op/s
Jan 30 23:54:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:54:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/569561452' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:54:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:54:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/569561452' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:54:29 np0005603435 nova_compute[239938]: 2026-01-31 04:54:29.978 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:54:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/857752688' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:54:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:54:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/857752688' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:54:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 317 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 52 KiB/s wr, 196 op/s
Jan 30 23:54:31 np0005603435 nova_compute[239938]: 2026-01-31 04:54:31.981 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:54:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:54:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1495897484' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:54:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:54:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1495897484' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:54:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 317 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.0 KiB/s wr, 179 op/s
Jan 30 23:54:34 np0005603435 nova_compute[239938]: 2026-01-31 04:54:34.854 239942 DEBUG oslo_concurrency.lockutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "7556d66b-f5c2-4050-9684-0e513ae8c697" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:34 np0005603435 nova_compute[239938]: 2026-01-31 04:54:34.854 239942 DEBUG oslo_concurrency.lockutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "7556d66b-f5c2-4050-9684-0e513ae8c697" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:34 np0005603435 nova_compute[239938]: 2026-01-31 04:54:34.894 239942 DEBUG nova.compute.manager [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:54:34 np0005603435 nova_compute[239938]: 2026-01-31 04:54:34.978 239942 DEBUG oslo_concurrency.lockutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:34 np0005603435 nova_compute[239938]: 2026-01-31 04:54:34.979 239942 DEBUG oslo_concurrency.lockutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:34 np0005603435 nova_compute[239938]: 2026-01-31 04:54:34.979 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:34 np0005603435 nova_compute[239938]: 2026-01-31 04:54:34.990 239942 DEBUG nova.virt.hardware [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:54:34 np0005603435 nova_compute[239938]: 2026-01-31 04:54:34.990 239942 INFO nova.compute.claims [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:54:35 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:35Z|00179|binding|INFO|Releasing lport 07e657c3-16d2-4095-9f39-32a275cb472e from this chassis (sb_readonly=0)
Jan 30 23:54:35 np0005603435 nova_compute[239938]: 2026-01-31 04:54:35.127 239942 DEBUG oslo_concurrency.processutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:35 np0005603435 nova_compute[239938]: 2026-01-31 04:54:35.150 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 327 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 244 KiB/s rd, 1.3 MiB/s wr, 106 op/s
Jan 30 23:54:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:54:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/337090495' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:54:35 np0005603435 nova_compute[239938]: 2026-01-31 04:54:35.678 239942 DEBUG oslo_concurrency.processutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:35 np0005603435 nova_compute[239938]: 2026-01-31 04:54:35.685 239942 DEBUG nova.compute.provider_tree [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:54:35 np0005603435 nova_compute[239938]: 2026-01-31 04:54:35.782 239942 DEBUG nova.scheduler.client.report [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:54:35 np0005603435 nova_compute[239938]: 2026-01-31 04:54:35.925 239942 DEBUG oslo_concurrency.lockutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:35 np0005603435 nova_compute[239938]: 2026-01-31 04:54:35.926 239942 DEBUG nova.compute.manager [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:54:35 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:35Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:7f:92 10.100.0.5
Jan 30 23:54:35 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:35Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:7f:92 10.100.0.5
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.029 239942 DEBUG nova.compute.manager [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.030 239942 DEBUG nova.network.neutron [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.123 239942 INFO nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.261 239942 DEBUG nova.compute.manager [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.302 239942 DEBUG nova.policy [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '27f1a6fb472c4c5fa2286d0fa48dca34', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9b39f0e168b54a4b8f976894d21361e6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.535 239942 INFO nova.virt.block_device [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Booting with volume 4f228222-15a8-4d83-9c16-585b710e0685 at /dev/vda#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.701 239942 DEBUG os_brick.utils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.703 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.716 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.716 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5cd669-f0a8-4f2a-8c7a-746a6d0cf11d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.718 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.725 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.725 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed979de-c3bb-445f-8368-0d0022e0314f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.727 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.737 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.737 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[c9081a98-a413-4439-a5d1-aea71a38662e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.738 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[8520577c-7ae5-4231-8dd3-951f9202be38]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.739 239942 DEBUG oslo_concurrency.processutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.762 239942 DEBUG oslo_concurrency.processutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.766 239942 DEBUG os_brick.initiator.connectors.lightos [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.766 239942 DEBUG os_brick.initiator.connectors.lightos [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.767 239942 DEBUG os_brick.initiator.connectors.lightos [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.768 239942 DEBUG os_brick.utils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.768 239942 DEBUG nova.virt.block_device [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Updating existing volume attachment record: 2a2297f7-3f38-47a2-b436-6c62b3eaa575 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.851 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835261.850857, 2437d98a-1c5d-4451-bf32-cb4bb2d82a82 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.852 239942 INFO nova.compute.manager [-] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:54:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:54:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:54:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:54:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:54:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:54:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:54:36 np0005603435 nova_compute[239938]: 2026-01-31 04:54:36.983 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:37 np0005603435 nova_compute[239938]: 2026-01-31 04:54:37.044 239942 DEBUG nova.compute.manager [None req-80e95fe0-692e-4567-b63a-a6c9b62a2388 - - - - - -] [instance: 2437d98a-1c5d-4451-bf32-cb4bb2d82a82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:54:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 350 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 452 KiB/s rd, 2.7 MiB/s wr, 143 op/s
Jan 30 23:54:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:54:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2715156037' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:54:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:54:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Jan 30 23:54:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Jan 30 23:54:38 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Jan 30 23:54:38 np0005603435 nova_compute[239938]: 2026-01-31 04:54:38.074 239942 DEBUG nova.network.neutron [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Successfully created port: 1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:54:38 np0005603435 nova_compute[239938]: 2026-01-31 04:54:38.213 239942 DEBUG nova.compute.manager [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:54:38 np0005603435 nova_compute[239938]: 2026-01-31 04:54:38.215 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:54:38 np0005603435 nova_compute[239938]: 2026-01-31 04:54:38.215 239942 INFO nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Creating image(s)#033[00m
Jan 30 23:54:38 np0005603435 nova_compute[239938]: 2026-01-31 04:54:38.216 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 30 23:54:38 np0005603435 nova_compute[239938]: 2026-01-31 04:54:38.216 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Ensure instance console log exists: /var/lib/nova/instances/7556d66b-f5c2-4050-9684-0e513ae8c697/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:54:38 np0005603435 nova_compute[239938]: 2026-01-31 04:54:38.217 239942 DEBUG oslo_concurrency.lockutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:38 np0005603435 nova_compute[239938]: 2026-01-31 04:54:38.217 239942 DEBUG oslo_concurrency.lockutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:38 np0005603435 nova_compute[239938]: 2026-01-31 04:54:38.218 239942 DEBUG oslo_concurrency.lockutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:38 np0005603435 nova_compute[239938]: 2026-01-31 04:54:38.864 239942 DEBUG nova.network.neutron [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Successfully updated port: 1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:54:38 np0005603435 nova_compute[239938]: 2026-01-31 04:54:38.895 239942 DEBUG oslo_concurrency.lockutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "refresh_cache-7556d66b-f5c2-4050-9684-0e513ae8c697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:54:38 np0005603435 nova_compute[239938]: 2026-01-31 04:54:38.895 239942 DEBUG oslo_concurrency.lockutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquired lock "refresh_cache-7556d66b-f5c2-4050-9684-0e513ae8c697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:54:38 np0005603435 nova_compute[239938]: 2026-01-31 04:54:38.896 239942 DEBUG nova.network.neutron [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:54:38 np0005603435 nova_compute[239938]: 2026-01-31 04:54:38.968 239942 DEBUG nova.compute.manager [req-5282a4ff-67ed-48cc-b9ac-e6780ca1ac47 req-3fbe8ab7-2fe3-49c8-ae5a-4284b21c5f93 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Received event network-changed-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:38 np0005603435 nova_compute[239938]: 2026-01-31 04:54:38.968 239942 DEBUG nova.compute.manager [req-5282a4ff-67ed-48cc-b9ac-e6780ca1ac47 req-3fbe8ab7-2fe3-49c8-ae5a-4284b21c5f93 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Refreshing instance network info cache due to event network-changed-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:54:38 np0005603435 nova_compute[239938]: 2026-01-31 04:54:38.969 239942 DEBUG oslo_concurrency.lockutils [req-5282a4ff-67ed-48cc-b9ac-e6780ca1ac47 req-3fbe8ab7-2fe3-49c8-ae5a-4284b21c5f93 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-7556d66b-f5c2-4050-9684-0e513ae8c697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:54:39 np0005603435 nova_compute[239938]: 2026-01-31 04:54:39.091 239942 DEBUG nova.network.neutron [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:54:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 350 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 424 KiB/s rd, 2.6 MiB/s wr, 134 op/s
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.012 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.139 239942 DEBUG nova.network.neutron [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Updating instance_info_cache with network_info: [{"id": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "address": "fa:16:3e:8e:64:9e", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1df8885b-d7", "ovs_interfaceid": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.169 239942 DEBUG oslo_concurrency.lockutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Releasing lock "refresh_cache-7556d66b-f5c2-4050-9684-0e513ae8c697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.170 239942 DEBUG nova.compute.manager [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Instance network_info: |[{"id": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "address": "fa:16:3e:8e:64:9e", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1df8885b-d7", "ovs_interfaceid": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.172 239942 DEBUG oslo_concurrency.lockutils [req-5282a4ff-67ed-48cc-b9ac-e6780ca1ac47 req-3fbe8ab7-2fe3-49c8-ae5a-4284b21c5f93 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-7556d66b-f5c2-4050-9684-0e513ae8c697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.172 239942 DEBUG nova.network.neutron [req-5282a4ff-67ed-48cc-b9ac-e6780ca1ac47 req-3fbe8ab7-2fe3-49c8-ae5a-4284b21c5f93 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Refreshing network info cache for port 1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.177 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Start _get_guest_xml network_info=[{"id": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "address": "fa:16:3e:8e:64:9e", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1df8885b-d7", "ovs_interfaceid": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'attachment_id': '2a2297f7-3f38-47a2-b436-6c62b3eaa575', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4f228222-15a8-4d83-9c16-585b710e0685', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4f228222-15a8-4d83-9c16-585b710e0685', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '7556d66b-f5c2-4050-9684-0e513ae8c697', 'attached_at': '', 'detached_at': '', 'volume_id': '4f228222-15a8-4d83-9c16-585b710e0685', 'serial': '4f228222-15a8-4d83-9c16-585b710e0685'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.183 239942 WARNING nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.193 239942 DEBUG nova.virt.libvirt.host [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.199 239942 DEBUG nova.virt.libvirt.host [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.212 239942 DEBUG nova.virt.libvirt.host [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.212 239942 DEBUG nova.virt.libvirt.host [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.213 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.213 239942 DEBUG nova.virt.hardware [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.214 239942 DEBUG nova.virt.hardware [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.215 239942 DEBUG nova.virt.hardware [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.215 239942 DEBUG nova.virt.hardware [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.215 239942 DEBUG nova.virt.hardware [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.216 239942 DEBUG nova.virt.hardware [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.216 239942 DEBUG nova.virt.hardware [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.217 239942 DEBUG nova.virt.hardware [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.217 239942 DEBUG nova.virt.hardware [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.217 239942 DEBUG nova.virt.hardware [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.218 239942 DEBUG nova.virt.hardware [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.254 239942 DEBUG nova.storage.rbd_utils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] rbd image 7556d66b-f5c2-4050-9684-0e513ae8c697_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.258 239942 DEBUG oslo_concurrency.processutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:54:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1883546088' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:54:40 np0005603435 nova_compute[239938]: 2026-01-31 04:54:40.830 239942 DEBUG oslo_concurrency.processutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:54:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:54:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:54:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:54:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:54:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:54:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:54:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:54:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:54:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:54:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:54:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.199 239942 DEBUG os_brick.encryptors [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Using volume encryption metadata '{'encryption_key_id': '1085627e-d803-48ef-8afc-864628b07c27', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4f228222-15a8-4d83-9c16-585b710e0685', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4f228222-15a8-4d83-9c16-585b710e0685', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '7556d66b-f5c2-4050-9684-0e513ae8c697', 'attached_at': '', 'detached_at': '', 'volume_id': '4f228222-15a8-4d83-9c16-585b710e0685', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.201 239942 DEBUG barbicanclient.client [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.217 239942 DEBUG barbicanclient.v1.secrets [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/1085627e-d803-48ef-8afc-864628b07c27 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.217 239942 INFO barbicanclient.base [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1085627e-d803-48ef-8afc-864628b07c27#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.237 239942 DEBUG barbicanclient.client [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.238 239942 INFO barbicanclient.base [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1085627e-d803-48ef-8afc-864628b07c27#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.261 239942 DEBUG barbicanclient.client [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.262 239942 INFO barbicanclient.base [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1085627e-d803-48ef-8afc-864628b07c27#033[00m
Jan 30 23:54:41 np0005603435 podman[263642]: 2026-01-31 04:54:41.266647312 +0000 UTC m=+0.051960977 container create 5b01aa6c067ce86c84122d30c7e2fb877e43b2f4e586665110b7fc176c8ede65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:54:41 np0005603435 systemd[1]: Started libpod-conmon-5b01aa6c067ce86c84122d30c7e2fb877e43b2f4e586665110b7fc176c8ede65.scope.
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.308 239942 DEBUG barbicanclient.client [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.309 239942 INFO barbicanclient.base [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1085627e-d803-48ef-8afc-864628b07c27#033[00m
Jan 30 23:54:41 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:54:41 np0005603435 podman[263642]: 2026-01-31 04:54:41.237748572 +0000 UTC m=+0.023062287 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.332 239942 DEBUG barbicanclient.client [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.333 239942 INFO barbicanclient.base [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1085627e-d803-48ef-8afc-864628b07c27#033[00m
Jan 30 23:54:41 np0005603435 podman[263642]: 2026-01-31 04:54:41.343858098 +0000 UTC m=+0.129171823 container init 5b01aa6c067ce86c84122d30c7e2fb877e43b2f4e586665110b7fc176c8ede65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 30 23:54:41 np0005603435 podman[263642]: 2026-01-31 04:54:41.351767502 +0000 UTC m=+0.137081127 container start 5b01aa6c067ce86c84122d30c7e2fb877e43b2f4e586665110b7fc176c8ede65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 30 23:54:41 np0005603435 podman[263642]: 2026-01-31 04:54:41.354665943 +0000 UTC m=+0.139979718 container attach 5b01aa6c067ce86c84122d30c7e2fb877e43b2f4e586665110b7fc176c8ede65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bartik, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:54:41 np0005603435 agitated_bartik[263659]: 167 167
Jan 30 23:54:41 np0005603435 systemd[1]: libpod-5b01aa6c067ce86c84122d30c7e2fb877e43b2f4e586665110b7fc176c8ede65.scope: Deactivated successfully.
Jan 30 23:54:41 np0005603435 podman[263642]: 2026-01-31 04:54:41.357933643 +0000 UTC m=+0.143247268 container died 5b01aa6c067ce86c84122d30c7e2fb877e43b2f4e586665110b7fc176c8ede65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bartik, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.357 239942 DEBUG barbicanclient.client [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.358 239942 INFO barbicanclient.base [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1085627e-d803-48ef-8afc-864628b07c27#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.379 239942 DEBUG barbicanclient.client [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.380 239942 INFO barbicanclient.base [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1085627e-d803-48ef-8afc-864628b07c27#033[00m
Jan 30 23:54:41 np0005603435 systemd[1]: var-lib-containers-storage-overlay-348e03b1cd4ebedefad504c3a8dad97973ffa41a3b6ea9920c6d476f2a88d1cf-merged.mount: Deactivated successfully.
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.409 239942 DEBUG barbicanclient.client [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.410 239942 INFO barbicanclient.base [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1085627e-d803-48ef-8afc-864628b07c27#033[00m
Jan 30 23:54:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 350 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 437 KiB/s rd, 2.6 MiB/s wr, 135 op/s
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.437 239942 DEBUG barbicanclient.client [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.437 239942 INFO barbicanclient.base [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1085627e-d803-48ef-8afc-864628b07c27#033[00m
Jan 30 23:54:41 np0005603435 podman[263642]: 2026-01-31 04:54:41.438362268 +0000 UTC m=+0.223675923 container remove 5b01aa6c067ce86c84122d30c7e2fb877e43b2f4e586665110b7fc176c8ede65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bartik, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:54:41 np0005603435 systemd[1]: libpod-conmon-5b01aa6c067ce86c84122d30c7e2fb877e43b2f4e586665110b7fc176c8ede65.scope: Deactivated successfully.
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.460 239942 DEBUG barbicanclient.client [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.461 239942 INFO barbicanclient.base [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1085627e-d803-48ef-8afc-864628b07c27#033[00m
Jan 30 23:54:41 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:54:41 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:54:41 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.497 239942 DEBUG barbicanclient.client [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.498 239942 INFO barbicanclient.base [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1085627e-d803-48ef-8afc-864628b07c27#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.517 239942 DEBUG barbicanclient.client [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.518 239942 INFO barbicanclient.base [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1085627e-d803-48ef-8afc-864628b07c27#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.553 239942 DEBUG barbicanclient.client [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.554 239942 INFO barbicanclient.base [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1085627e-d803-48ef-8afc-864628b07c27#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.579 239942 DEBUG barbicanclient.client [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.580 239942 INFO barbicanclient.base [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1085627e-d803-48ef-8afc-864628b07c27#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.607 239942 DEBUG barbicanclient.client [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.607 239942 INFO barbicanclient.base [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/1085627e-d803-48ef-8afc-864628b07c27#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.618 239942 DEBUG nova.network.neutron [req-5282a4ff-67ed-48cc-b9ac-e6780ca1ac47 req-3fbe8ab7-2fe3-49c8-ae5a-4284b21c5f93 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Updated VIF entry in instance network info cache for port 1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.618 239942 DEBUG nova.network.neutron [req-5282a4ff-67ed-48cc-b9ac-e6780ca1ac47 req-3fbe8ab7-2fe3-49c8-ae5a-4284b21c5f93 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Updating instance_info_cache with network_info: [{"id": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "address": "fa:16:3e:8e:64:9e", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1df8885b-d7", "ovs_interfaceid": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.628 239942 DEBUG barbicanclient.client [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.628 239942 DEBUG nova.virt.libvirt.host [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  <usage type="volume">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <volume>4f228222-15a8-4d83-9c16-585b710e0685</volume>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  </usage>
Jan 30 23:54:41 np0005603435 nova_compute[239938]: </secret>
Jan 30 23:54:41 np0005603435 nova_compute[239938]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.633 239942 DEBUG oslo_concurrency.lockutils [req-5282a4ff-67ed-48cc-b9ac-e6780ca1ac47 req-3fbe8ab7-2fe3-49c8-ae5a-4284b21c5f93 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-7556d66b-f5c2-4050-9684-0e513ae8c697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:54:41 np0005603435 podman[263681]: 2026-01-31 04:54:41.667426232 +0000 UTC m=+0.110329990 container create b8e5ae7de209648e2860a63c7f165fa7987f03690177647385ff5e321695c29e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 30 23:54:41 np0005603435 podman[263681]: 2026-01-31 04:54:41.580574569 +0000 UTC m=+0.023478297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.700 239942 DEBUG nova.virt.libvirt.vif [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:54:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1259685471',display_name='tempest-TransferEncryptedVolumeTest-server-1259685471',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1259685471',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB828SO4KCiS/c6FYV17F5UX+BLYIRAc4CyTZA4fXDNG/eieZI8ChuIejzpTuF2CfgKMQEbMYMZVWf9xnEOSXNVsZsXIi11a3wsxGw0mmNb26j9vmggnToYyQthSze7emg==',key_name='tempest-TransferEncryptedVolumeTest-938095670',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b39f0e168b54a4b8f976894d21361e6',ramdisk_id='',reservation_id='r-6hv1a9ii',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-483286292',owner_user_name='tempest-TransferEncryptedVolumeTest-483286292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:54:36Z,user_data=None,user_id='27f1a6fb472c4c5fa2286d0fa48dca34',uuid=7556d66b-f5c2-4050-9684-0e513ae8c697,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "address": "fa:16:3e:8e:64:9e", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1df8885b-d7", "ovs_interfaceid": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.701 239942 DEBUG nova.network.os_vif_util [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converting VIF {"id": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "address": "fa:16:3e:8e:64:9e", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1df8885b-d7", "ovs_interfaceid": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.701 239942 DEBUG nova.network.os_vif_util [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:64:9e,bridge_name='br-int',has_traffic_filtering=True,id=1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1df8885b-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.704 239942 DEBUG nova.objects.instance [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7556d66b-f5c2-4050-9684-0e513ae8c697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:54:41 np0005603435 systemd[1]: Started libpod-conmon-b8e5ae7de209648e2860a63c7f165fa7987f03690177647385ff5e321695c29e.scope.
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.717 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  <uuid>7556d66b-f5c2-4050-9684-0e513ae8c697</uuid>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  <name>instance-00000012</name>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-1259685471</nova:name>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:54:40</nova:creationTime>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:        <nova:user uuid="27f1a6fb472c4c5fa2286d0fa48dca34">tempest-TransferEncryptedVolumeTest-483286292-project-member</nova:user>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:        <nova:project uuid="9b39f0e168b54a4b8f976894d21361e6">tempest-TransferEncryptedVolumeTest-483286292</nova:project>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:        <nova:port uuid="1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <entry name="serial">7556d66b-f5c2-4050-9684-0e513ae8c697</entry>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <entry name="uuid">7556d66b-f5c2-4050-9684-0e513ae8c697</entry>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/7556d66b-f5c2-4050-9684-0e513ae8c697_disk.config">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="volumes/volume-4f228222-15a8-4d83-9c16-585b710e0685">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <serial>4f228222-15a8-4d83-9c16-585b710e0685</serial>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <encryption format="luks">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:        <secret type="passphrase" uuid="aa49b122-f620-4053-bb4a-153c85d4522c"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      </encryption>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:8e:64:9e"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <target dev="tap1df8885b-d7"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/7556d66b-f5c2-4050-9684-0e513ae8c697/console.log" append="off"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:54:41 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:54:41 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:54:41 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:54:41 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.717 239942 DEBUG nova.compute.manager [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Preparing to wait for external event network-vif-plugged-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.717 239942 DEBUG oslo_concurrency.lockutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.718 239942 DEBUG oslo_concurrency.lockutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.718 239942 DEBUG oslo_concurrency.lockutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.719 239942 DEBUG nova.virt.libvirt.vif [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:54:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1259685471',display_name='tempest-TransferEncryptedVolumeTest-server-1259685471',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1259685471',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB828SO4KCiS/c6FYV17F5UX+BLYIRAc4CyTZA4fXDNG/eieZI8ChuIejzpTuF2CfgKMQEbMYMZVWf9xnEOSXNVsZsXIi11a3wsxGw0mmNb26j9vmggnToYyQthSze7emg==',key_name='tempest-TransferEncryptedVolumeTest-938095670',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b39f0e168b54a4b8f976894d21361e6',ramdisk_id='',reservation_id='r-6hv1a9ii',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-483286292',owner_user_name='tempest-TransferEncryptedVolumeTest-483286292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:54:36Z,user_data=None,user_id='27f1a6fb472c4c5fa2286d0fa48dca34',uuid=7556d66b-f5c2-4050-9684-0e513ae8c697,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "address": "fa:16:3e:8e:64:9e", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1df8885b-d7", "ovs_interfaceid": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.719 239942 DEBUG nova.network.os_vif_util [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converting VIF {"id": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "address": "fa:16:3e:8e:64:9e", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1df8885b-d7", "ovs_interfaceid": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.724 239942 DEBUG nova.network.os_vif_util [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:64:9e,bridge_name='br-int',has_traffic_filtering=True,id=1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1df8885b-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.725 239942 DEBUG os_vif [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:64:9e,bridge_name='br-int',has_traffic_filtering=True,id=1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1df8885b-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.726 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.726 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.727 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.731 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:41 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.732 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1df8885b-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.733 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1df8885b-d7, col_values=(('external_ids', {'iface-id': '1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:64:9e', 'vm-uuid': '7556d66b-f5c2-4050-9684-0e513ae8c697'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.735 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:41 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80b33bd057e1f2a6197e9e3bf653e304d3aa80a327b98779106a2e7b955d687/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:54:41 np0005603435 NetworkManager[49097]: <info>  [1769835281.7380] manager: (tap1df8885b-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Jan 30 23:54:41 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80b33bd057e1f2a6197e9e3bf653e304d3aa80a327b98779106a2e7b955d687/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:54:41 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80b33bd057e1f2a6197e9e3bf653e304d3aa80a327b98779106a2e7b955d687/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:54:41 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80b33bd057e1f2a6197e9e3bf653e304d3aa80a327b98779106a2e7b955d687/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:54:41 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80b33bd057e1f2a6197e9e3bf653e304d3aa80a327b98779106a2e7b955d687/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.736 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.745 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.748 239942 INFO os_vif [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:64:9e,bridge_name='br-int',has_traffic_filtering=True,id=1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1df8885b-d7')#033[00m
Jan 30 23:54:41 np0005603435 podman[263681]: 2026-01-31 04:54:41.779461882 +0000 UTC m=+0.222365630 container init b8e5ae7de209648e2860a63c7f165fa7987f03690177647385ff5e321695c29e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_khorana, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:54:41 np0005603435 podman[263681]: 2026-01-31 04:54:41.790950774 +0000 UTC m=+0.233854532 container start b8e5ae7de209648e2860a63c7f165fa7987f03690177647385ff5e321695c29e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_khorana, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:54:41 np0005603435 podman[263681]: 2026-01-31 04:54:41.79689763 +0000 UTC m=+0.239801378 container attach b8e5ae7de209648e2860a63c7f165fa7987f03690177647385ff5e321695c29e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_khorana, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.821 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.822 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.823 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] No VIF found with MAC fa:16:3e:8e:64:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.825 239942 INFO nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Using config drive#033[00m
Jan 30 23:54:41 np0005603435 nova_compute[239938]: 2026-01-31 04:54:41.853 239942 DEBUG nova.storage.rbd_utils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] rbd image 7556d66b-f5c2-4050-9684-0e513ae8c697_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.153 239942 INFO nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Creating config drive at /var/lib/nova/instances/7556d66b-f5c2-4050-9684-0e513ae8c697/disk.config#033[00m
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.156 239942 DEBUG oslo_concurrency.processutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7556d66b-f5c2-4050-9684-0e513ae8c697/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpc55eej8b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:42 np0005603435 intelligent_khorana[263698]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:54:42 np0005603435 intelligent_khorana[263698]: --> All data devices are unavailable
Jan 30 23:54:42 np0005603435 systemd[1]: libpod-b8e5ae7de209648e2860a63c7f165fa7987f03690177647385ff5e321695c29e.scope: Deactivated successfully.
Jan 30 23:54:42 np0005603435 podman[263681]: 2026-01-31 04:54:42.273123692 +0000 UTC m=+0.716027450 container died b8e5ae7de209648e2860a63c7f165fa7987f03690177647385ff5e321695c29e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.279 239942 DEBUG oslo_concurrency.processutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7556d66b-f5c2-4050-9684-0e513ae8c697/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpc55eej8b" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:42 np0005603435 systemd[1]: var-lib-containers-storage-overlay-a80b33bd057e1f2a6197e9e3bf653e304d3aa80a327b98779106a2e7b955d687-merged.mount: Deactivated successfully.
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.327 239942 DEBUG nova.storage.rbd_utils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] rbd image 7556d66b-f5c2-4050-9684-0e513ae8c697_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.332 239942 DEBUG oslo_concurrency.processutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7556d66b-f5c2-4050-9684-0e513ae8c697/disk.config 7556d66b-f5c2-4050-9684-0e513ae8c697_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:54:42 np0005603435 podman[263681]: 2026-01-31 04:54:42.336696453 +0000 UTC m=+0.779600201 container remove b8e5ae7de209648e2860a63c7f165fa7987f03690177647385ff5e321695c29e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_khorana, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:54:42 np0005603435 systemd[1]: libpod-conmon-b8e5ae7de209648e2860a63c7f165fa7987f03690177647385ff5e321695c29e.scope: Deactivated successfully.
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.472 239942 DEBUG oslo_concurrency.processutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7556d66b-f5c2-4050-9684-0e513ae8c697/disk.config 7556d66b-f5c2-4050-9684-0e513ae8c697_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.473 239942 INFO nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Deleting local config drive /var/lib/nova/instances/7556d66b-f5c2-4050-9684-0e513ae8c697/disk.config because it was imported into RBD.#033[00m
Jan 30 23:54:42 np0005603435 kernel: tap1df8885b-d7: entered promiscuous mode
Jan 30 23:54:42 np0005603435 NetworkManager[49097]: <info>  [1769835282.5211] manager: (tap1df8885b-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Jan 30 23:54:42 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:42Z|00180|binding|INFO|Claiming lport 1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 for this chassis.
Jan 30 23:54:42 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:42Z|00181|binding|INFO|1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3: Claiming fa:16:3e:8e:64:9e 10.100.0.10
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.522 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:42 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:42Z|00182|binding|INFO|Setting lport 1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 ovn-installed in OVS
Jan 30 23:54:42 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:42Z|00183|binding|INFO|Setting lport 1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 up in Southbound
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.531 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.532 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:64:9e 10.100.0.10'], port_security=['fa:16:3e:8e:64:9e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7556d66b-f5c2-4050-9684-0e513ae8c697', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a10d9666-b672-4619-83b7-22dc781b5b5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b39f0e168b54a4b8f976894d21361e6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ff571068-2221-49e0-84fe-8c4b85bf5ac6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21f14c68-4084-427c-b05e-592b1db029c6, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.532 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.535 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 in datapath a10d9666-b672-4619-83b7-22dc781b5b5b bound to our chassis#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.539 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a10d9666-b672-4619-83b7-22dc781b5b5b#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.550 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[8eca410c-64f4-418a-8482-439fb963384b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.551 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa10d9666-b1 in ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:54:42 np0005603435 systemd-machined[208030]: New machine qemu-18-instance-00000012.
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.553 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa10d9666-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.553 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[de0cbf7c-35f9-4ef0-94e9-27c95ff07156]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.554 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[92710126-a63b-4d95-bc1c-99cc3ab94a78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:42 np0005603435 systemd-udevd[263855]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.563 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[d52df23e-1e23-4e83-9662-16dcd435e947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:42 np0005603435 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Jan 30 23:54:42 np0005603435 NetworkManager[49097]: <info>  [1769835282.5727] device (tap1df8885b-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:54:42 np0005603435 NetworkManager[49097]: <info>  [1769835282.5740] device (tap1df8885b-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.574 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6e113e9c-c4c1-4548-84fe-db4a03e2c225]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.591 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[a5adccdd-24b3-4172-bc25-8f2ce2543ee6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.595 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[5df33b68-0122-4672-b7ae-be529f6b930d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:42 np0005603435 NetworkManager[49097]: <info>  [1769835282.5995] manager: (tapa10d9666-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/98)
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.621 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c935d5-3b46-498b-a87d-a01ffb3e0536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.626 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[255bc61a-6bb8-4c55-903d-a61b79ad7788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:42 np0005603435 NetworkManager[49097]: <info>  [1769835282.6474] device (tapa10d9666-b0): carrier: link connected
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.650 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac28c78-fe5d-4c57-8266-ca657b1b890f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.662 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[90401c27-dc7f-4975-8550-35eaa8f7509b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa10d9666-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:c0:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433704, 'reachable_time': 24625, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263887, 'error': None, 'target': 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.676 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[63e0a13a-bd77-4ea4-ad73-f44119a3c407]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:c0da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 433704, 'tstamp': 433704}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263888, 'error': None, 'target': 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.686 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9e77962c-db89-4a23-a861-ff2363293753]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa10d9666-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:c0:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433704, 'reachable_time': 24625, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263890, 'error': None, 'target': 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.707 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[354f9a7a-db2e-4479-a74f-aaa3f618f577]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.729 239942 DEBUG nova.compute.manager [req-979c4657-7819-4682-82a9-e93ce2a5d464 req-268b263d-5f5a-41b7-913d-8a7448dbb4e7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Received event network-vif-plugged-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.730 239942 DEBUG oslo_concurrency.lockutils [req-979c4657-7819-4682-82a9-e93ce2a5d464 req-268b263d-5f5a-41b7-913d-8a7448dbb4e7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.730 239942 DEBUG oslo_concurrency.lockutils [req-979c4657-7819-4682-82a9-e93ce2a5d464 req-268b263d-5f5a-41b7-913d-8a7448dbb4e7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.731 239942 DEBUG oslo_concurrency.lockutils [req-979c4657-7819-4682-82a9-e93ce2a5d464 req-268b263d-5f5a-41b7-913d-8a7448dbb4e7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.731 239942 DEBUG nova.compute.manager [req-979c4657-7819-4682-82a9-e93ce2a5d464 req-268b263d-5f5a-41b7-913d-8a7448dbb4e7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Processing event network-vif-plugged-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.748 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[1d16d321-7964-4ff8-b370-7713087d0523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.749 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa10d9666-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.749 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.750 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa10d9666-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.751 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:42 np0005603435 kernel: tapa10d9666-b0: entered promiscuous mode
Jan 30 23:54:42 np0005603435 NetworkManager[49097]: <info>  [1769835282.7520] manager: (tapa10d9666-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.762 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa10d9666-b0, col_values=(('external_ids', {'iface-id': 'b5040674-bbd1-4dc9-b2e1-14712cb60315'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:54:42 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:42Z|00184|binding|INFO|Releasing lport b5040674-bbd1-4dc9-b2e1-14712cb60315 from this chassis (sb_readonly=0)
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.764 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.768 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a10d9666-b672-4619-83b7-22dc781b5b5b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a10d9666-b672-4619-83b7-22dc781b5b5b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.769 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[d89cab40-63ad-4136-af29-b1d65bbc045c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.770 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-a10d9666-b672-4619-83b7-22dc781b5b5b
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/a10d9666-b672-4619-83b7-22dc781b5b5b.pid.haproxy
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID a10d9666-b672-4619-83b7-22dc781b5b5b
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 30 23:54:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:42.771 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'env', 'PROCESS_TAG=haproxy-a10d9666-b672-4619-83b7-22dc781b5b5b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a10d9666-b672-4619-83b7-22dc781b5b5b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 30 23:54:42 np0005603435 nova_compute[239938]: 2026-01-31 04:54:42.773 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:54:42 np0005603435 podman[263905]: 2026-01-31 04:54:42.777193578 +0000 UTC m=+0.045298753 container create 711dc61e9228b843d8381778bb312daa76d39b22d54efa790e1c052cf689f33a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_wilbur, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:54:42 np0005603435 systemd[1]: Started libpod-conmon-711dc61e9228b843d8381778bb312daa76d39b22d54efa790e1c052cf689f33a.scope.
Jan 30 23:54:42 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:54:42 np0005603435 podman[263905]: 2026-01-31 04:54:42.840280307 +0000 UTC m=+0.108385472 container init 711dc61e9228b843d8381778bb312daa76d39b22d54efa790e1c052cf689f33a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:54:42 np0005603435 podman[263905]: 2026-01-31 04:54:42.845376332 +0000 UTC m=+0.113481487 container start 711dc61e9228b843d8381778bb312daa76d39b22d54efa790e1c052cf689f33a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:54:42 np0005603435 agitated_wilbur[263925]: 167 167
Jan 30 23:54:42 np0005603435 systemd[1]: libpod-711dc61e9228b843d8381778bb312daa76d39b22d54efa790e1c052cf689f33a.scope: Deactivated successfully.
Jan 30 23:54:42 np0005603435 conmon[263925]: conmon 711dc61e9228b843d838 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-711dc61e9228b843d8381778bb312daa76d39b22d54efa790e1c052cf689f33a.scope/container/memory.events
Jan 30 23:54:42 np0005603435 podman[263905]: 2026-01-31 04:54:42.756308905 +0000 UTC m=+0.024414090 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:54:42 np0005603435 podman[263905]: 2026-01-31 04:54:42.850830106 +0000 UTC m=+0.118935261 container attach 711dc61e9228b843d8381778bb312daa76d39b22d54efa790e1c052cf689f33a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:54:42 np0005603435 podman[263905]: 2026-01-31 04:54:42.851335708 +0000 UTC m=+0.119440863 container died 711dc61e9228b843d8381778bb312daa76d39b22d54efa790e1c052cf689f33a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_wilbur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:54:42 np0005603435 systemd[1]: var-lib-containers-storage-overlay-a856b3114c4ec150d50883c093c79eb35fcd914e4b4169ec83f7e6b2ec457636-merged.mount: Deactivated successfully.
Jan 30 23:54:42 np0005603435 podman[263905]: 2026-01-31 04:54:42.895308638 +0000 UTC m=+0.163413813 container remove 711dc61e9228b843d8381778bb312daa76d39b22d54efa790e1c052cf689f33a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:54:42 np0005603435 systemd[1]: libpod-conmon-711dc61e9228b843d8381778bb312daa76d39b22d54efa790e1c052cf689f33a.scope: Deactivated successfully.
Jan 30 23:54:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:54:43 np0005603435 podman[264000]: 2026-01-31 04:54:43.043524627 +0000 UTC m=+0.033397631 container create 0b04fb47e0957cb717f90beb643c0f0a4e7b9405f2243cc09224e57a1a322fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 30 23:54:43 np0005603435 systemd[1]: Started libpod-conmon-0b04fb47e0957cb717f90beb643c0f0a4e7b9405f2243cc09224e57a1a322fdb.scope.
Jan 30 23:54:43 np0005603435 podman[264019]: 2026-01-31 04:54:43.095552454 +0000 UTC m=+0.058103867 container create b91f7dcc82e9880cee70b8817ef6f49ef2fdb0ac6d678d3a841168a0b0d508fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 30 23:54:43 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:54:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccb6fbb84eece4ef9d07206c2275a9916c57ac1a4c454ceee703bb57e504309a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:54:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccb6fbb84eece4ef9d07206c2275a9916c57ac1a4c454ceee703bb57e504309a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:54:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccb6fbb84eece4ef9d07206c2275a9916c57ac1a4c454ceee703bb57e504309a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:54:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccb6fbb84eece4ef9d07206c2275a9916c57ac1a4c454ceee703bb57e504309a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:54:43 np0005603435 podman[264000]: 2026-01-31 04:54:43.124095405 +0000 UTC m=+0.113968429 container init 0b04fb47e0957cb717f90beb643c0f0a4e7b9405f2243cc09224e57a1a322fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_khayyam, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 30 23:54:43 np0005603435 podman[264000]: 2026-01-31 04:54:43.029684807 +0000 UTC m=+0.019557841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:54:43 np0005603435 podman[264000]: 2026-01-31 04:54:43.137130435 +0000 UTC m=+0.127003439 container start 0b04fb47e0957cb717f90beb643c0f0a4e7b9405f2243cc09224e57a1a322fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 30 23:54:43 np0005603435 podman[264000]: 2026-01-31 04:54:43.140407095 +0000 UTC m=+0.130280099 container attach 0b04fb47e0957cb717f90beb643c0f0a4e7b9405f2243cc09224e57a1a322fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 30 23:54:43 np0005603435 systemd[1]: Started libpod-conmon-b91f7dcc82e9880cee70b8817ef6f49ef2fdb0ac6d678d3a841168a0b0d508fc.scope.
Jan 30 23:54:43 np0005603435 podman[264019]: 2026-01-31 04:54:43.059388126 +0000 UTC m=+0.021939519 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:54:43 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:54:43 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13bf259a8369c763ae16107b7e1af46eb14f7900586702e98e9ec5bcf7e14484/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:54:43 np0005603435 podman[264019]: 2026-01-31 04:54:43.199807113 +0000 UTC m=+0.162358516 container init b91f7dcc82e9880cee70b8817ef6f49ef2fdb0ac6d678d3a841168a0b0d508fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 30 23:54:43 np0005603435 podman[264019]: 2026-01-31 04:54:43.206399375 +0000 UTC m=+0.168950748 container start b91f7dcc82e9880cee70b8817ef6f49ef2fdb0ac6d678d3a841168a0b0d508fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:54:43 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[264042]: [NOTICE]   (264046) : New worker (264048) forked
Jan 30 23:54:43 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[264042]: [NOTICE]   (264046) : Loading success.
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]: {
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:    "0": [
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:        {
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "devices": [
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "/dev/loop3"
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            ],
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "lv_name": "ceph_lv0",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "lv_size": "21470642176",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "name": "ceph_lv0",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "tags": {
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.cluster_name": "ceph",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.crush_device_class": "",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.encrypted": "0",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.objectstore": "bluestore",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.osd_id": "0",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.type": "block",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.vdo": "0",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.with_tpm": "0"
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            },
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "type": "block",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "vg_name": "ceph_vg0"
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:        }
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:    ],
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:    "1": [
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:        {
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "devices": [
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "/dev/loop4"
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            ],
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "lv_name": "ceph_lv1",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "lv_size": "21470642176",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "name": "ceph_lv1",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "tags": {
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.cluster_name": "ceph",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.crush_device_class": "",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.encrypted": "0",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.objectstore": "bluestore",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.osd_id": "1",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.type": "block",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.vdo": "0",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.with_tpm": "0"
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            },
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "type": "block",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "vg_name": "ceph_vg1"
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:        }
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:    ],
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:    "2": [
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:        {
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "devices": [
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "/dev/loop5"
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            ],
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "lv_name": "ceph_lv2",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "lv_size": "21470642176",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "name": "ceph_lv2",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "tags": {
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.cluster_name": "ceph",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.crush_device_class": "",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.encrypted": "0",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.objectstore": "bluestore",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.osd_id": "2",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.type": "block",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.vdo": "0",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:                "ceph.with_tpm": "0"
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            },
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "type": "block",
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:            "vg_name": "ceph_vg2"
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:        }
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]:    ]
Jan 30 23:54:43 np0005603435 hungry_khayyam[264034]: }
Jan 30 23:54:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 350 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.6 MiB/s wr, 78 op/s
Jan 30 23:54:43 np0005603435 systemd[1]: libpod-0b04fb47e0957cb717f90beb643c0f0a4e7b9405f2243cc09224e57a1a322fdb.scope: Deactivated successfully.
Jan 30 23:54:43 np0005603435 podman[264000]: 2026-01-31 04:54:43.431615044 +0000 UTC m=+0.421488078 container died 0b04fb47e0957cb717f90beb643c0f0a4e7b9405f2243cc09224e57a1a322fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:54:43 np0005603435 systemd[1]: var-lib-containers-storage-overlay-ccb6fbb84eece4ef9d07206c2275a9916c57ac1a4c454ceee703bb57e504309a-merged.mount: Deactivated successfully.
Jan 30 23:54:43 np0005603435 podman[264000]: 2026-01-31 04:54:43.480920204 +0000 UTC m=+0.470793238 container remove 0b04fb47e0957cb717f90beb643c0f0a4e7b9405f2243cc09224e57a1a322fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_khayyam, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 30 23:54:43 np0005603435 systemd[1]: libpod-conmon-0b04fb47e0957cb717f90beb643c0f0a4e7b9405f2243cc09224e57a1a322fdb.scope: Deactivated successfully.
Jan 30 23:54:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:54:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2862749197' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:54:43 np0005603435 podman[264134]: 2026-01-31 04:54:43.923334806 +0000 UTC m=+0.047518157 container create 6ad781fd9ab535733b5f56a643a9d25e452fa1c64722c6ea63abc5ba71ff3023 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_swanson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:54:43 np0005603435 systemd[1]: Started libpod-conmon-6ad781fd9ab535733b5f56a643a9d25e452fa1c64722c6ea63abc5ba71ff3023.scope.
Jan 30 23:54:43 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:54:43 np0005603435 podman[264134]: 2026-01-31 04:54:43.899932872 +0000 UTC m=+0.024116313 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:54:44 np0005603435 podman[264134]: 2026-01-31 04:54:44.008358004 +0000 UTC m=+0.132541375 container init 6ad781fd9ab535733b5f56a643a9d25e452fa1c64722c6ea63abc5ba71ff3023 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:54:44 np0005603435 podman[264134]: 2026-01-31 04:54:44.017279773 +0000 UTC m=+0.141463124 container start 6ad781fd9ab535733b5f56a643a9d25e452fa1c64722c6ea63abc5ba71ff3023 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 30 23:54:44 np0005603435 podman[264134]: 2026-01-31 04:54:44.022094681 +0000 UTC m=+0.146278082 container attach 6ad781fd9ab535733b5f56a643a9d25e452fa1c64722c6ea63abc5ba71ff3023 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_swanson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 30 23:54:44 np0005603435 dreamy_swanson[264150]: 167 167
Jan 30 23:54:44 np0005603435 systemd[1]: libpod-6ad781fd9ab535733b5f56a643a9d25e452fa1c64722c6ea63abc5ba71ff3023.scope: Deactivated successfully.
Jan 30 23:54:44 np0005603435 podman[264134]: 2026-01-31 04:54:44.025192567 +0000 UTC m=+0.149375948 container died 6ad781fd9ab535733b5f56a643a9d25e452fa1c64722c6ea63abc5ba71ff3023 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_swanson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:54:44 np0005603435 systemd[1]: var-lib-containers-storage-overlay-131755ae59d92885d54b9a6ac8ed97805f8c52f0b2ac2ba2b14fdae26d90c4f1-merged.mount: Deactivated successfully.
Jan 30 23:54:44 np0005603435 podman[264134]: 2026-01-31 04:54:44.062950854 +0000 UTC m=+0.187134215 container remove 6ad781fd9ab535733b5f56a643a9d25e452fa1c64722c6ea63abc5ba71ff3023 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 30 23:54:44 np0005603435 systemd[1]: libpod-conmon-6ad781fd9ab535733b5f56a643a9d25e452fa1c64722c6ea63abc5ba71ff3023.scope: Deactivated successfully.
Jan 30 23:54:44 np0005603435 podman[264174]: 2026-01-31 04:54:44.244699996 +0000 UTC m=+0.056532479 container create 0c5ebfcdc4c4c21e9471edcddbacbb33b097c08b7b2d3ab95796c028c9d3b6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 30 23:54:44 np0005603435 systemd[1]: Started libpod-conmon-0c5ebfcdc4c4c21e9471edcddbacbb33b097c08b7b2d3ab95796c028c9d3b6c1.scope.
Jan 30 23:54:44 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:54:44 np0005603435 podman[264174]: 2026-01-31 04:54:44.223045005 +0000 UTC m=+0.034877568 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:54:44 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf094cf00501aaddd5424dffd67aa1f51b24b1c6cae9faba028346a98ecd4023/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:54:44 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf094cf00501aaddd5424dffd67aa1f51b24b1c6cae9faba028346a98ecd4023/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:54:44 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf094cf00501aaddd5424dffd67aa1f51b24b1c6cae9faba028346a98ecd4023/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:54:44 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf094cf00501aaddd5424dffd67aa1f51b24b1c6cae9faba028346a98ecd4023/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:54:44 np0005603435 podman[264174]: 2026-01-31 04:54:44.33892987 +0000 UTC m=+0.150762373 container init 0c5ebfcdc4c4c21e9471edcddbacbb33b097c08b7b2d3ab95796c028c9d3b6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:54:44 np0005603435 podman[264174]: 2026-01-31 04:54:44.34708785 +0000 UTC m=+0.158920323 container start 0c5ebfcdc4c4c21e9471edcddbacbb33b097c08b7b2d3ab95796c028c9d3b6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:54:44 np0005603435 podman[264174]: 2026-01-31 04:54:44.350972225 +0000 UTC m=+0.162804728 container attach 0c5ebfcdc4c4c21e9471edcddbacbb33b097c08b7b2d3ab95796c028c9d3b6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 30 23:54:44 np0005603435 nova_compute[239938]: 2026-01-31 04:54:44.819 239942 DEBUG nova.compute.manager [req-990f2db6-23bd-4f8f-bb6c-e4de7966147c req-ab4f177b-5fcd-4c05-b858-c3fe31278cbb c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Received event network-vif-plugged-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:44 np0005603435 nova_compute[239938]: 2026-01-31 04:54:44.821 239942 DEBUG oslo_concurrency.lockutils [req-990f2db6-23bd-4f8f-bb6c-e4de7966147c req-ab4f177b-5fcd-4c05-b858-c3fe31278cbb c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:44 np0005603435 nova_compute[239938]: 2026-01-31 04:54:44.822 239942 DEBUG oslo_concurrency.lockutils [req-990f2db6-23bd-4f8f-bb6c-e4de7966147c req-ab4f177b-5fcd-4c05-b858-c3fe31278cbb c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:44 np0005603435 nova_compute[239938]: 2026-01-31 04:54:44.822 239942 DEBUG oslo_concurrency.lockutils [req-990f2db6-23bd-4f8f-bb6c-e4de7966147c req-ab4f177b-5fcd-4c05-b858-c3fe31278cbb c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:44 np0005603435 nova_compute[239938]: 2026-01-31 04:54:44.822 239942 DEBUG nova.compute.manager [req-990f2db6-23bd-4f8f-bb6c-e4de7966147c req-ab4f177b-5fcd-4c05-b858-c3fe31278cbb c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] No waiting events found dispatching network-vif-plugged-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:54:44 np0005603435 nova_compute[239938]: 2026-01-31 04:54:44.822 239942 WARNING nova.compute.manager [req-990f2db6-23bd-4f8f-bb6c-e4de7966147c req-ab4f177b-5fcd-4c05-b858-c3fe31278cbb c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Received unexpected event network-vif-plugged-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 for instance with vm_state building and task_state spawning.#033[00m
Jan 30 23:54:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Jan 30 23:54:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Jan 30 23:54:44 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Jan 30 23:54:44 np0005603435 lvm[264267]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:54:44 np0005603435 lvm[264267]: VG ceph_vg0 finished
Jan 30 23:54:44 np0005603435 lvm[264269]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:54:44 np0005603435 lvm[264269]: VG ceph_vg1 finished
Jan 30 23:54:44 np0005603435 lvm[264270]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:54:44 np0005603435 lvm[264270]: VG ceph_vg2 finished
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.014 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:45 np0005603435 exciting_carson[264192]: {}
Jan 30 23:54:45 np0005603435 systemd[1]: libpod-0c5ebfcdc4c4c21e9471edcddbacbb33b097c08b7b2d3ab95796c028c9d3b6c1.scope: Deactivated successfully.
Jan 30 23:54:45 np0005603435 systemd[1]: libpod-0c5ebfcdc4c4c21e9471edcddbacbb33b097c08b7b2d3ab95796c028c9d3b6c1.scope: Consumed 1.050s CPU time.
Jan 30 23:54:45 np0005603435 podman[264174]: 2026-01-31 04:54:45.118674943 +0000 UTC m=+0.930507426 container died 0c5ebfcdc4c4c21e9471edcddbacbb33b097c08b7b2d3ab95796c028c9d3b6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:54:45 np0005603435 systemd[1]: var-lib-containers-storage-overlay-cf094cf00501aaddd5424dffd67aa1f51b24b1c6cae9faba028346a98ecd4023-merged.mount: Deactivated successfully.
Jan 30 23:54:45 np0005603435 podman[264174]: 2026-01-31 04:54:45.161547866 +0000 UTC m=+0.973380339 container remove 0c5ebfcdc4c4c21e9471edcddbacbb33b097c08b7b2d3ab95796c028c9d3b6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_carson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 30 23:54:45 np0005603435 systemd[1]: libpod-conmon-0c5ebfcdc4c4c21e9471edcddbacbb33b097c08b7b2d3ab95796c028c9d3b6c1.scope: Deactivated successfully.
Jan 30 23:54:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:54:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:54:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:54:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.369 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835285.3684518, 7556d66b-f5c2-4050-9684-0e513ae8c697 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.370 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] VM Started (Lifecycle Event)#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.373 239942 DEBUG nova.compute.manager [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.378 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.383 239942 INFO nova.virt.libvirt.driver [-] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Instance spawned successfully.#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.383 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.396 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.401 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:54:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 350 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 18 KiB/s wr, 8 op/s
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.413 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.413 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.414 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.415 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.415 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.416 239942 DEBUG nova.virt.libvirt.driver [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.424 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.425 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835285.3696337, 7556d66b-f5c2-4050-9684-0e513ae8c697 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.425 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.463 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.468 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835285.3779817, 7556d66b-f5c2-4050-9684-0e513ae8c697 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.469 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.495 239942 INFO nova.compute.manager [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Took 7.28 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.496 239942 DEBUG nova.compute.manager [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.512 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.516 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.553 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.597 239942 INFO nova.compute.manager [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Took 10.65 seconds to build instance.#033[00m
Jan 30 23:54:45 np0005603435 nova_compute[239938]: 2026-01-31 04:54:45.614 239942 DEBUG oslo_concurrency.lockutils [None req-2ec6d8e2-9e4f-4663-a06a-79f33ce394cb 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "7556d66b-f5c2-4050-9684-0e513ae8c697" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Jan 30 23:54:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Jan 30 23:54:45 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Jan 30 23:54:45 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:54:45 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:54:46 np0005603435 nova_compute[239938]: 2026-01-31 04:54:46.741 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Jan 30 23:54:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Jan 30 23:54:47 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Jan 30 23:54:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 350 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 51 KiB/s wr, 121 op/s
Jan 30 23:54:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:54:49 np0005603435 nova_compute[239938]: 2026-01-31 04:54:49.377 239942 DEBUG nova.compute.manager [req-a1ee9efb-5fca-4d01-bae6-42c1a99b4b81 req-0f7e8cd3-f833-4c4f-8a75-38d31429500f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Received event network-changed-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:54:49 np0005603435 nova_compute[239938]: 2026-01-31 04:54:49.377 239942 DEBUG nova.compute.manager [req-a1ee9efb-5fca-4d01-bae6-42c1a99b4b81 req-0f7e8cd3-f833-4c4f-8a75-38d31429500f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Refreshing instance network info cache due to event network-changed-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:54:49 np0005603435 nova_compute[239938]: 2026-01-31 04:54:49.378 239942 DEBUG oslo_concurrency.lockutils [req-a1ee9efb-5fca-4d01-bae6-42c1a99b4b81 req-0f7e8cd3-f833-4c4f-8a75-38d31429500f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-7556d66b-f5c2-4050-9684-0e513ae8c697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:54:49 np0005603435 nova_compute[239938]: 2026-01-31 04:54:49.378 239942 DEBUG oslo_concurrency.lockutils [req-a1ee9efb-5fca-4d01-bae6-42c1a99b4b81 req-0f7e8cd3-f833-4c4f-8a75-38d31429500f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-7556d66b-f5c2-4050-9684-0e513ae8c697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:54:49 np0005603435 nova_compute[239938]: 2026-01-31 04:54:49.378 239942 DEBUG nova.network.neutron [req-a1ee9efb-5fca-4d01-bae6-42c1a99b4b81 req-0f7e8cd3-f833-4c4f-8a75-38d31429500f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Refreshing network info cache for port 1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:54:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 350 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 48 KiB/s wr, 119 op/s
Jan 30 23:54:50 np0005603435 nova_compute[239938]: 2026-01-31 04:54:50.017 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:54:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/595986267' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:54:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Jan 30 23:54:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Jan 30 23:54:50 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Jan 30 23:54:50 np0005603435 nova_compute[239938]: 2026-01-31 04:54:50.493 239942 DEBUG nova.network.neutron [req-a1ee9efb-5fca-4d01-bae6-42c1a99b4b81 req-0f7e8cd3-f833-4c4f-8a75-38d31429500f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Updated VIF entry in instance network info cache for port 1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:54:50 np0005603435 nova_compute[239938]: 2026-01-31 04:54:50.493 239942 DEBUG nova.network.neutron [req-a1ee9efb-5fca-4d01-bae6-42c1a99b4b81 req-0f7e8cd3-f833-4c4f-8a75-38d31429500f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Updating instance_info_cache with network_info: [{"id": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "address": "fa:16:3e:8e:64:9e", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1df8885b-d7", "ovs_interfaceid": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:54:50 np0005603435 nova_compute[239938]: 2026-01-31 04:54:50.512 239942 DEBUG oslo_concurrency.lockutils [req-a1ee9efb-5fca-4d01-bae6-42c1a99b4b81 req-0f7e8cd3-f833-4c4f-8a75-38d31429500f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-7556d66b-f5c2-4050-9684-0e513ae8c697" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:54:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 350 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 33 KiB/s wr, 176 op/s
Jan 30 23:54:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Jan 30 23:54:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Jan 30 23:54:51 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Jan 30 23:54:51 np0005603435 nova_compute[239938]: 2026-01-31 04:54:51.743 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:54:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 350 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 11 KiB/s wr, 184 op/s
Jan 30 23:54:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Jan 30 23:54:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Jan 30 23:54:54 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Jan 30 23:54:55 np0005603435 nova_compute[239938]: 2026-01-31 04:54:55.056 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 350 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 29 KiB/s wr, 165 op/s
Jan 30 23:54:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:55.919 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:54:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:55.920 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:54:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:54:55.921 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:54:56 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:56Z|00030|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.10
Jan 30 23:54:56 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:56Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:8e:64:9e 10.100.0.10
Jan 30 23:54:56 np0005603435 nova_compute[239938]: 2026-01-31 04:54:56.745 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:54:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 350 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 933 KiB/s rd, 23 KiB/s wr, 158 op/s
Jan 30 23:54:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:54:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Jan 30 23:54:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Jan 30 23:54:58 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Jan 30 23:54:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:54:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3890986238' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:54:59 np0005603435 podman[264317]: 2026-01-31 04:54:59.099435436 +0000 UTC m=+0.057955444 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 30 23:54:59 np0005603435 podman[264318]: 2026-01-31 04:54:59.127928055 +0000 UTC m=+0.088165836 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 30 23:54:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 350 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 798 KiB/s rd, 19 KiB/s wr, 111 op/s
Jan 30 23:54:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Jan 30 23:54:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Jan 30 23:54:59 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Jan 30 23:54:59 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:59Z|00032|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.10
Jan 30 23:54:59 np0005603435 ovn_controller[145670]: 2026-01-31T04:54:59Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:8e:64:9e 10.100.0.10
Jan 30 23:55:00 np0005603435 nova_compute[239938]: 2026-01-31 04:55:00.060 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Jan 30 23:55:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Jan 30 23:55:00 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Jan 30 23:55:01 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:01Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8e:64:9e 10.100.0.10
Jan 30 23:55:01 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:01Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:64:9e 10.100.0.10
Jan 30 23:55:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 350 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 827 KiB/s rd, 24 KiB/s wr, 94 op/s
Jan 30 23:55:01 np0005603435 nova_compute[239938]: 2026-01-31 04:55:01.789 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:55:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1605197635' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:55:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Jan 30 23:55:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Jan 30 23:55:02 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Jan 30 23:55:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:55:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 350 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 35 KiB/s wr, 78 op/s
Jan 30 23:55:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Jan 30 23:55:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Jan 30 23:55:03 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Jan 30 23:55:05 np0005603435 nova_compute[239938]: 2026-01-31 04:55:05.063 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 350 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 37 KiB/s wr, 78 op/s
Jan 30 23:55:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:55:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1746750049' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:55:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:55:06
Jan 30 23:55:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:55:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:55:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'backups', 'images', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.control']
Jan 30 23:55:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:55:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Jan 30 23:55:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Jan 30 23:55:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Jan 30 23:55:06 np0005603435 nova_compute[239938]: 2026-01-31 04:55:06.791 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:55:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:55:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:55:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:55:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:55:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:55:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 350 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 21 KiB/s wr, 120 op/s
Jan 30 23:55:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Jan 30 23:55:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Jan 30 23:55:07 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Jan 30 23:55:07 np0005603435 nova_compute[239938]: 2026-01-31 04:55:07.771 239942 DEBUG oslo_concurrency.lockutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "52b7c210-2041-4375-8361-693e4d450c12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:07 np0005603435 nova_compute[239938]: 2026-01-31 04:55:07.771 239942 DEBUG oslo_concurrency.lockutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "52b7c210-2041-4375-8361-693e4d450c12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:07 np0005603435 nova_compute[239938]: 2026-01-31 04:55:07.787 239942 DEBUG nova.compute.manager [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:55:07 np0005603435 nova_compute[239938]: 2026-01-31 04:55:07.873 239942 DEBUG oslo_concurrency.lockutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:07 np0005603435 nova_compute[239938]: 2026-01-31 04:55:07.873 239942 DEBUG oslo_concurrency.lockutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:07 np0005603435 nova_compute[239938]: 2026-01-31 04:55:07.883 239942 DEBUG nova.virt.hardware [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:55:07 np0005603435 nova_compute[239938]: 2026-01-31 04:55:07.884 239942 INFO nova.compute.claims [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:55:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:55:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:55:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:55:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:55:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:55:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:55:08 np0005603435 nova_compute[239938]: 2026-01-31 04:55:08.068 239942 DEBUG oslo_concurrency.processutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:55:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:55:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:55:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:55:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:55:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:55:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:55:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2829180217' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:55:08 np0005603435 nova_compute[239938]: 2026-01-31 04:55:08.618 239942 DEBUG oslo_concurrency.processutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:55:08 np0005603435 nova_compute[239938]: 2026-01-31 04:55:08.625 239942 DEBUG nova.compute.provider_tree [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:55:08 np0005603435 nova_compute[239938]: 2026-01-31 04:55:08.640 239942 DEBUG nova.scheduler.client.report [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:55:08 np0005603435 nova_compute[239938]: 2026-01-31 04:55:08.659 239942 DEBUG oslo_concurrency.lockutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:08 np0005603435 nova_compute[239938]: 2026-01-31 04:55:08.660 239942 DEBUG nova.compute.manager [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:55:08 np0005603435 nova_compute[239938]: 2026-01-31 04:55:08.702 239942 INFO nova.virt.libvirt.driver [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:55:08 np0005603435 nova_compute[239938]: 2026-01-31 04:55:08.706 239942 DEBUG nova.compute.manager [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:55:08 np0005603435 nova_compute[239938]: 2026-01-31 04:55:08.706 239942 DEBUG nova.network.neutron [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:55:08 np0005603435 nova_compute[239938]: 2026-01-31 04:55:08.724 239942 DEBUG nova.compute.manager [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:55:08 np0005603435 nova_compute[239938]: 2026-01-31 04:55:08.762 239942 INFO nova.virt.block_device [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Booting with volume snapshot 4c95d0d3-9f05-4916-9809-221e34446493 at /dev/vda#033[00m
Jan 30 23:55:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:55:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4065883444' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:55:09 np0005603435 nova_compute[239938]: 2026-01-31 04:55:09.254 239942 DEBUG nova.policy [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e10f13b98624406985dec6a5dcc391c7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:55:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 350 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 9.2 KiB/s wr, 63 op/s
Jan 30 23:55:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Jan 30 23:55:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Jan 30 23:55:09 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Jan 30 23:55:10 np0005603435 nova_compute[239938]: 2026-01-31 04:55:10.066 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:10 np0005603435 nova_compute[239938]: 2026-01-31 04:55:10.209 239942 DEBUG nova.network.neutron [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Successfully created port: 8495b99c-f86f-4ebe-8135-5c903d896bc1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:55:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Jan 30 23:55:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Jan 30 23:55:10 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Jan 30 23:55:11 np0005603435 nova_compute[239938]: 2026-01-31 04:55:11.133 239942 DEBUG nova.network.neutron [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Successfully updated port: 8495b99c-f86f-4ebe-8135-5c903d896bc1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:55:11 np0005603435 nova_compute[239938]: 2026-01-31 04:55:11.150 239942 DEBUG oslo_concurrency.lockutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "refresh_cache-52b7c210-2041-4375-8361-693e4d450c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:55:11 np0005603435 nova_compute[239938]: 2026-01-31 04:55:11.150 239942 DEBUG oslo_concurrency.lockutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquired lock "refresh_cache-52b7c210-2041-4375-8361-693e4d450c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:55:11 np0005603435 nova_compute[239938]: 2026-01-31 04:55:11.151 239942 DEBUG nova.network.neutron [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:55:11 np0005603435 nova_compute[239938]: 2026-01-31 04:55:11.221 239942 DEBUG nova.compute.manager [req-8dfda691-720e-44b2-b0b4-8de70c182f16 req-640ff8fe-eed1-4778-bfa9-6d87174852c6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Received event network-changed-8495b99c-f86f-4ebe-8135-5c903d896bc1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:55:11 np0005603435 nova_compute[239938]: 2026-01-31 04:55:11.221 239942 DEBUG nova.compute.manager [req-8dfda691-720e-44b2-b0b4-8de70c182f16 req-640ff8fe-eed1-4778-bfa9-6d87174852c6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Refreshing instance network info cache due to event network-changed-8495b99c-f86f-4ebe-8135-5c903d896bc1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:55:11 np0005603435 nova_compute[239938]: 2026-01-31 04:55:11.222 239942 DEBUG oslo_concurrency.lockutils [req-8dfda691-720e-44b2-b0b4-8de70c182f16 req-640ff8fe-eed1-4778-bfa9-6d87174852c6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-52b7c210-2041-4375-8361-693e4d450c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:55:11 np0005603435 nova_compute[239938]: 2026-01-31 04:55:11.303 239942 DEBUG nova.network.neutron [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:55:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 350 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 18 KiB/s wr, 54 op/s
Jan 30 23:55:11 np0005603435 nova_compute[239938]: 2026-01-31 04:55:11.793 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.027 239942 DEBUG nova.network.neutron [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Updating instance_info_cache with network_info: [{"id": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "address": "fa:16:3e:42:5a:23", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8495b99c-f8", "ovs_interfaceid": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.053 239942 DEBUG oslo_concurrency.lockutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Releasing lock "refresh_cache-52b7c210-2041-4375-8361-693e4d450c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.054 239942 DEBUG nova.compute.manager [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Instance network_info: |[{"id": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "address": "fa:16:3e:42:5a:23", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8495b99c-f8", "ovs_interfaceid": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.054 239942 DEBUG oslo_concurrency.lockutils [req-8dfda691-720e-44b2-b0b4-8de70c182f16 req-640ff8fe-eed1-4778-bfa9-6d87174852c6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-52b7c210-2041-4375-8361-693e4d450c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.055 239942 DEBUG nova.network.neutron [req-8dfda691-720e-44b2-b0b4-8de70c182f16 req-640ff8fe-eed1-4778-bfa9-6d87174852c6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Refreshing network info cache for port 8495b99c-f86f-4ebe-8135-5c903d896bc1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.854 239942 DEBUG os_brick.utils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.855 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.867 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.867 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[d2eb658f-6b64-41bc-83f4-37c875e8e495]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.868 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.875 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.875 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[47d51fff-5603-46bf-8f5f-b8be26d88351]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.877 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.886 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.886 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[c1cc79e3-a2e6-43b5-9a9d-8f8e102be8d2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.887 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[cdafc533-2281-441d-9d94-85e8347fecb7]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.888 239942 DEBUG oslo_concurrency.processutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.909 239942 DEBUG oslo_concurrency.processutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.911 239942 DEBUG os_brick.initiator.connectors.lightos [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.911 239942 DEBUG os_brick.initiator.connectors.lightos [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.912 239942 DEBUG os_brick.initiator.connectors.lightos [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.912 239942 DEBUG os_brick.utils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] <== get_connector_properties: return (57ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:55:12 np0005603435 nova_compute[239938]: 2026-01-31 04:55:12.912 239942 DEBUG nova.virt.block_device [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Updating existing volume attachment record: e55b70bf-5c31-4ead-baf0-c7d73d4c7eb9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:55:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Jan 30 23:55:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Jan 30 23:55:13 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Jan 30 23:55:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 350 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 15 KiB/s wr, 99 op/s
Jan 30 23:55:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:55:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1577723119' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.581 239942 DEBUG nova.network.neutron [req-8dfda691-720e-44b2-b0b4-8de70c182f16 req-640ff8fe-eed1-4778-bfa9-6d87174852c6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Updated VIF entry in instance network info cache for port 8495b99c-f86f-4ebe-8135-5c903d896bc1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.581 239942 DEBUG nova.network.neutron [req-8dfda691-720e-44b2-b0b4-8de70c182f16 req-640ff8fe-eed1-4778-bfa9-6d87174852c6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Updating instance_info_cache with network_info: [{"id": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "address": "fa:16:3e:42:5a:23", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8495b99c-f8", "ovs_interfaceid": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.597 239942 DEBUG oslo_concurrency.lockutils [req-8dfda691-720e-44b2-b0b4-8de70c182f16 req-640ff8fe-eed1-4778-bfa9-6d87174852c6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-52b7c210-2041-4375-8361-693e4d450c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.845 239942 DEBUG nova.compute.manager [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.848 239942 DEBUG nova.virt.libvirt.driver [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.848 239942 INFO nova.virt.libvirt.driver [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Creating image(s)#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.849 239942 DEBUG nova.virt.libvirt.driver [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.849 239942 DEBUG nova.virt.libvirt.driver [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Ensure instance console log exists: /var/lib/nova/instances/52b7c210-2041-4375-8361-693e4d450c12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.850 239942 DEBUG oslo_concurrency.lockutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.850 239942 DEBUG oslo_concurrency.lockutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.850 239942 DEBUG oslo_concurrency.lockutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.853 239942 DEBUG nova.virt.libvirt.driver [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Start _get_guest_xml network_info=[{"id": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "address": "fa:16:3e:42:5a:23", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8495b99c-f8", "ovs_interfaceid": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2026-01-31T04:54:59Z,direct_url=<?>,disk_format='qcow2',id=abcc362a-746d-4429-8460-d5477e9109d0,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-698609801',owner='e332802dd6cf49c59f8ed38e70addb0e',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2026-01-31T04:55:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': True, 'attachment_id': 'e55b70bf-5c31-4ead-baf0-c7d73d4c7eb9', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-efb11444-7e28-4080-bd22-6f436b9dbf14', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'efb11444-7e28-4080-bd22-6f436b9dbf14', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '52b7c210-2041-4375-8361-693e4d450c12', 'attached_at': '', 'detached_at': '', 'volume_id': 'efb11444-7e28-4080-bd22-6f436b9dbf14', 'serial': 'efb11444-7e28-4080-bd22-6f436b9dbf14'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.859 239942 WARNING nova.virt.libvirt.driver [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.864 239942 DEBUG nova.virt.libvirt.host [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.865 239942 DEBUG nova.virt.libvirt.host [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.868 239942 DEBUG nova.virt.libvirt.host [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.868 239942 DEBUG nova.virt.libvirt.host [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.869 239942 DEBUG nova.virt.libvirt.driver [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.869 239942 DEBUG nova.virt.hardware [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2026-01-31T04:54:59Z,direct_url=<?>,disk_format='qcow2',id=abcc362a-746d-4429-8460-d5477e9109d0,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-698609801',owner='e332802dd6cf49c59f8ed38e70addb0e',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2026-01-31T04:55:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.870 239942 DEBUG nova.virt.hardware [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.870 239942 DEBUG nova.virt.hardware [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.870 239942 DEBUG nova.virt.hardware [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.871 239942 DEBUG nova.virt.hardware [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.871 239942 DEBUG nova.virt.hardware [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.871 239942 DEBUG nova.virt.hardware [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.871 239942 DEBUG nova.virt.hardware [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.872 239942 DEBUG nova.virt.hardware [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.872 239942 DEBUG nova.virt.hardware [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.872 239942 DEBUG nova.virt.hardware [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.900 239942 DEBUG nova.storage.rbd_utils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image 52b7c210-2041-4375-8361-693e4d450c12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:55:13 np0005603435 nova_compute[239938]: 2026-01-31 04:55:13.905 239942 DEBUG oslo_concurrency.processutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:55:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:55:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1508371033' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.515 239942 DEBUG oslo_concurrency.processutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.538 239942 DEBUG nova.virt.libvirt.vif [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:55:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-42036910',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-42036910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-42036910',id=19,image_ref='abcc362a-746d-4429-8460-d5477e9109d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM09tA+o/X5eAA7F61hj1SxI6ypExfxHwu84ZGI766nI5MvxnQexsz7kbcbQ7kayV4aYCWWp0LzpaRSvNR2iXXookyyAVTplj7M1+4fZNIZ0rEyvgKI3UsNIqdXjZGP7eQ==',key_name='tempest-keypair-792334322',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-srbwo0np',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1782423025',image_owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:55:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e10f13b98624406985dec6a5dcc391c7',uuid=52b7c210-2041-4375-8361-693e4d450c12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "address": "fa:16:3e:42:5a:23", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8495b99c-f8", "ovs_interfaceid": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.539 239942 DEBUG nova.network.os_vif_util [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "address": "fa:16:3e:42:5a:23", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8495b99c-f8", "ovs_interfaceid": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.539 239942 DEBUG nova.network.os_vif_util [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:5a:23,bridge_name='br-int',has_traffic_filtering=True,id=8495b99c-f86f-4ebe-8135-5c903d896bc1,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8495b99c-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.540 239942 DEBUG nova.objects.instance [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 52b7c210-2041-4375-8361-693e4d450c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.552 239942 DEBUG nova.virt.libvirt.driver [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  <uuid>52b7c210-2041-4375-8361-693e4d450c12</uuid>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  <name>instance-00000013</name>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-42036910</nova:name>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:55:13</nova:creationTime>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:        <nova:user uuid="e10f13b98624406985dec6a5dcc391c7">tempest-TestVolumeBootPattern-1782423025-project-member</nova:user>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:        <nova:project uuid="e332802dd6cf49c59f8ed38e70addb0e">tempest-TestVolumeBootPattern-1782423025</nova:project>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <nova:root type="image" uuid="abcc362a-746d-4429-8460-d5477e9109d0"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:        <nova:port uuid="8495b99c-f86f-4ebe-8135-5c903d896bc1">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <entry name="serial">52b7c210-2041-4375-8361-693e4d450c12</entry>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <entry name="uuid">52b7c210-2041-4375-8361-693e4d450c12</entry>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/52b7c210-2041-4375-8361-693e4d450c12_disk.config">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="volumes/volume-efb11444-7e28-4080-bd22-6f436b9dbf14">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <serial>efb11444-7e28-4080-bd22-6f436b9dbf14</serial>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:42:5a:23"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <target dev="tap8495b99c-f8"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/52b7c210-2041-4375-8361-693e4d450c12/console.log" append="off"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <input type="keyboard" bus="usb"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:55:14 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:55:14 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:55:14 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:55:14 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.553 239942 DEBUG nova.compute.manager [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Preparing to wait for external event network-vif-plugged-8495b99c-f86f-4ebe-8135-5c903d896bc1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.553 239942 DEBUG oslo_concurrency.lockutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "52b7c210-2041-4375-8361-693e4d450c12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.553 239942 DEBUG oslo_concurrency.lockutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "52b7c210-2041-4375-8361-693e4d450c12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.553 239942 DEBUG oslo_concurrency.lockutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "52b7c210-2041-4375-8361-693e4d450c12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.554 239942 DEBUG nova.virt.libvirt.vif [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:55:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-42036910',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-42036910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-42036910',id=19,image_ref='abcc362a-746d-4429-8460-d5477e9109d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM09tA+o/X5eAA7F61hj1SxI6ypExfxHwu84ZGI766nI5MvxnQexsz7kbcbQ7kayV4aYCWWp0LzpaRSvNR2iXXookyyAVTplj7M1+4fZNIZ0rEyvgKI3UsNIqdXjZGP7eQ==',key_name='tempest-keypair-792334322',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-srbwo0np',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1782423025',image_owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:55:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e10f13b98624406985dec6a5dcc391c7',uuid=52b7c210-2041-4375-8361-693e4d450c12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "address": "fa:16:3e:42:5a:23", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8495b99c-f8", "ovs_interfaceid": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.554 239942 DEBUG nova.network.os_vif_util [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "address": "fa:16:3e:42:5a:23", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8495b99c-f8", "ovs_interfaceid": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.555 239942 DEBUG nova.network.os_vif_util [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:5a:23,bridge_name='br-int',has_traffic_filtering=True,id=8495b99c-f86f-4ebe-8135-5c903d896bc1,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8495b99c-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.555 239942 DEBUG os_vif [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:5a:23,bridge_name='br-int',has_traffic_filtering=True,id=8495b99c-f86f-4ebe-8135-5c903d896bc1,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8495b99c-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.556 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.556 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.556 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.558 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.559 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8495b99c-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.559 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8495b99c-f8, col_values=(('external_ids', {'iface-id': '8495b99c-f86f-4ebe-8135-5c903d896bc1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:5a:23', 'vm-uuid': '52b7c210-2041-4375-8361-693e4d450c12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.560 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:14 np0005603435 NetworkManager[49097]: <info>  [1769835314.5618] manager: (tap8495b99c-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.562 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.567 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.568 239942 INFO os_vif [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:5a:23,bridge_name='br-int',has_traffic_filtering=True,id=8495b99c-f86f-4ebe-8135-5c903d896bc1,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8495b99c-f8')#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.625 239942 DEBUG nova.virt.libvirt.driver [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.625 239942 DEBUG nova.virt.libvirt.driver [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.625 239942 DEBUG nova.virt.libvirt.driver [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No VIF found with MAC fa:16:3e:42:5a:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.626 239942 INFO nova.virt.libvirt.driver [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Using config drive#033[00m
Jan 30 23:55:14 np0005603435 nova_compute[239938]: 2026-01-31 04:55:14.648 239942 DEBUG nova.storage.rbd_utils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image 52b7c210-2041-4375-8361-693e4d450c12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:55:15 np0005603435 nova_compute[239938]: 2026-01-31 04:55:15.066 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Jan 30 23:55:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Jan 30 23:55:15 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Jan 30 23:55:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 350 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 28 KiB/s wr, 106 op/s
Jan 30 23:55:15 np0005603435 nova_compute[239938]: 2026-01-31 04:55:15.474 239942 INFO nova.virt.libvirt.driver [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Creating config drive at /var/lib/nova/instances/52b7c210-2041-4375-8361-693e4d450c12/disk.config#033[00m
Jan 30 23:55:15 np0005603435 nova_compute[239938]: 2026-01-31 04:55:15.483 239942 DEBUG oslo_concurrency.processutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52b7c210-2041-4375-8361-693e4d450c12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbgrc8jt7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:55:15 np0005603435 nova_compute[239938]: 2026-01-31 04:55:15.613 239942 DEBUG oslo_concurrency.processutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52b7c210-2041-4375-8361-693e4d450c12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbgrc8jt7" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:55:15 np0005603435 nova_compute[239938]: 2026-01-31 04:55:15.647 239942 DEBUG nova.storage.rbd_utils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image 52b7c210-2041-4375-8361-693e4d450c12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:55:15 np0005603435 nova_compute[239938]: 2026-01-31 04:55:15.653 239942 DEBUG oslo_concurrency.processutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52b7c210-2041-4375-8361-693e4d450c12/disk.config 52b7c210-2041-4375-8361-693e4d450c12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:55:15 np0005603435 nova_compute[239938]: 2026-01-31 04:55:15.828 239942 DEBUG oslo_concurrency.processutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52b7c210-2041-4375-8361-693e4d450c12/disk.config 52b7c210-2041-4375-8361-693e4d450c12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:55:15 np0005603435 nova_compute[239938]: 2026-01-31 04:55:15.829 239942 INFO nova.virt.libvirt.driver [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Deleting local config drive /var/lib/nova/instances/52b7c210-2041-4375-8361-693e4d450c12/disk.config because it was imported into RBD.#033[00m
Jan 30 23:55:15 np0005603435 kernel: tap8495b99c-f8: entered promiscuous mode
Jan 30 23:55:15 np0005603435 NetworkManager[49097]: <info>  [1769835315.8850] manager: (tap8495b99c-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Jan 30 23:55:15 np0005603435 nova_compute[239938]: 2026-01-31 04:55:15.887 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:15 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:15Z|00185|binding|INFO|Claiming lport 8495b99c-f86f-4ebe-8135-5c903d896bc1 for this chassis.
Jan 30 23:55:15 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:15Z|00186|binding|INFO|8495b99c-f86f-4ebe-8135-5c903d896bc1: Claiming fa:16:3e:42:5a:23 10.100.0.11
Jan 30 23:55:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:15.895 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:5a:23 10.100.0.11'], port_security=['fa:16:3e:42:5a:23 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '52b7c210-2041-4375-8361-693e4d450c12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '553f37d1-f94c-4459-b208-0a6d3389632b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02525b3d-5afa-441f-ab06-d5abe31dc4af, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=8495b99c-f86f-4ebe-8135-5c903d896bc1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:55:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:15.897 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 8495b99c-f86f-4ebe-8135-5c903d896bc1 in datapath 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 bound to our chassis#033[00m
Jan 30 23:55:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:15.900 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3#033[00m
Jan 30 23:55:15 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:15Z|00187|binding|INFO|Setting lport 8495b99c-f86f-4ebe-8135-5c903d896bc1 ovn-installed in OVS
Jan 30 23:55:15 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:15Z|00188|binding|INFO|Setting lport 8495b99c-f86f-4ebe-8135-5c903d896bc1 up in Southbound
Jan 30 23:55:15 np0005603435 nova_compute[239938]: 2026-01-31 04:55:15.907 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:15 np0005603435 nova_compute[239938]: 2026-01-31 04:55:15.915 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:15.916 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[04dc5e36-b08e-43cc-86f5-25a9f84dde67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:15 np0005603435 systemd-machined[208030]: New machine qemu-19-instance-00000013.
Jan 30 23:55:15 np0005603435 systemd-udevd[264512]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:55:15 np0005603435 NetworkManager[49097]: <info>  [1769835315.9408] device (tap8495b99c-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:55:15 np0005603435 NetworkManager[49097]: <info>  [1769835315.9416] device (tap8495b99c-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:55:15 np0005603435 systemd[1]: Started Virtual Machine qemu-19-instance-00000013.
Jan 30 23:55:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:15.949 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[da7e895a-d260-43cf-a563-514a59020859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:15.956 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[f39427da-83c1-4456-ac97-ba1cb76da916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:15.978 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[1061be97-6e86-4c1f-9060-fc50afea2708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:15 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:15.996 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c7eb58fe-43fa-4cc3-adde-ce36ff694785]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b0cf2db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:f7:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431748, 'reachable_time': 30176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264520, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:16 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:16.014 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb20b07-43dd-4ac4-8f5c-bc7a68fa72dc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5b0cf2db-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431757, 'tstamp': 431757}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264524, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5b0cf2db-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431760, 'tstamp': 431760}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264524, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:16 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:16.016 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b0cf2db-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.018 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.019 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:16 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:16.023 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b0cf2db-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:55:16 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:16.023 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:55:16 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:16.024 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b0cf2db-20, col_values=(('external_ids', {'iface-id': '07e657c3-16d2-4095-9f39-32a275cb472e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:55:16 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:16.025 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.328 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835316.327659, 52b7c210-2041-4375-8361-693e4d450c12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.329 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 52b7c210-2041-4375-8361-693e4d450c12] VM Started (Lifecycle Event)#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.354 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.358 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835316.3279512, 52b7c210-2041-4375-8361-693e4d450c12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.359 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 52b7c210-2041-4375-8361-693e4d450c12] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.378 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.383 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.405 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 52b7c210-2041-4375-8361-693e4d450c12] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.410 239942 DEBUG nova.compute.manager [req-35e31319-fd4e-4723-b21f-aca65c9bb3e6 req-23070547-af64-45f4-9458-e8eda672444d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Received event network-vif-plugged-8495b99c-f86f-4ebe-8135-5c903d896bc1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.411 239942 DEBUG oslo_concurrency.lockutils [req-35e31319-fd4e-4723-b21f-aca65c9bb3e6 req-23070547-af64-45f4-9458-e8eda672444d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "52b7c210-2041-4375-8361-693e4d450c12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.411 239942 DEBUG oslo_concurrency.lockutils [req-35e31319-fd4e-4723-b21f-aca65c9bb3e6 req-23070547-af64-45f4-9458-e8eda672444d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "52b7c210-2041-4375-8361-693e4d450c12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.412 239942 DEBUG oslo_concurrency.lockutils [req-35e31319-fd4e-4723-b21f-aca65c9bb3e6 req-23070547-af64-45f4-9458-e8eda672444d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "52b7c210-2041-4375-8361-693e4d450c12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.412 239942 DEBUG nova.compute.manager [req-35e31319-fd4e-4723-b21f-aca65c9bb3e6 req-23070547-af64-45f4-9458-e8eda672444d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Processing event network-vif-plugged-8495b99c-f86f-4ebe-8135-5c903d896bc1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.414 239942 DEBUG nova.compute.manager [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.418 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835316.4184666, 52b7c210-2041-4375-8361-693e4d450c12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.419 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 52b7c210-2041-4375-8361-693e4d450c12] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.421 239942 DEBUG nova.virt.libvirt.driver [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.424 239942 INFO nova.virt.libvirt.driver [-] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Instance spawned successfully.#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.425 239942 INFO nova.compute.manager [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Took 2.58 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.425 239942 DEBUG nova.compute.manager [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.437 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.441 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.469 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 52b7c210-2041-4375-8361-693e4d450c12] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.509 239942 INFO nova.compute.manager [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Took 8.67 seconds to build instance.#033[00m
Jan 30 23:55:16 np0005603435 nova_compute[239938]: 2026-01-31 04:55:16.530 239942 DEBUG oslo_concurrency.lockutils [None req-1a3c078f-85cb-41e6-a17f-6856c5a1174a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "52b7c210-2041-4375-8361-693e4d450c12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Jan 30 23:55:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Jan 30 23:55:17 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.099310388879912e-05 of space, bias 1.0, pg target 0.003297931166639736 quantized to 32 (current 32)
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0036585782898072424 of space, bias 1.0, pg target 1.0975734869421727 quantized to 32 (current 32)
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.517964222937766e-06 of space, bias 1.0, pg target 0.000453871302658392 quantized to 32 (current 32)
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006668614853699164 of space, bias 1.0, pg target 0.199391584125605 quantized to 32 (current 32)
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.915644726235008e-07 of space, bias 4.0, pg target 0.000827111109257707 quantized to 16 (current 16)
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Jan 30 23:55:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 350 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 58 KiB/s wr, 161 op/s
Jan 30 23:55:17 np0005603435 nova_compute[239938]: 2026-01-31 04:55:17.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:55:18 np0005603435 nova_compute[239938]: 2026-01-31 04:55:18.058 239942 DEBUG nova.compute.manager [req-dabbda92-4054-4ec9-b487-4ba7dca04adb req-222fcb26-de21-439f-a9e8-998d76e21457 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Received event network-changed-8495b99c-f86f-4ebe-8135-5c903d896bc1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:55:18 np0005603435 nova_compute[239938]: 2026-01-31 04:55:18.058 239942 DEBUG nova.compute.manager [req-dabbda92-4054-4ec9-b487-4ba7dca04adb req-222fcb26-de21-439f-a9e8-998d76e21457 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Refreshing instance network info cache due to event network-changed-8495b99c-f86f-4ebe-8135-5c903d896bc1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:55:18 np0005603435 nova_compute[239938]: 2026-01-31 04:55:18.059 239942 DEBUG oslo_concurrency.lockutils [req-dabbda92-4054-4ec9-b487-4ba7dca04adb req-222fcb26-de21-439f-a9e8-998d76e21457 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-52b7c210-2041-4375-8361-693e4d450c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:55:18 np0005603435 nova_compute[239938]: 2026-01-31 04:55:18.059 239942 DEBUG oslo_concurrency.lockutils [req-dabbda92-4054-4ec9-b487-4ba7dca04adb req-222fcb26-de21-439f-a9e8-998d76e21457 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-52b7c210-2041-4375-8361-693e4d450c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:55:18 np0005603435 nova_compute[239938]: 2026-01-31 04:55:18.059 239942 DEBUG nova.network.neutron [req-dabbda92-4054-4ec9-b487-4ba7dca04adb req-222fcb26-de21-439f-a9e8-998d76e21457 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Refreshing network info cache for port 8495b99c-f86f-4ebe-8135-5c903d896bc1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:55:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:55:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Jan 30 23:55:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Jan 30 23:55:18 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Jan 30 23:55:18 np0005603435 nova_compute[239938]: 2026-01-31 04:55:18.486 239942 DEBUG nova.compute.manager [req-b0fb5edf-bf8e-41d7-9955-6ec9f75b25f8 req-88c811f0-9dd0-4be1-9541-c78ec0099e19 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Received event network-vif-plugged-8495b99c-f86f-4ebe-8135-5c903d896bc1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:55:18 np0005603435 nova_compute[239938]: 2026-01-31 04:55:18.487 239942 DEBUG oslo_concurrency.lockutils [req-b0fb5edf-bf8e-41d7-9955-6ec9f75b25f8 req-88c811f0-9dd0-4be1-9541-c78ec0099e19 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "52b7c210-2041-4375-8361-693e4d450c12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:18 np0005603435 nova_compute[239938]: 2026-01-31 04:55:18.487 239942 DEBUG oslo_concurrency.lockutils [req-b0fb5edf-bf8e-41d7-9955-6ec9f75b25f8 req-88c811f0-9dd0-4be1-9541-c78ec0099e19 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "52b7c210-2041-4375-8361-693e4d450c12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:18 np0005603435 nova_compute[239938]: 2026-01-31 04:55:18.488 239942 DEBUG oslo_concurrency.lockutils [req-b0fb5edf-bf8e-41d7-9955-6ec9f75b25f8 req-88c811f0-9dd0-4be1-9541-c78ec0099e19 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "52b7c210-2041-4375-8361-693e4d450c12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:18 np0005603435 nova_compute[239938]: 2026-01-31 04:55:18.488 239942 DEBUG nova.compute.manager [req-b0fb5edf-bf8e-41d7-9955-6ec9f75b25f8 req-88c811f0-9dd0-4be1-9541-c78ec0099e19 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] No waiting events found dispatching network-vif-plugged-8495b99c-f86f-4ebe-8135-5c903d896bc1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:55:18 np0005603435 nova_compute[239938]: 2026-01-31 04:55:18.488 239942 WARNING nova.compute.manager [req-b0fb5edf-bf8e-41d7-9955-6ec9f75b25f8 req-88c811f0-9dd0-4be1-9541-c78ec0099e19 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Received unexpected event network-vif-plugged-8495b99c-f86f-4ebe-8135-5c903d896bc1 for instance with vm_state active and task_state None.#033[00m
Jan 30 23:55:18 np0005603435 nova_compute[239938]: 2026-01-31 04:55:18.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:55:19 np0005603435 nova_compute[239938]: 2026-01-31 04:55:19.104 239942 DEBUG nova.network.neutron [req-dabbda92-4054-4ec9-b487-4ba7dca04adb req-222fcb26-de21-439f-a9e8-998d76e21457 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Updated VIF entry in instance network info cache for port 8495b99c-f86f-4ebe-8135-5c903d896bc1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:55:19 np0005603435 nova_compute[239938]: 2026-01-31 04:55:19.104 239942 DEBUG nova.network.neutron [req-dabbda92-4054-4ec9-b487-4ba7dca04adb req-222fcb26-de21-439f-a9e8-998d76e21457 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Updating instance_info_cache with network_info: [{"id": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "address": "fa:16:3e:42:5a:23", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8495b99c-f8", "ovs_interfaceid": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:55:19 np0005603435 nova_compute[239938]: 2026-01-31 04:55:19.126 239942 DEBUG oslo_concurrency.lockutils [req-dabbda92-4054-4ec9-b487-4ba7dca04adb req-222fcb26-de21-439f-a9e8-998d76e21457 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-52b7c210-2041-4375-8361-693e4d450c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:55:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Jan 30 23:55:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Jan 30 23:55:19 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Jan 30 23:55:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 350 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 465 KiB/s rd, 75 KiB/s wr, 108 op/s
Jan 30 23:55:19 np0005603435 nova_compute[239938]: 2026-01-31 04:55:19.560 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:19 np0005603435 nova_compute[239938]: 2026-01-31 04:55:19.883 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:55:19 np0005603435 nova_compute[239938]: 2026-01-31 04:55:19.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:55:19 np0005603435 nova_compute[239938]: 2026-01-31 04:55:19.886 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:55:20 np0005603435 nova_compute[239938]: 2026-01-31 04:55:20.068 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Jan 30 23:55:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Jan 30 23:55:20 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Jan 30 23:55:20 np0005603435 nova_compute[239938]: 2026-01-31 04:55:20.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:55:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Jan 30 23:55:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Jan 30 23:55:21 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.419 239942 DEBUG oslo_concurrency.lockutils [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "7556d66b-f5c2-4050-9684-0e513ae8c697" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.420 239942 DEBUG oslo_concurrency.lockutils [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "7556d66b-f5c2-4050-9684-0e513ae8c697" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.421 239942 DEBUG oslo_concurrency.lockutils [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.421 239942 DEBUG oslo_concurrency.lockutils [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.421 239942 DEBUG oslo_concurrency.lockutils [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.423 239942 INFO nova.compute.manager [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Terminating instance#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.424 239942 DEBUG nova.compute.manager [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:55:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 350 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1023 B/s wr, 104 op/s
Jan 30 23:55:21 np0005603435 kernel: tap1df8885b-d7 (unregistering): left promiscuous mode
Jan 30 23:55:21 np0005603435 NetworkManager[49097]: <info>  [1769835321.4775] device (tap1df8885b-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:55:21 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:21Z|00189|binding|INFO|Releasing lport 1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 from this chassis (sb_readonly=0)
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.484 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:21 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:21Z|00190|binding|INFO|Setting lport 1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 down in Southbound
Jan 30 23:55:21 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:21Z|00191|binding|INFO|Removing iface tap1df8885b-d7 ovn-installed in OVS
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.491 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:21.502 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:64:9e 10.100.0.10'], port_security=['fa:16:3e:8e:64:9e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7556d66b-f5c2-4050-9684-0e513ae8c697', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a10d9666-b672-4619-83b7-22dc781b5b5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b39f0e168b54a4b8f976894d21361e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ff571068-2221-49e0-84fe-8c4b85bf5ac6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21f14c68-4084-427c-b05e-592b1db029c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:55:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:21.505 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 in datapath a10d9666-b672-4619-83b7-22dc781b5b5b unbound from our chassis#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.507 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:21.512 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a10d9666-b672-4619-83b7-22dc781b5b5b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:55:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:21.514 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[371d8c4c-5039-4854-b685-0681351a6f52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:21.520 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b namespace which is not needed anymore#033[00m
Jan 30 23:55:21 np0005603435 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Jan 30 23:55:21 np0005603435 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 14.727s CPU time.
Jan 30 23:55:21 np0005603435 systemd-machined[208030]: Machine qemu-18-instance-00000012 terminated.
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.642 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.648 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.659 239942 INFO nova.virt.libvirt.driver [-] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Instance destroyed successfully.#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.660 239942 DEBUG nova.objects.instance [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lazy-loading 'resources' on Instance uuid 7556d66b-f5c2-4050-9684-0e513ae8c697 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:55:21 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[264042]: [NOTICE]   (264046) : haproxy version is 2.8.14-c23fe91
Jan 30 23:55:21 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[264042]: [NOTICE]   (264046) : path to executable is /usr/sbin/haproxy
Jan 30 23:55:21 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[264042]: [WARNING]  (264046) : Exiting Master process...
Jan 30 23:55:21 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[264042]: [ALERT]    (264046) : Current worker (264048) exited with code 143 (Terminated)
Jan 30 23:55:21 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[264042]: [WARNING]  (264046) : All workers exited. Exiting... (0)
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.685 239942 DEBUG nova.virt.libvirt.vif [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:54:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1259685471',display_name='tempest-TransferEncryptedVolumeTest-server-1259685471',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1259685471',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB828SO4KCiS/c6FYV17F5UX+BLYIRAc4CyTZA4fXDNG/eieZI8ChuIejzpTuF2CfgKMQEbMYMZVWf9xnEOSXNVsZsXIi11a3wsxGw0mmNb26j9vmggnToYyQthSze7emg==',key_name='tempest-TransferEncryptedVolumeTest-938095670',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:54:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9b39f0e168b54a4b8f976894d21361e6',ramdisk_id='',reservation_id='r-6hv1a9ii',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-483286292',owner_user_name='tempest-TransferEncryptedVolumeTest-483286292-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:54:45Z,user_data=None,user_id='27f1a6fb472c4c5fa2286d0fa48dca34',uuid=7556d66b-f5c2-4050-9684-0e513ae8c697,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "address": "fa:16:3e:8e:64:9e", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1df8885b-d7", "ovs_interfaceid": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.686 239942 DEBUG nova.network.os_vif_util [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converting VIF {"id": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "address": "fa:16:3e:8e:64:9e", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1df8885b-d7", "ovs_interfaceid": "1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.687 239942 DEBUG nova.network.os_vif_util [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:64:9e,bridge_name='br-int',has_traffic_filtering=True,id=1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1df8885b-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.687 239942 DEBUG os_vif [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:64:9e,bridge_name='br-int',has_traffic_filtering=True,id=1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1df8885b-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.690 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.690 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1df8885b-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:55:21 np0005603435 systemd[1]: libpod-b91f7dcc82e9880cee70b8817ef6f49ef2fdb0ac6d678d3a841168a0b0d508fc.scope: Deactivated successfully.
Jan 30 23:55:21 np0005603435 podman[264591]: 2026-01-31 04:55:21.691520886 +0000 UTC m=+0.070023830 container stop b91f7dcc82e9880cee70b8817ef6f49ef2fdb0ac6d678d3a841168a0b0d508fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.694 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.697 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.700 239942 INFO os_vif [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:64:9e,bridge_name='br-int',has_traffic_filtering=True,id=1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1df8885b-d7')#033[00m
Jan 30 23:55:21 np0005603435 podman[264591]: 2026-01-31 04:55:21.72142018 +0000 UTC m=+0.099923114 container died b91f7dcc82e9880cee70b8817ef6f49ef2fdb0ac6d678d3a841168a0b0d508fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 30 23:55:21 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b91f7dcc82e9880cee70b8817ef6f49ef2fdb0ac6d678d3a841168a0b0d508fc-userdata-shm.mount: Deactivated successfully.
Jan 30 23:55:21 np0005603435 systemd[1]: var-lib-containers-storage-overlay-13bf259a8369c763ae16107b7e1af46eb14f7900586702e98e9ec5bcf7e14484-merged.mount: Deactivated successfully.
Jan 30 23:55:21 np0005603435 podman[264591]: 2026-01-31 04:55:21.773979141 +0000 UTC m=+0.152482095 container cleanup b91f7dcc82e9880cee70b8817ef6f49ef2fdb0ac6d678d3a841168a0b0d508fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:55:21 np0005603435 systemd[1]: libpod-conmon-b91f7dcc82e9880cee70b8817ef6f49ef2fdb0ac6d678d3a841168a0b0d508fc.scope: Deactivated successfully.
Jan 30 23:55:21 np0005603435 podman[264649]: 2026-01-31 04:55:21.843090897 +0000 UTC m=+0.047527818 container remove b91f7dcc82e9880cee70b8817ef6f49ef2fdb0ac6d678d3a841168a0b0d508fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:55:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:21.848 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a0727f-561f-4b04-ba32-7ad1f9116ee6]: (4, ('Sat Jan 31 04:55:21 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b (b91f7dcc82e9880cee70b8817ef6f49ef2fdb0ac6d678d3a841168a0b0d508fc)\nb91f7dcc82e9880cee70b8817ef6f49ef2fdb0ac6d678d3a841168a0b0d508fc\nSat Jan 31 04:55:21 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b (b91f7dcc82e9880cee70b8817ef6f49ef2fdb0ac6d678d3a841168a0b0d508fc)\nb91f7dcc82e9880cee70b8817ef6f49ef2fdb0ac6d678d3a841168a0b0d508fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:21.851 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca43c26-003f-4fce-b7fe-e16da9a52ec6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:21.852 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa10d9666-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.854 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:21 np0005603435 kernel: tapa10d9666-b0: left promiscuous mode
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.862 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:21.865 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[3b716d45-b12b-48ba-92f4-ec6d5ec776f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.876 239942 INFO nova.virt.libvirt.driver [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Deleting instance files /var/lib/nova/instances/7556d66b-f5c2-4050-9684-0e513ae8c697_del#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.877 239942 INFO nova.virt.libvirt.driver [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Deletion of /var/lib/nova/instances/7556d66b-f5c2-4050-9684-0e513ae8c697_del complete#033[00m
Jan 30 23:55:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:21.881 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9c762b39-1338-4d11-a813-76af6df9f4ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:21.882 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[ec39afe2-b565-4972-b1f2-a0cad208ea88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:21.894 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2c707cf8-c8fe-4b6d-a5f6-781ed1cf1d16]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433699, 'reachable_time': 26060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264667, 'error': None, 'target': 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:21 np0005603435 systemd[1]: run-netns-ovnmeta\x2da10d9666\x2db672\x2d4619\x2d83b7\x2d22dc781b5b5b.mount: Deactivated successfully.
Jan 30 23:55:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:21.897 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:55:21 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:21.897 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[efb34a80-766a-45f3-9d9b-ba5c4725f9b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.957 239942 INFO nova.compute.manager [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Took 0.53 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.957 239942 DEBUG oslo.service.loopingcall [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.957 239942 DEBUG nova.compute.manager [-] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:55:21 np0005603435 nova_compute[239938]: 2026-01-31 04:55:21.958 239942 DEBUG nova.network.neutron [-] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:55:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Jan 30 23:55:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Jan 30 23:55:22 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Jan 30 23:55:22 np0005603435 nova_compute[239938]: 2026-01-31 04:55:22.539 239942 DEBUG nova.compute.manager [req-67556cfa-ec91-4540-ad70-55dc0ef7e588 req-3eec0db5-d830-4e95-8ed1-dfa57dcaf78d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Received event network-vif-unplugged-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:55:22 np0005603435 nova_compute[239938]: 2026-01-31 04:55:22.540 239942 DEBUG oslo_concurrency.lockutils [req-67556cfa-ec91-4540-ad70-55dc0ef7e588 req-3eec0db5-d830-4e95-8ed1-dfa57dcaf78d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:22 np0005603435 nova_compute[239938]: 2026-01-31 04:55:22.540 239942 DEBUG oslo_concurrency.lockutils [req-67556cfa-ec91-4540-ad70-55dc0ef7e588 req-3eec0db5-d830-4e95-8ed1-dfa57dcaf78d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:22 np0005603435 nova_compute[239938]: 2026-01-31 04:55:22.540 239942 DEBUG oslo_concurrency.lockutils [req-67556cfa-ec91-4540-ad70-55dc0ef7e588 req-3eec0db5-d830-4e95-8ed1-dfa57dcaf78d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:22 np0005603435 nova_compute[239938]: 2026-01-31 04:55:22.540 239942 DEBUG nova.compute.manager [req-67556cfa-ec91-4540-ad70-55dc0ef7e588 req-3eec0db5-d830-4e95-8ed1-dfa57dcaf78d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] No waiting events found dispatching network-vif-unplugged-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:55:22 np0005603435 nova_compute[239938]: 2026-01-31 04:55:22.541 239942 DEBUG nova.compute.manager [req-67556cfa-ec91-4540-ad70-55dc0ef7e588 req-3eec0db5-d830-4e95-8ed1-dfa57dcaf78d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Received event network-vif-unplugged-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:55:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:55:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1080062363' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:55:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:55:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1080062363' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:55:22 np0005603435 nova_compute[239938]: 2026-01-31 04:55:22.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:55:22 np0005603435 nova_compute[239938]: 2026-01-31 04:55:22.918 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:22 np0005603435 nova_compute[239938]: 2026-01-31 04:55:22.918 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:22 np0005603435 nova_compute[239938]: 2026-01-31 04:55:22.918 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:22 np0005603435 nova_compute[239938]: 2026-01-31 04:55:22.919 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:55:22 np0005603435 nova_compute[239938]: 2026-01-31 04:55:22.919 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:55:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:55:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Jan 30 23:55:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:23.087 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:55:23 np0005603435 nova_compute[239938]: 2026-01-31 04:55:23.087 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:23.088 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:55:23 np0005603435 nova_compute[239938]: 2026-01-31 04:55:23.122 239942 DEBUG nova.network.neutron [-] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:55:23 np0005603435 nova_compute[239938]: 2026-01-31 04:55:23.148 239942 INFO nova.compute.manager [-] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Took 1.19 seconds to deallocate network for instance.#033[00m
Jan 30 23:55:23 np0005603435 nova_compute[239938]: 2026-01-31 04:55:23.214 239942 DEBUG nova.compute.manager [req-54a0ea0e-3ced-4069-a304-cba3ce45210f req-c3d31727-6e81-41d6-8c09-85090b9b4c4e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Received event network-vif-deleted-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:55:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Jan 30 23:55:23 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Jan 30 23:55:23 np0005603435 nova_compute[239938]: 2026-01-31 04:55:23.356 239942 INFO nova.compute.manager [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Took 0.21 seconds to detach 1 volumes for instance.#033[00m
Jan 30 23:55:23 np0005603435 nova_compute[239938]: 2026-01-31 04:55:23.399 239942 DEBUG oslo_concurrency.lockutils [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:23 np0005603435 nova_compute[239938]: 2026-01-31 04:55:23.400 239942 DEBUG oslo_concurrency.lockutils [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 11 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 288 active+clean; 350 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 35 KiB/s wr, 369 op/s
Jan 30 23:55:23 np0005603435 nova_compute[239938]: 2026-01-31 04:55:23.483 239942 DEBUG oslo_concurrency.processutils [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:55:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:55:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3506839771' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:55:23 np0005603435 nova_compute[239938]: 2026-01-31 04:55:23.534 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:55:23 np0005603435 nova_compute[239938]: 2026-01-31 04:55:23.605 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:55:23 np0005603435 nova_compute[239938]: 2026-01-31 04:55:23.606 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:55:23 np0005603435 nova_compute[239938]: 2026-01-31 04:55:23.609 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:55:23 np0005603435 nova_compute[239938]: 2026-01-31 04:55:23.609 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:55:23 np0005603435 nova_compute[239938]: 2026-01-31 04:55:23.729 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:55:23 np0005603435 nova_compute[239938]: 2026-01-31 04:55:23.731 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4088MB free_disk=59.98745105974376GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:55:23 np0005603435 nova_compute[239938]: 2026-01-31 04:55:23.731 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:55:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332589826' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.017 239942 DEBUG oslo_concurrency.processutils [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.023 239942 DEBUG nova.compute.provider_tree [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.044 239942 DEBUG nova.scheduler.client.report [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.069 239942 DEBUG oslo_concurrency.lockutils [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.073 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.108 239942 INFO nova.scheduler.client.report [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Deleted allocations for instance 7556d66b-f5c2-4050-9684-0e513ae8c697#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.149 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance a7e679f6-843b-49b7-8455-d5ed363e1b37 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.149 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance 52b7c210-2041-4375-8361-693e4d450c12 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.150 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.150 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.190 239942 DEBUG oslo_concurrency.lockutils [None req-b0bdd9dd-e502-4b46-a2de-871fa7f9532e 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "7556d66b-f5c2-4050-9684-0e513ae8c697" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.210 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:55:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Jan 30 23:55:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Jan 30 23:55:24 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.629 239942 DEBUG nova.compute.manager [req-00e1744f-40d1-441d-b2fa-abefe77abe1b req-8016abfb-9610-4ff4-967a-1c0555da8eb4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Received event network-vif-plugged-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.629 239942 DEBUG oslo_concurrency.lockutils [req-00e1744f-40d1-441d-b2fa-abefe77abe1b req-8016abfb-9610-4ff4-967a-1c0555da8eb4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.630 239942 DEBUG oslo_concurrency.lockutils [req-00e1744f-40d1-441d-b2fa-abefe77abe1b req-8016abfb-9610-4ff4-967a-1c0555da8eb4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.630 239942 DEBUG oslo_concurrency.lockutils [req-00e1744f-40d1-441d-b2fa-abefe77abe1b req-8016abfb-9610-4ff4-967a-1c0555da8eb4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "7556d66b-f5c2-4050-9684-0e513ae8c697-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.630 239942 DEBUG nova.compute.manager [req-00e1744f-40d1-441d-b2fa-abefe77abe1b req-8016abfb-9610-4ff4-967a-1c0555da8eb4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] No waiting events found dispatching network-vif-plugged-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.631 239942 WARNING nova.compute.manager [req-00e1744f-40d1-441d-b2fa-abefe77abe1b req-8016abfb-9610-4ff4-967a-1c0555da8eb4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Received unexpected event network-vif-plugged-1df8885b-d7c2-4ee3-b5f8-ed1ff86d5ea3 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:55:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:55:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/61860174' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.737 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.743 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.761 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.801 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:55:24 np0005603435 nova_compute[239938]: 2026-01-31 04:55:24.802 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:55:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2670323619' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:55:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:55:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2670323619' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:55:25 np0005603435 nova_compute[239938]: 2026-01-31 04:55:25.071 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Jan 30 23:55:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Jan 30 23:55:25 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Jan 30 23:55:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 8 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 293 active+clean; 350 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 34 KiB/s wr, 274 op/s
Jan 30 23:55:25 np0005603435 nova_compute[239938]: 2026-01-31 04:55:25.802 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:55:25 np0005603435 nova_compute[239938]: 2026-01-31 04:55:25.802 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:55:25 np0005603435 nova_compute[239938]: 2026-01-31 04:55:25.802 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:55:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:55:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/775605635' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:55:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:55:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/775605635' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:55:25 np0005603435 nova_compute[239938]: 2026-01-31 04:55:25.989 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "refresh_cache-a7e679f6-843b-49b7-8455-d5ed363e1b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:55:25 np0005603435 nova_compute[239938]: 2026-01-31 04:55:25.990 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquired lock "refresh_cache-a7e679f6-843b-49b7-8455-d5ed363e1b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:55:25 np0005603435 nova_compute[239938]: 2026-01-31 04:55:25.990 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 30 23:55:25 np0005603435 nova_compute[239938]: 2026-01-31 04:55:25.991 239942 DEBUG nova.objects.instance [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a7e679f6-843b-49b7-8455-d5ed363e1b37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:55:26 np0005603435 nova_compute[239938]: 2026-01-31 04:55:26.695 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 355 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 678 KiB/s wr, 314 op/s
Jan 30 23:55:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:55:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2779779731' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:55:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:55:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2779779731' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:55:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:55:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Jan 30 23:55:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Jan 30 23:55:28 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Jan 30 23:55:28 np0005603435 nova_compute[239938]: 2026-01-31 04:55:28.214 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Updating instance_info_cache with network_info: [{"id": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "address": "fa:16:3e:c0:7f:92", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bfb8d4f-c1", "ovs_interfaceid": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:55:28 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:28Z|00036|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.11
Jan 30 23:55:28 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:28Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:42:5a:23 10.100.0.11
Jan 30 23:55:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 355 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 935 KiB/s rd, 572 KiB/s wr, 153 op/s
Jan 30 23:55:29 np0005603435 nova_compute[239938]: 2026-01-31 04:55:29.688 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Releasing lock "refresh_cache-a7e679f6-843b-49b7-8455-d5ed363e1b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:55:29 np0005603435 nova_compute[239938]: 2026-01-31 04:55:29.690 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 30 23:55:29 np0005603435 nova_compute[239938]: 2026-01-31 04:55:29.690 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:55:29 np0005603435 nova_compute[239938]: 2026-01-31 04:55:29.691 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:55:30 np0005603435 nova_compute[239938]: 2026-01-31 04:55:30.074 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:30 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:30.090 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:55:30 np0005603435 podman[264735]: 2026-01-31 04:55:30.116419826 +0000 UTC m=+0.081264236 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:55:30 np0005603435 podman[264736]: 2026-01-31 04:55:30.215054917 +0000 UTC m=+0.180087172 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Jan 30 23:55:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 300 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 851 KiB/s wr, 166 op/s
Jan 30 23:55:31 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:31Z|00038|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.11
Jan 30 23:55:31 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:31Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:42:5a:23 10.100.0.11
Jan 30 23:55:31 np0005603435 nova_compute[239938]: 2026-01-31 04:55:31.697 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:55:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1850810330' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:55:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:55:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1850810330' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:55:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:55:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Jan 30 23:55:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Jan 30 23:55:33 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Jan 30 23:55:33 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:33Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:42:5a:23 10.100.0.11
Jan 30 23:55:33 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:33Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:42:5a:23 10.100.0.11
Jan 30 23:55:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 182 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 749 KiB/s wr, 191 op/s
Jan 30 23:55:35 np0005603435 nova_compute[239938]: 2026-01-31 04:55:35.077 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 182 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 336 KiB/s wr, 84 op/s
Jan 30 23:55:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:55:36 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2763159688' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:55:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:55:36 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2763159688' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:55:36 np0005603435 nova_compute[239938]: 2026-01-31 04:55:36.656 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835321.6549845, 7556d66b-f5c2-4050-9684-0e513ae8c697 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:55:36 np0005603435 nova_compute[239938]: 2026-01-31 04:55:36.657 239942 INFO nova.compute.manager [-] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:55:36 np0005603435 nova_compute[239938]: 2026-01-31 04:55:36.700 239942 DEBUG nova.compute.manager [None req-a75daea7-6256-45c4-9b45-3d81f0381950 - - - - - -] [instance: 7556d66b-f5c2-4050-9684-0e513ae8c697] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:55:36 np0005603435 nova_compute[239938]: 2026-01-31 04:55:36.700 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:55:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:55:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:55:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:55:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:55:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:55:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:55:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3137462781' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:55:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:55:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3137462781' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:55:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 185 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 340 KiB/s wr, 100 op/s
Jan 30 23:55:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:55:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 185 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 950 KiB/s rd, 314 KiB/s wr, 92 op/s
Jan 30 23:55:40 np0005603435 nova_compute[239938]: 2026-01-31 04:55:40.118 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:55:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3886280211' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:55:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:55:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3886280211' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:55:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 185 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 799 KiB/s rd, 64 KiB/s wr, 70 op/s
Jan 30 23:55:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:55:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/681809116' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:55:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:55:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/681809116' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:55:41 np0005603435 nova_compute[239938]: 2026-01-31 04:55:41.702 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:55:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1579149506' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:55:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:55:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1579149506' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:55:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:55:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 185 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 62 KiB/s wr, 67 op/s
Jan 30 23:55:45 np0005603435 nova_compute[239938]: 2026-01-31 04:55:45.164 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 185 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 55 KiB/s wr, 80 op/s
Jan 30 23:55:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:55:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:55:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:55:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:55:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:55:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:55:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:55:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:55:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:55:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:55:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:55:46 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:55:46 np0005603435 podman[264920]: 2026-01-31 04:55:46.473533311 +0000 UTC m=+0.056719584 container create 2c2e4a7a13a2e89e1b0aeef20b583d38b7e305f29c20bb1ff9ca317a7e9b72d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:55:46 np0005603435 systemd[1]: Started libpod-conmon-2c2e4a7a13a2e89e1b0aeef20b583d38b7e305f29c20bb1ff9ca317a7e9b72d1.scope.
Jan 30 23:55:46 np0005603435 podman[264920]: 2026-01-31 04:55:46.447296867 +0000 UTC m=+0.030483110 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:55:46 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:55:46 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:55:46 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:55:46 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:55:46 np0005603435 podman[264920]: 2026-01-31 04:55:46.569187369 +0000 UTC m=+0.152373692 container init 2c2e4a7a13a2e89e1b0aeef20b583d38b7e305f29c20bb1ff9ca317a7e9b72d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 30 23:55:46 np0005603435 podman[264920]: 2026-01-31 04:55:46.57816152 +0000 UTC m=+0.161347803 container start 2c2e4a7a13a2e89e1b0aeef20b583d38b7e305f29c20bb1ff9ca317a7e9b72d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bartik, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:55:46 np0005603435 podman[264920]: 2026-01-31 04:55:46.582603589 +0000 UTC m=+0.165789872 container attach 2c2e4a7a13a2e89e1b0aeef20b583d38b7e305f29c20bb1ff9ca317a7e9b72d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:55:46 np0005603435 eloquent_bartik[264937]: 167 167
Jan 30 23:55:46 np0005603435 systemd[1]: libpod-2c2e4a7a13a2e89e1b0aeef20b583d38b7e305f29c20bb1ff9ca317a7e9b72d1.scope: Deactivated successfully.
Jan 30 23:55:46 np0005603435 podman[264920]: 2026-01-31 04:55:46.586640178 +0000 UTC m=+0.169826441 container died 2c2e4a7a13a2e89e1b0aeef20b583d38b7e305f29c20bb1ff9ca317a7e9b72d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:55:46 np0005603435 systemd[1]: var-lib-containers-storage-overlay-9afcb0ba2224590346a24ca9897d9f2afe7ac5fefb7b57f2ca0356c6186f34fe-merged.mount: Deactivated successfully.
Jan 30 23:55:46 np0005603435 podman[264920]: 2026-01-31 04:55:46.632855302 +0000 UTC m=+0.216041545 container remove 2c2e4a7a13a2e89e1b0aeef20b583d38b7e305f29c20bb1ff9ca317a7e9b72d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bartik, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 30 23:55:46 np0005603435 systemd[1]: libpod-conmon-2c2e4a7a13a2e89e1b0aeef20b583d38b7e305f29c20bb1ff9ca317a7e9b72d1.scope: Deactivated successfully.
Jan 30 23:55:46 np0005603435 nova_compute[239938]: 2026-01-31 04:55:46.704 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:46 np0005603435 podman[264963]: 2026-01-31 04:55:46.809381116 +0000 UTC m=+0.059854830 container create 3a02e1b0641ecb9eddf2683929dba0ce87c3658c0e1d49d782af5edc7cfa7874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ritchie, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:55:46 np0005603435 systemd[1]: Started libpod-conmon-3a02e1b0641ecb9eddf2683929dba0ce87c3658c0e1d49d782af5edc7cfa7874.scope.
Jan 30 23:55:46 np0005603435 podman[264963]: 2026-01-31 04:55:46.78508522 +0000 UTC m=+0.035558984 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:55:46 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:55:46 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33551f87b7b94671fd17dd3cc415ee787251214386bea551398bed6ff4944504/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:55:46 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33551f87b7b94671fd17dd3cc415ee787251214386bea551398bed6ff4944504/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:55:46 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33551f87b7b94671fd17dd3cc415ee787251214386bea551398bed6ff4944504/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:55:46 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33551f87b7b94671fd17dd3cc415ee787251214386bea551398bed6ff4944504/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:55:46 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33551f87b7b94671fd17dd3cc415ee787251214386bea551398bed6ff4944504/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:55:46 np0005603435 podman[264963]: 2026-01-31 04:55:46.924596345 +0000 UTC m=+0.175070079 container init 3a02e1b0641ecb9eddf2683929dba0ce87c3658c0e1d49d782af5edc7cfa7874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ritchie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 30 23:55:46 np0005603435 podman[264963]: 2026-01-31 04:55:46.933450782 +0000 UTC m=+0.183924476 container start 3a02e1b0641ecb9eddf2683929dba0ce87c3658c0e1d49d782af5edc7cfa7874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 30 23:55:46 np0005603435 podman[264963]: 2026-01-31 04:55:46.936747053 +0000 UTC m=+0.187220737 container attach 3a02e1b0641ecb9eddf2683929dba0ce87c3658c0e1d49d782af5edc7cfa7874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 30 23:55:47 np0005603435 keen_ritchie[264980]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:55:47 np0005603435 keen_ritchie[264980]: --> All data devices are unavailable
Jan 30 23:55:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 185 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 70 KiB/s wr, 86 op/s
Jan 30 23:55:47 np0005603435 systemd[1]: libpod-3a02e1b0641ecb9eddf2683929dba0ce87c3658c0e1d49d782af5edc7cfa7874.scope: Deactivated successfully.
Jan 30 23:55:47 np0005603435 podman[264963]: 2026-01-31 04:55:47.473941472 +0000 UTC m=+0.724415176 container died 3a02e1b0641ecb9eddf2683929dba0ce87c3658c0e1d49d782af5edc7cfa7874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ritchie, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:55:47 np0005603435 systemd[1]: var-lib-containers-storage-overlay-33551f87b7b94671fd17dd3cc415ee787251214386bea551398bed6ff4944504-merged.mount: Deactivated successfully.
Jan 30 23:55:47 np0005603435 podman[264963]: 2026-01-31 04:55:47.516597849 +0000 UTC m=+0.767071563 container remove 3a02e1b0641ecb9eddf2683929dba0ce87c3658c0e1d49d782af5edc7cfa7874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ritchie, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:55:47 np0005603435 systemd[1]: libpod-conmon-3a02e1b0641ecb9eddf2683929dba0ce87c3658c0e1d49d782af5edc7cfa7874.scope: Deactivated successfully.
Jan 30 23:55:48 np0005603435 podman[265073]: 2026-01-31 04:55:48.019891525 +0000 UTC m=+0.051199698 container create 64a5b46fc6e34593f8db89e0b0ada5c2e0cb4003f4c0c67a2fd684f227f19988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 30 23:55:48 np0005603435 systemd[1]: Started libpod-conmon-64a5b46fc6e34593f8db89e0b0ada5c2e0cb4003f4c0c67a2fd684f227f19988.scope.
Jan 30 23:55:48 np0005603435 podman[265073]: 2026-01-31 04:55:47.997589987 +0000 UTC m=+0.028898160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:55:48 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:55:48 np0005603435 podman[265073]: 2026-01-31 04:55:48.111019572 +0000 UTC m=+0.142327735 container init 64a5b46fc6e34593f8db89e0b0ada5c2e0cb4003f4c0c67a2fd684f227f19988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 30 23:55:48 np0005603435 podman[265073]: 2026-01-31 04:55:48.119264265 +0000 UTC m=+0.150572448 container start 64a5b46fc6e34593f8db89e0b0ada5c2e0cb4003f4c0c67a2fd684f227f19988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:55:48 np0005603435 podman[265073]: 2026-01-31 04:55:48.123335695 +0000 UTC m=+0.154643888 container attach 64a5b46fc6e34593f8db89e0b0ada5c2e0cb4003f4c0c67a2fd684f227f19988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:55:48 np0005603435 cool_curran[265090]: 167 167
Jan 30 23:55:48 np0005603435 systemd[1]: libpod-64a5b46fc6e34593f8db89e0b0ada5c2e0cb4003f4c0c67a2fd684f227f19988.scope: Deactivated successfully.
Jan 30 23:55:48 np0005603435 conmon[265090]: conmon 64a5b46fc6e34593f8db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-64a5b46fc6e34593f8db89e0b0ada5c2e0cb4003f4c0c67a2fd684f227f19988.scope/container/memory.events
Jan 30 23:55:48 np0005603435 podman[265073]: 2026-01-31 04:55:48.12804204 +0000 UTC m=+0.159350183 container died 64a5b46fc6e34593f8db89e0b0ada5c2e0cb4003f4c0c67a2fd684f227f19988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 30 23:55:48 np0005603435 systemd[1]: var-lib-containers-storage-overlay-4ab6e78e97fd6393c310997085a632fc7103a92d7477b5f7d1655cc91429985d-merged.mount: Deactivated successfully.
Jan 30 23:55:48 np0005603435 podman[265073]: 2026-01-31 04:55:48.165070599 +0000 UTC m=+0.196378742 container remove 64a5b46fc6e34593f8db89e0b0ada5c2e0cb4003f4c0c67a2fd684f227f19988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:55:48 np0005603435 systemd[1]: libpod-conmon-64a5b46fc6e34593f8db89e0b0ada5c2e0cb4003f4c0c67a2fd684f227f19988.scope: Deactivated successfully.
Jan 30 23:55:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:55:48 np0005603435 podman[265113]: 2026-01-31 04:55:48.352770018 +0000 UTC m=+0.061496151 container create 5de20160f790ec070efd5806b104110f7844f0f0f202ced60b0016d74e06af58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 30 23:55:48 np0005603435 systemd[1]: Started libpod-conmon-5de20160f790ec070efd5806b104110f7844f0f0f202ced60b0016d74e06af58.scope.
Jan 30 23:55:48 np0005603435 podman[265113]: 2026-01-31 04:55:48.327143068 +0000 UTC m=+0.035869261 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:55:48 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:55:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/495d2f44cefdeadfda66336e076588a59b809c93818ddc3afcd905a7bc05ec56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:55:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/495d2f44cefdeadfda66336e076588a59b809c93818ddc3afcd905a7bc05ec56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:55:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/495d2f44cefdeadfda66336e076588a59b809c93818ddc3afcd905a7bc05ec56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:55:48 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/495d2f44cefdeadfda66336e076588a59b809c93818ddc3afcd905a7bc05ec56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:55:48 np0005603435 podman[265113]: 2026-01-31 04:55:48.452713211 +0000 UTC m=+0.161439334 container init 5de20160f790ec070efd5806b104110f7844f0f0f202ced60b0016d74e06af58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_heisenberg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:55:48 np0005603435 podman[265113]: 2026-01-31 04:55:48.468163441 +0000 UTC m=+0.176889574 container start 5de20160f790ec070efd5806b104110f7844f0f0f202ced60b0016d74e06af58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:55:48 np0005603435 podman[265113]: 2026-01-31 04:55:48.473886851 +0000 UTC m=+0.182612954 container attach 5de20160f790ec070efd5806b104110f7844f0f0f202ced60b0016d74e06af58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]: {
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:    "0": [
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:        {
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "devices": [
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "/dev/loop3"
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            ],
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "lv_name": "ceph_lv0",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "lv_size": "21470642176",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "name": "ceph_lv0",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "tags": {
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.cluster_name": "ceph",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.crush_device_class": "",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.encrypted": "0",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.objectstore": "bluestore",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.osd_id": "0",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.type": "block",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.vdo": "0",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.with_tpm": "0"
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            },
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "type": "block",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "vg_name": "ceph_vg0"
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:        }
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:    ],
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:    "1": [
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:        {
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "devices": [
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "/dev/loop4"
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            ],
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "lv_name": "ceph_lv1",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "lv_size": "21470642176",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "name": "ceph_lv1",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "tags": {
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.cluster_name": "ceph",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.crush_device_class": "",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.encrypted": "0",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.objectstore": "bluestore",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.osd_id": "1",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.type": "block",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.vdo": "0",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.with_tpm": "0"
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            },
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "type": "block",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "vg_name": "ceph_vg1"
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:        }
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:    ],
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:    "2": [
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:        {
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "devices": [
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "/dev/loop5"
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            ],
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "lv_name": "ceph_lv2",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "lv_size": "21470642176",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "name": "ceph_lv2",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "tags": {
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.cluster_name": "ceph",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.crush_device_class": "",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.encrypted": "0",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.objectstore": "bluestore",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.osd_id": "2",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.type": "block",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.vdo": "0",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:                "ceph.with_tpm": "0"
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            },
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "type": "block",
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:            "vg_name": "ceph_vg2"
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:        }
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]:    ]
Jan 30 23:55:48 np0005603435 beautiful_heisenberg[265130]: }
Jan 30 23:55:48 np0005603435 systemd[1]: libpod-5de20160f790ec070efd5806b104110f7844f0f0f202ced60b0016d74e06af58.scope: Deactivated successfully.
Jan 30 23:55:48 np0005603435 podman[265113]: 2026-01-31 04:55:48.782940549 +0000 UTC m=+0.491666672 container died 5de20160f790ec070efd5806b104110f7844f0f0f202ced60b0016d74e06af58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_heisenberg, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:55:48 np0005603435 systemd[1]: var-lib-containers-storage-overlay-495d2f44cefdeadfda66336e076588a59b809c93818ddc3afcd905a7bc05ec56-merged.mount: Deactivated successfully.
Jan 30 23:55:48 np0005603435 podman[265113]: 2026-01-31 04:55:48.833674114 +0000 UTC m=+0.542400247 container remove 5de20160f790ec070efd5806b104110f7844f0f0f202ced60b0016d74e06af58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 30 23:55:48 np0005603435 systemd[1]: libpod-conmon-5de20160f790ec070efd5806b104110f7844f0f0f202ced60b0016d74e06af58.scope: Deactivated successfully.
Jan 30 23:55:49 np0005603435 podman[265214]: 2026-01-31 04:55:49.369801657 +0000 UTC m=+0.056682353 container create c2c568016f6e03e3f61d3e45f27bd70fd8ca4a7ec2ee17153d5d348beae86b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_lamarr, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:55:49 np0005603435 systemd[1]: Started libpod-conmon-c2c568016f6e03e3f61d3e45f27bd70fd8ca4a7ec2ee17153d5d348beae86b1f.scope.
Jan 30 23:55:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 185 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 65 op/s
Jan 30 23:55:49 np0005603435 podman[265214]: 2026-01-31 04:55:49.348886334 +0000 UTC m=+0.035767020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:55:49 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:55:49 np0005603435 podman[265214]: 2026-01-31 04:55:49.465954488 +0000 UTC m=+0.152835214 container init c2c568016f6e03e3f61d3e45f27bd70fd8ca4a7ec2ee17153d5d348beae86b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_lamarr, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:55:49 np0005603435 podman[265214]: 2026-01-31 04:55:49.473494873 +0000 UTC m=+0.160375569 container start c2c568016f6e03e3f61d3e45f27bd70fd8ca4a7ec2ee17153d5d348beae86b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:55:49 np0005603435 podman[265214]: 2026-01-31 04:55:49.476913677 +0000 UTC m=+0.163794373 container attach c2c568016f6e03e3f61d3e45f27bd70fd8ca4a7ec2ee17153d5d348beae86b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 30 23:55:49 np0005603435 naughty_lamarr[265231]: 167 167
Jan 30 23:55:49 np0005603435 systemd[1]: libpod-c2c568016f6e03e3f61d3e45f27bd70fd8ca4a7ec2ee17153d5d348beae86b1f.scope: Deactivated successfully.
Jan 30 23:55:49 np0005603435 conmon[265231]: conmon c2c568016f6e03e3f61d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2c568016f6e03e3f61d3e45f27bd70fd8ca4a7ec2ee17153d5d348beae86b1f.scope/container/memory.events
Jan 30 23:55:49 np0005603435 podman[265214]: 2026-01-31 04:55:49.481013607 +0000 UTC m=+0.167894293 container died c2c568016f6e03e3f61d3e45f27bd70fd8ca4a7ec2ee17153d5d348beae86b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 30 23:55:49 np0005603435 systemd[1]: var-lib-containers-storage-overlay-9338a1f99c7f739d227267525f0385941d50af0fa7fd25ec563e329a8be12081-merged.mount: Deactivated successfully.
Jan 30 23:55:49 np0005603435 podman[265214]: 2026-01-31 04:55:49.532109182 +0000 UTC m=+0.218989858 container remove c2c568016f6e03e3f61d3e45f27bd70fd8ca4a7ec2ee17153d5d348beae86b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_lamarr, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:55:49 np0005603435 systemd[1]: libpod-conmon-c2c568016f6e03e3f61d3e45f27bd70fd8ca4a7ec2ee17153d5d348beae86b1f.scope: Deactivated successfully.
Jan 30 23:55:49 np0005603435 podman[265256]: 2026-01-31 04:55:49.718753294 +0000 UTC m=+0.052393407 container create 8476eb24130936b8b0a88cc23cf360d4ac5758583ecb26a0330224e41ac4f60c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_kalam, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 30 23:55:49 np0005603435 systemd[1]: Started libpod-conmon-8476eb24130936b8b0a88cc23cf360d4ac5758583ecb26a0330224e41ac4f60c.scope.
Jan 30 23:55:49 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:55:49 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73460b0243d4eefc2e7cd0d5af0ea0bcf8d1e4eb4866a340f46355bb7311fbac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:55:49 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73460b0243d4eefc2e7cd0d5af0ea0bcf8d1e4eb4866a340f46355bb7311fbac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:55:49 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73460b0243d4eefc2e7cd0d5af0ea0bcf8d1e4eb4866a340f46355bb7311fbac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:55:49 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73460b0243d4eefc2e7cd0d5af0ea0bcf8d1e4eb4866a340f46355bb7311fbac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:55:49 np0005603435 podman[265256]: 2026-01-31 04:55:49.696310143 +0000 UTC m=+0.029950296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:55:49 np0005603435 podman[265256]: 2026-01-31 04:55:49.796408991 +0000 UTC m=+0.130049114 container init 8476eb24130936b8b0a88cc23cf360d4ac5758583ecb26a0330224e41ac4f60c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_kalam, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:55:49 np0005603435 podman[265256]: 2026-01-31 04:55:49.805574546 +0000 UTC m=+0.139214689 container start 8476eb24130936b8b0a88cc23cf360d4ac5758583ecb26a0330224e41ac4f60c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:55:49 np0005603435 podman[265256]: 2026-01-31 04:55:49.809581664 +0000 UTC m=+0.143221797 container attach 8476eb24130936b8b0a88cc23cf360d4ac5758583ecb26a0330224e41ac4f60c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.166 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:50 np0005603435 lvm[265353]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:55:50 np0005603435 lvm[265353]: VG ceph_vg1 finished
Jan 30 23:55:50 np0005603435 lvm[265355]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:55:50 np0005603435 lvm[265355]: VG ceph_vg2 finished
Jan 30 23:55:50 np0005603435 lvm[265352]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:55:50 np0005603435 lvm[265352]: VG ceph_vg0 finished
Jan 30 23:55:50 np0005603435 nifty_kalam[265273]: {}
Jan 30 23:55:50 np0005603435 systemd[1]: libpod-8476eb24130936b8b0a88cc23cf360d4ac5758583ecb26a0330224e41ac4f60c.scope: Deactivated successfully.
Jan 30 23:55:50 np0005603435 systemd[1]: libpod-8476eb24130936b8b0a88cc23cf360d4ac5758583ecb26a0330224e41ac4f60c.scope: Consumed 1.154s CPU time.
Jan 30 23:55:50 np0005603435 podman[265256]: 2026-01-31 04:55:50.625874355 +0000 UTC m=+0.959514488 container died 8476eb24130936b8b0a88cc23cf360d4ac5758583ecb26a0330224e41ac4f60c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_kalam, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 30 23:55:50 np0005603435 systemd[1]: var-lib-containers-storage-overlay-73460b0243d4eefc2e7cd0d5af0ea0bcf8d1e4eb4866a340f46355bb7311fbac-merged.mount: Deactivated successfully.
Jan 30 23:55:50 np0005603435 podman[265256]: 2026-01-31 04:55:50.68143986 +0000 UTC m=+1.015080003 container remove 8476eb24130936b8b0a88cc23cf360d4ac5758583ecb26a0330224e41ac4f60c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_kalam, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:55:50 np0005603435 systemd[1]: libpod-conmon-8476eb24130936b8b0a88cc23cf360d4ac5758583ecb26a0330224e41ac4f60c.scope: Deactivated successfully.
Jan 30 23:55:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.735 239942 DEBUG oslo_concurrency.lockutils [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "52b7c210-2041-4375-8361-693e4d450c12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.736 239942 DEBUG oslo_concurrency.lockutils [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "52b7c210-2041-4375-8361-693e4d450c12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.737 239942 DEBUG oslo_concurrency.lockutils [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "52b7c210-2041-4375-8361-693e4d450c12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.738 239942 DEBUG oslo_concurrency.lockutils [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "52b7c210-2041-4375-8361-693e4d450c12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.738 239942 DEBUG oslo_concurrency.lockutils [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "52b7c210-2041-4375-8361-693e4d450c12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.740 239942 INFO nova.compute.manager [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Terminating instance#033[00m
Jan 30 23:55:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.742 239942 DEBUG nova.compute.manager [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:55:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:55:50 np0005603435 kernel: tap8495b99c-f8 (unregistering): left promiscuous mode
Jan 30 23:55:50 np0005603435 NetworkManager[49097]: <info>  [1769835350.7923] device (tap8495b99c-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:55:50 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:50Z|00192|binding|INFO|Releasing lport 8495b99c-f86f-4ebe-8135-5c903d896bc1 from this chassis (sb_readonly=0)
Jan 30 23:55:50 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:50Z|00193|binding|INFO|Setting lport 8495b99c-f86f-4ebe-8135-5c903d896bc1 down in Southbound
Jan 30 23:55:50 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:50Z|00194|binding|INFO|Removing iface tap8495b99c-f8 ovn-installed in OVS
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.801 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.803 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.814 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:50.824 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:5a:23 10.100.0.11'], port_security=['fa:16:3e:42:5a:23 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '52b7c210-2041-4375-8361-693e4d450c12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '553f37d1-f94c-4459-b208-0a6d3389632b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02525b3d-5afa-441f-ab06-d5abe31dc4af, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=8495b99c-f86f-4ebe-8135-5c903d896bc1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:55:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:50.826 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 8495b99c-f86f-4ebe-8135-5c903d896bc1 in datapath 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 unbound from our chassis#033[00m
Jan 30 23:55:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:50.828 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3#033[00m
Jan 30 23:55:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:50.838 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[cb420e16-1841-4a7b-a070-a231256eb047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:50 np0005603435 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Deactivated successfully.
Jan 30 23:55:50 np0005603435 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Consumed 12.745s CPU time.
Jan 30 23:55:50 np0005603435 systemd-machined[208030]: Machine qemu-19-instance-00000013 terminated.
Jan 30 23:55:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:50.853 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[72178493-8229-4583-bf5a-ebf8c5a4cc1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:50.856 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ee3e60-fff4-424e-95c2-18060651fe68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:50.874 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[6abb5f59-386e-41ae-809a-47e2ad0e1196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:50.886 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8933a5-5b65-4d36-a10c-8a151f1c2abe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b0cf2db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:f7:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431748, 'reachable_time': 30176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265403, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:50.898 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f629f0fd-0d34-47f6-89c8-6862682ccd70]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5b0cf2db-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431757, 'tstamp': 431757}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265404, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5b0cf2db-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431760, 'tstamp': 431760}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265404, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:50.899 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b0cf2db-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.900 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.904 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:50.905 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b0cf2db-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:55:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:50.906 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:55:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:50.906 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b0cf2db-20, col_values=(('external_ids', {'iface-id': '07e657c3-16d2-4095-9f39-32a275cb472e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:55:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:50.907 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:55:50 np0005603435 NetworkManager[49097]: <info>  [1769835350.9591] manager: (tap8495b99c-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.978 239942 INFO nova.virt.libvirt.driver [-] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Instance destroyed successfully.#033[00m
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.978 239942 DEBUG nova.objects.instance [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lazy-loading 'resources' on Instance uuid 52b7c210-2041-4375-8361-693e4d450c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.996 239942 DEBUG nova.virt.libvirt.vif [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:55:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-42036910',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-42036910',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-42036910',id=19,image_ref='abcc362a-746d-4429-8460-d5477e9109d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM09tA+o/X5eAA7F61hj1SxI6ypExfxHwu84ZGI766nI5MvxnQexsz7kbcbQ7kayV4aYCWWp0LzpaRSvNR2iXXookyyAVTplj7M1+4fZNIZ0rEyvgKI3UsNIqdXjZGP7eQ==',key_name='tempest-keypair-792334322',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:55:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-srbwo0np',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1782423025',image_owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:55:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e10f13b98624406985dec6a5dcc391c7',uuid=52b7c210-2041-4375-8361-693e4d450c12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": 
"8495b99c-f86f-4ebe-8135-5c903d896bc1", "address": "fa:16:3e:42:5a:23", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8495b99c-f8", "ovs_interfaceid": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.996 239942 DEBUG nova.network.os_vif_util [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "address": "fa:16:3e:42:5a:23", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8495b99c-f8", "ovs_interfaceid": "8495b99c-f86f-4ebe-8135-5c903d896bc1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.997 239942 DEBUG nova.network.os_vif_util [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:42:5a:23,bridge_name='br-int',has_traffic_filtering=True,id=8495b99c-f86f-4ebe-8135-5c903d896bc1,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8495b99c-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.997 239942 DEBUG os_vif [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:5a:23,bridge_name='br-int',has_traffic_filtering=True,id=8495b99c-f86f-4ebe-8135-5c903d896bc1,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8495b99c-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.999 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:50 np0005603435 nova_compute[239938]: 2026-01-31 04:55:50.999 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8495b99c-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:55:51 np0005603435 nova_compute[239938]: 2026-01-31 04:55:51.000 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:51 np0005603435 nova_compute[239938]: 2026-01-31 04:55:51.002 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:51 np0005603435 nova_compute[239938]: 2026-01-31 04:55:51.005 239942 INFO os_vif [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:5a:23,bridge_name='br-int',has_traffic_filtering=True,id=8495b99c-f86f-4ebe-8135-5c903d896bc1,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8495b99c-f8')#033[00m
Jan 30 23:55:51 np0005603435 nova_compute[239938]: 2026-01-31 04:55:51.076 239942 DEBUG nova.compute.manager [req-61a81dbc-0cee-46df-9478-015bf8914933 req-9501dff1-0a37-413e-9c91-53a85e7f740f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Received event network-vif-unplugged-8495b99c-f86f-4ebe-8135-5c903d896bc1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:55:51 np0005603435 nova_compute[239938]: 2026-01-31 04:55:51.076 239942 DEBUG oslo_concurrency.lockutils [req-61a81dbc-0cee-46df-9478-015bf8914933 req-9501dff1-0a37-413e-9c91-53a85e7f740f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "52b7c210-2041-4375-8361-693e4d450c12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:51 np0005603435 nova_compute[239938]: 2026-01-31 04:55:51.077 239942 DEBUG oslo_concurrency.lockutils [req-61a81dbc-0cee-46df-9478-015bf8914933 req-9501dff1-0a37-413e-9c91-53a85e7f740f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "52b7c210-2041-4375-8361-693e4d450c12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:51 np0005603435 nova_compute[239938]: 2026-01-31 04:55:51.077 239942 DEBUG oslo_concurrency.lockutils [req-61a81dbc-0cee-46df-9478-015bf8914933 req-9501dff1-0a37-413e-9c91-53a85e7f740f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "52b7c210-2041-4375-8361-693e4d450c12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:51 np0005603435 nova_compute[239938]: 2026-01-31 04:55:51.077 239942 DEBUG nova.compute.manager [req-61a81dbc-0cee-46df-9478-015bf8914933 req-9501dff1-0a37-413e-9c91-53a85e7f740f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] No waiting events found dispatching network-vif-unplugged-8495b99c-f86f-4ebe-8135-5c903d896bc1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:55:51 np0005603435 nova_compute[239938]: 2026-01-31 04:55:51.078 239942 DEBUG nova.compute.manager [req-61a81dbc-0cee-46df-9478-015bf8914933 req-9501dff1-0a37-413e-9c91-53a85e7f740f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Received event network-vif-unplugged-8495b99c-f86f-4ebe-8135-5c903d896bc1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:55:51 np0005603435 nova_compute[239938]: 2026-01-31 04:55:51.176 239942 INFO nova.virt.libvirt.driver [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Deleting instance files /var/lib/nova/instances/52b7c210-2041-4375-8361-693e4d450c12_del#033[00m
Jan 30 23:55:51 np0005603435 nova_compute[239938]: 2026-01-31 04:55:51.176 239942 INFO nova.virt.libvirt.driver [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Deletion of /var/lib/nova/instances/52b7c210-2041-4375-8361-693e4d450c12_del complete#033[00m
Jan 30 23:55:51 np0005603435 nova_compute[239938]: 2026-01-31 04:55:51.237 239942 INFO nova.compute.manager [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Took 0.50 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:55:51 np0005603435 nova_compute[239938]: 2026-01-31 04:55:51.238 239942 DEBUG oslo.service.loopingcall [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:55:51 np0005603435 nova_compute[239938]: 2026-01-31 04:55:51.239 239942 DEBUG nova.compute.manager [-] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:55:51 np0005603435 nova_compute[239938]: 2026-01-31 04:55:51.239 239942 DEBUG nova.network.neutron [-] [instance: 52b7c210-2041-4375-8361-693e4d450c12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:55:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 185 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 66 op/s
Jan 30 23:55:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Jan 30 23:55:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Jan 30 23:55:51 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Jan 30 23:55:51 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:55:51 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:55:52 np0005603435 nova_compute[239938]: 2026-01-31 04:55:52.171 239942 DEBUG nova.network.neutron [-] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:55:52 np0005603435 nova_compute[239938]: 2026-01-31 04:55:52.191 239942 INFO nova.compute.manager [-] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Took 0.95 seconds to deallocate network for instance.#033[00m
Jan 30 23:55:52 np0005603435 nova_compute[239938]: 2026-01-31 04:55:52.296 239942 DEBUG nova.compute.manager [req-1bc7eee3-10d7-42bd-8294-1369b5ee23c8 req-71c1e2c6-5041-445e-8e9b-fbdd00c3600a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Received event network-vif-deleted-8495b99c-f86f-4ebe-8135-5c903d896bc1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:55:52 np0005603435 nova_compute[239938]: 2026-01-31 04:55:52.453 239942 INFO nova.compute.manager [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Took 0.26 seconds to detach 1 volumes for instance.#033[00m
Jan 30 23:55:52 np0005603435 nova_compute[239938]: 2026-01-31 04:55:52.454 239942 DEBUG nova.compute.manager [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Deleting volume: efb11444-7e28-4080-bd22-6f436b9dbf14 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Jan 30 23:55:52 np0005603435 nova_compute[239938]: 2026-01-31 04:55:52.790 239942 DEBUG oslo_concurrency.lockutils [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:52 np0005603435 nova_compute[239938]: 2026-01-31 04:55:52.790 239942 DEBUG oslo_concurrency.lockutils [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:52 np0005603435 nova_compute[239938]: 2026-01-31 04:55:52.819 239942 DEBUG nova.scheduler.client.report [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Refreshing inventories for resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 30 23:55:52 np0005603435 nova_compute[239938]: 2026-01-31 04:55:52.859 239942 DEBUG nova.scheduler.client.report [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Updating ProviderTree inventory for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 30 23:55:52 np0005603435 nova_compute[239938]: 2026-01-31 04:55:52.859 239942 DEBUG nova.compute.provider_tree [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Updating inventory in ProviderTree for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 30 23:55:52 np0005603435 nova_compute[239938]: 2026-01-31 04:55:52.873 239942 DEBUG nova.scheduler.client.report [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Refreshing aggregate associations for resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 30 23:55:52 np0005603435 nova_compute[239938]: 2026-01-31 04:55:52.952 239942 DEBUG nova.scheduler.client.report [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Refreshing trait associations for resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc, traits: COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SVM,COMPUTE_NODE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 30 23:55:53 np0005603435 nova_compute[239938]: 2026-01-31 04:55:53.022 239942 DEBUG oslo_concurrency.processutils [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:55:53 np0005603435 nova_compute[239938]: 2026-01-31 04:55:53.197 239942 DEBUG nova.compute.manager [req-6edec05d-2626-4d31-aa92-9cdb2f760dae req-c3d3aee4-68fa-4a61-bd8d-d6099e0bdf5e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Received event network-vif-plugged-8495b99c-f86f-4ebe-8135-5c903d896bc1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:55:53 np0005603435 nova_compute[239938]: 2026-01-31 04:55:53.199 239942 DEBUG oslo_concurrency.lockutils [req-6edec05d-2626-4d31-aa92-9cdb2f760dae req-c3d3aee4-68fa-4a61-bd8d-d6099e0bdf5e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "52b7c210-2041-4375-8361-693e4d450c12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:53 np0005603435 nova_compute[239938]: 2026-01-31 04:55:53.199 239942 DEBUG oslo_concurrency.lockutils [req-6edec05d-2626-4d31-aa92-9cdb2f760dae req-c3d3aee4-68fa-4a61-bd8d-d6099e0bdf5e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "52b7c210-2041-4375-8361-693e4d450c12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:53 np0005603435 nova_compute[239938]: 2026-01-31 04:55:53.200 239942 DEBUG oslo_concurrency.lockutils [req-6edec05d-2626-4d31-aa92-9cdb2f760dae req-c3d3aee4-68fa-4a61-bd8d-d6099e0bdf5e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "52b7c210-2041-4375-8361-693e4d450c12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:53 np0005603435 nova_compute[239938]: 2026-01-31 04:55:53.200 239942 DEBUG nova.compute.manager [req-6edec05d-2626-4d31-aa92-9cdb2f760dae req-c3d3aee4-68fa-4a61-bd8d-d6099e0bdf5e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] No waiting events found dispatching network-vif-plugged-8495b99c-f86f-4ebe-8135-5c903d896bc1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:55:53 np0005603435 nova_compute[239938]: 2026-01-31 04:55:53.201 239942 WARNING nova.compute.manager [req-6edec05d-2626-4d31-aa92-9cdb2f760dae req-c3d3aee4-68fa-4a61-bd8d-d6099e0bdf5e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Received unexpected event network-vif-plugged-8495b99c-f86f-4ebe-8135-5c903d896bc1 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:55:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:55:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 189 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 264 KiB/s rd, 262 KiB/s wr, 61 op/s
Jan 30 23:55:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:55:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2394540438' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:55:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:55:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2394540438' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:55:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:55:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/596009599' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:55:53 np0005603435 nova_compute[239938]: 2026-01-31 04:55:53.567 239942 DEBUG oslo_concurrency.processutils [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:55:53 np0005603435 nova_compute[239938]: 2026-01-31 04:55:53.574 239942 DEBUG nova.compute.provider_tree [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:55:53 np0005603435 nova_compute[239938]: 2026-01-31 04:55:53.595 239942 DEBUG nova.scheduler.client.report [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:55:53 np0005603435 nova_compute[239938]: 2026-01-31 04:55:53.629 239942 DEBUG oslo_concurrency.lockutils [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:53 np0005603435 nova_compute[239938]: 2026-01-31 04:55:53.873 239942 INFO nova.scheduler.client.report [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Deleted allocations for instance 52b7c210-2041-4375-8361-693e4d450c12#033[00m
Jan 30 23:55:53 np0005603435 nova_compute[239938]: 2026-01-31 04:55:53.963 239942 DEBUG oslo_concurrency.lockutils [None req-e5635ff4-e92b-4f19-8208-0a083b35593b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "52b7c210-2041-4375-8361-693e4d450c12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Jan 30 23:55:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Jan 30 23:55:54 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Jan 30 23:55:55 np0005603435 nova_compute[239938]: 2026-01-31 04:55:55.167 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 181 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 288 KiB/s wr, 56 op/s
Jan 30 23:55:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:55.920 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:55.920 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:55.921 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:56 np0005603435 nova_compute[239938]: 2026-01-31 04:55:56.001 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Jan 30 23:55:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Jan 30 23:55:56 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Jan 30 23:55:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 167 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 387 KiB/s wr, 171 op/s
Jan 30 23:55:57 np0005603435 nova_compute[239938]: 2026-01-31 04:55:57.756 239942 DEBUG oslo_concurrency.lockutils [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "a7e679f6-843b-49b7-8455-d5ed363e1b37" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:57 np0005603435 nova_compute[239938]: 2026-01-31 04:55:57.757 239942 DEBUG oslo_concurrency.lockutils [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "a7e679f6-843b-49b7-8455-d5ed363e1b37" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:57 np0005603435 nova_compute[239938]: 2026-01-31 04:55:57.758 239942 DEBUG oslo_concurrency.lockutils [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:57 np0005603435 nova_compute[239938]: 2026-01-31 04:55:57.758 239942 DEBUG oslo_concurrency.lockutils [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:57 np0005603435 nova_compute[239938]: 2026-01-31 04:55:57.759 239942 DEBUG oslo_concurrency.lockutils [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:57 np0005603435 nova_compute[239938]: 2026-01-31 04:55:57.761 239942 INFO nova.compute.manager [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Terminating instance#033[00m
Jan 30 23:55:57 np0005603435 nova_compute[239938]: 2026-01-31 04:55:57.763 239942 DEBUG nova.compute.manager [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:55:57 np0005603435 kernel: tap9bfb8d4f-c1 (unregistering): left promiscuous mode
Jan 30 23:55:57 np0005603435 NetworkManager[49097]: <info>  [1769835357.8174] device (tap9bfb8d4f-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:55:57 np0005603435 nova_compute[239938]: 2026-01-31 04:55:57.824 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:57 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:57Z|00195|binding|INFO|Releasing lport 9bfb8d4f-c12b-4a91-950a-4519f14d6508 from this chassis (sb_readonly=0)
Jan 30 23:55:57 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:57Z|00196|binding|INFO|Setting lport 9bfb8d4f-c12b-4a91-950a-4519f14d6508 down in Southbound
Jan 30 23:55:57 np0005603435 ovn_controller[145670]: 2026-01-31T04:55:57Z|00197|binding|INFO|Removing iface tap9bfb8d4f-c1 ovn-installed in OVS
Jan 30 23:55:57 np0005603435 nova_compute[239938]: 2026-01-31 04:55:57.828 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:57.834 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:7f:92 10.100.0.5'], port_security=['fa:16:3e:c0:7f:92 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a7e679f6-843b-49b7-8455-d5ed363e1b37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2f8cd9ed-4d8b-4b1c-bbb9-b9d75bc8e46f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.211'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02525b3d-5afa-441f-ab06-d5abe31dc4af, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=9bfb8d4f-c12b-4a91-950a-4519f14d6508) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:55:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:57.836 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 9bfb8d4f-c12b-4a91-950a-4519f14d6508 in datapath 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 unbound from our chassis#033[00m
Jan 30 23:55:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:57.838 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:55:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:57.839 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c43d067b-556f-4593-9e62-191032e164b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:57 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:57.840 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 namespace which is not needed anymore#033[00m
Jan 30 23:55:57 np0005603435 nova_compute[239938]: 2026-01-31 04:55:57.845 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:57 np0005603435 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Jan 30 23:55:57 np0005603435 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 15.648s CPU time.
Jan 30 23:55:57 np0005603435 systemd-machined[208030]: Machine qemu-17-instance-00000011 terminated.
Jan 30 23:55:57 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[263353]: [NOTICE]   (263357) : haproxy version is 2.8.14-c23fe91
Jan 30 23:55:57 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[263353]: [NOTICE]   (263357) : path to executable is /usr/sbin/haproxy
Jan 30 23:55:57 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[263353]: [WARNING]  (263357) : Exiting Master process...
Jan 30 23:55:57 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[263353]: [WARNING]  (263357) : Exiting Master process...
Jan 30 23:55:57 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[263353]: [ALERT]    (263357) : Current worker (263360) exited with code 143 (Terminated)
Jan 30 23:55:57 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[263353]: [WARNING]  (263357) : All workers exited. Exiting... (0)
Jan 30 23:55:57 np0005603435 systemd[1]: libpod-692581372fcefb6e691a1cd9aebc58c7eab6902ae8793254777362fd43afb80b.scope: Deactivated successfully.
Jan 30 23:55:57 np0005603435 conmon[263353]: conmon 692581372fcefb6e691a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-692581372fcefb6e691a1cd9aebc58c7eab6902ae8793254777362fd43afb80b.scope/container/memory.events
Jan 30 23:55:57 np0005603435 nova_compute[239938]: 2026-01-31 04:55:57.986 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:57 np0005603435 podman[265484]: 2026-01-31 04:55:57.99152895 +0000 UTC m=+0.054434188 container died 692581372fcefb6e691a1cd9aebc58c7eab6902ae8793254777362fd43afb80b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 30 23:55:57 np0005603435 nova_compute[239938]: 2026-01-31 04:55:57.994 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.001 239942 INFO nova.virt.libvirt.driver [-] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Instance destroyed successfully.#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.001 239942 DEBUG nova.objects.instance [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lazy-loading 'resources' on Instance uuid a7e679f6-843b-49b7-8455-d5ed363e1b37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.017 239942 DEBUG nova.virt.libvirt.vif [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:54:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-834746693',display_name='tempest-TestVolumeBootPattern-volume-backed-server-834746693',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-834746693',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKAFEDOLl5nmr38YCtZKugulPS1xzLW2VjEPQiweluSJcGVnuwSvDq1lDFjz/tr8fZOa+Jq6UErMuT+akiSqjrhbBgKwkIqglp//7KbJDiOMQLMS6MMZFzd797gJsRRj3Q==',key_name='tempest-keypair-112238935',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:54:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-sbezkyal',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:54:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e10f13b98624406985dec6a5dcc391c7',uuid=a7e679f6-843b-49b7-8455-d5ed363e1b37,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "address": "fa:16:3e:c0:7f:92", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 
4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bfb8d4f-c1", "ovs_interfaceid": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.018 239942 DEBUG nova.network.os_vif_util [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "address": "fa:16:3e:c0:7f:92", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bfb8d4f-c1", "ovs_interfaceid": "9bfb8d4f-c12b-4a91-950a-4519f14d6508", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.019 239942 DEBUG nova.network.os_vif_util [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c0:7f:92,bridge_name='br-int',has_traffic_filtering=True,id=9bfb8d4f-c12b-4a91-950a-4519f14d6508,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bfb8d4f-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.020 239942 DEBUG os_vif [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:7f:92,bridge_name='br-int',has_traffic_filtering=True,id=9bfb8d4f-c12b-4a91-950a-4519f14d6508,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bfb8d4f-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.023 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.024 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bfb8d4f-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.026 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.028 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.031 239942 INFO os_vif [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:7f:92,bridge_name='br-int',has_traffic_filtering=True,id=9bfb8d4f-c12b-4a91-950a-4519f14d6508,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bfb8d4f-c1')#033[00m
Jan 30 23:55:58 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-692581372fcefb6e691a1cd9aebc58c7eab6902ae8793254777362fd43afb80b-userdata-shm.mount: Deactivated successfully.
Jan 30 23:55:58 np0005603435 systemd[1]: var-lib-containers-storage-overlay-ab7c147f6ba0a2a7a163f41364eabddbee9802aa731d2110994f6c996f82bb35-merged.mount: Deactivated successfully.
Jan 30 23:55:58 np0005603435 podman[265484]: 2026-01-31 04:55:58.04776282 +0000 UTC m=+0.110668058 container cleanup 692581372fcefb6e691a1cd9aebc58c7eab6902ae8793254777362fd43afb80b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 30 23:55:58 np0005603435 systemd[1]: libpod-conmon-692581372fcefb6e691a1cd9aebc58c7eab6902ae8793254777362fd43afb80b.scope: Deactivated successfully.
Jan 30 23:55:58 np0005603435 podman[265535]: 2026-01-31 04:55:58.121156392 +0000 UTC m=+0.050788368 container remove 692581372fcefb6e691a1cd9aebc58c7eab6902ae8793254777362fd43afb80b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 30 23:55:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:58.126 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[073d374b-974d-4ec0-a221-159bd40cabe5]: (4, ('Sat Jan 31 04:55:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 (692581372fcefb6e691a1cd9aebc58c7eab6902ae8793254777362fd43afb80b)\n692581372fcefb6e691a1cd9aebc58c7eab6902ae8793254777362fd43afb80b\nSat Jan 31 04:55:58 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 (692581372fcefb6e691a1cd9aebc58c7eab6902ae8793254777362fd43afb80b)\n692581372fcefb6e691a1cd9aebc58c7eab6902ae8793254777362fd43afb80b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.128 239942 DEBUG nova.compute.manager [req-9c7f6d65-fdae-4d99-af36-661ab09d1f13 req-63cef0e1-b3d9-48e7-b5d9-06fc11d64f50 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Received event network-vif-unplugged-9bfb8d4f-c12b-4a91-950a-4519f14d6508 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:55:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:58.128 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4b0fb0a9-8942-4211-a586-bb15006a38dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.128 239942 DEBUG oslo_concurrency.lockutils [req-9c7f6d65-fdae-4d99-af36-661ab09d1f13 req-63cef0e1-b3d9-48e7-b5d9-06fc11d64f50 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.128 239942 DEBUG oslo_concurrency.lockutils [req-9c7f6d65-fdae-4d99-af36-661ab09d1f13 req-63cef0e1-b3d9-48e7-b5d9-06fc11d64f50 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.129 239942 DEBUG oslo_concurrency.lockutils [req-9c7f6d65-fdae-4d99-af36-661ab09d1f13 req-63cef0e1-b3d9-48e7-b5d9-06fc11d64f50 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:55:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:58.129 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b0cf2db-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.129 239942 DEBUG nova.compute.manager [req-9c7f6d65-fdae-4d99-af36-661ab09d1f13 req-63cef0e1-b3d9-48e7-b5d9-06fc11d64f50 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] No waiting events found dispatching network-vif-unplugged-9bfb8d4f-c12b-4a91-950a-4519f14d6508 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.129 239942 DEBUG nova.compute.manager [req-9c7f6d65-fdae-4d99-af36-661ab09d1f13 req-63cef0e1-b3d9-48e7-b5d9-06fc11d64f50 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Received event network-vif-unplugged-9bfb8d4f-c12b-4a91-950a-4519f14d6508 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.130 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:58 np0005603435 kernel: tap5b0cf2db-20: left promiscuous mode
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.139 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.140 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:55:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:58.145 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[bca48f1b-64d3-446b-ae88-66346ff1b133]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:58.162 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[adeb6390-e8ae-4eae-929e-e353657ee6e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:58.164 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[72b9f131-ff67-4410-9ea8-677db9ae5c52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:58.177 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[aa766c31-d4aa-4be9-85dd-02ca51836e3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431740, 'reachable_time': 23767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265558, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:58 np0005603435 systemd[1]: run-netns-ovnmeta\x2d5b0cf2db\x2d2e35\x2d41fa\x2d9783\x2d30f0fe6ea7a3.mount: Deactivated successfully.
Jan 30 23:55:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:58.181 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:55:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:55:58.181 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[be380487-c1e2-4dfd-9abb-2da37b6920f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.199 239942 INFO nova.virt.libvirt.driver [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Deleting instance files /var/lib/nova/instances/a7e679f6-843b-49b7-8455-d5ed363e1b37_del#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.199 239942 INFO nova.virt.libvirt.driver [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Deletion of /var/lib/nova/instances/a7e679f6-843b-49b7-8455-d5ed363e1b37_del complete#033[00m
Jan 30 23:55:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.257 239942 INFO nova.compute.manager [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Took 0.49 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.257 239942 DEBUG oslo.service.loopingcall [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.257 239942 DEBUG nova.compute.manager [-] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:55:58 np0005603435 nova_compute[239938]: 2026-01-31 04:55:58.257 239942 DEBUG nova.network.neutron [-] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:55:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Jan 30 23:55:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Jan 30 23:55:58 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Jan 30 23:55:59 np0005603435 nova_compute[239938]: 2026-01-31 04:55:59.023 239942 DEBUG nova.network.neutron [-] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:55:59 np0005603435 nova_compute[239938]: 2026-01-31 04:55:59.045 239942 INFO nova.compute.manager [-] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Took 0.79 seconds to deallocate network for instance.#033[00m
Jan 30 23:55:59 np0005603435 nova_compute[239938]: 2026-01-31 04:55:59.131 239942 DEBUG nova.compute.manager [req-e36bc1a9-70aa-4071-89d0-8e31a0aa9a4c req-049b259d-0773-4524-8133-65134fe42a8e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Received event network-vif-deleted-9bfb8d4f-c12b-4a91-950a-4519f14d6508 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:55:59 np0005603435 nova_compute[239938]: 2026-01-31 04:55:59.252 239942 INFO nova.compute.manager [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Took 0.21 seconds to detach 1 volumes for instance.#033[00m
Jan 30 23:55:59 np0005603435 nova_compute[239938]: 2026-01-31 04:55:59.254 239942 DEBUG nova.compute.manager [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Deleting volume: f9e8fb71-b06e-4c8d-914d-ae02de4b66fb _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Jan 30 23:55:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 167 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 6.8 KiB/s wr, 134 op/s
Jan 30 23:55:59 np0005603435 nova_compute[239938]: 2026-01-31 04:55:59.465 239942 DEBUG oslo_concurrency.lockutils [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:55:59 np0005603435 nova_compute[239938]: 2026-01-31 04:55:59.466 239942 DEBUG oslo_concurrency.lockutils [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:55:59 np0005603435 nova_compute[239938]: 2026-01-31 04:55:59.528 239942 DEBUG oslo_concurrency.processutils [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:55:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:55:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/177148563' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:55:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:55:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/177148563' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:56:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:56:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1399838756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:56:00 np0005603435 nova_compute[239938]: 2026-01-31 04:56:00.068 239942 DEBUG oslo_concurrency.processutils [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:00 np0005603435 nova_compute[239938]: 2026-01-31 04:56:00.074 239942 DEBUG nova.compute.provider_tree [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:56:00 np0005603435 nova_compute[239938]: 2026-01-31 04:56:00.169 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:00 np0005603435 nova_compute[239938]: 2026-01-31 04:56:00.304 239942 DEBUG nova.scheduler.client.report [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:56:00 np0005603435 nova_compute[239938]: 2026-01-31 04:56:00.476 239942 DEBUG oslo_concurrency.lockutils [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:00 np0005603435 podman[265582]: 2026-01-31 04:56:00.599082508 +0000 UTC m=+0.084720111 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 30 23:56:00 np0005603435 podman[265583]: 2026-01-31 04:56:00.602272296 +0000 UTC m=+0.087798216 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 30 23:56:00 np0005603435 nova_compute[239938]: 2026-01-31 04:56:00.633 239942 DEBUG nova.compute.manager [req-02ff35ea-be73-4dcb-b5ec-8c82c1daf4d1 req-54f790fc-550f-4d6f-b5ea-1e151eceab67 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Received event network-vif-plugged-9bfb8d4f-c12b-4a91-950a-4519f14d6508 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:00 np0005603435 nova_compute[239938]: 2026-01-31 04:56:00.633 239942 DEBUG oslo_concurrency.lockutils [req-02ff35ea-be73-4dcb-b5ec-8c82c1daf4d1 req-54f790fc-550f-4d6f-b5ea-1e151eceab67 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:00 np0005603435 nova_compute[239938]: 2026-01-31 04:56:00.634 239942 DEBUG oslo_concurrency.lockutils [req-02ff35ea-be73-4dcb-b5ec-8c82c1daf4d1 req-54f790fc-550f-4d6f-b5ea-1e151eceab67 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:00 np0005603435 nova_compute[239938]: 2026-01-31 04:56:00.634 239942 DEBUG oslo_concurrency.lockutils [req-02ff35ea-be73-4dcb-b5ec-8c82c1daf4d1 req-54f790fc-550f-4d6f-b5ea-1e151eceab67 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a7e679f6-843b-49b7-8455-d5ed363e1b37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:00 np0005603435 nova_compute[239938]: 2026-01-31 04:56:00.634 239942 DEBUG nova.compute.manager [req-02ff35ea-be73-4dcb-b5ec-8c82c1daf4d1 req-54f790fc-550f-4d6f-b5ea-1e151eceab67 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] No waiting events found dispatching network-vif-plugged-9bfb8d4f-c12b-4a91-950a-4519f14d6508 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:56:00 np0005603435 nova_compute[239938]: 2026-01-31 04:56:00.634 239942 WARNING nova.compute.manager [req-02ff35ea-be73-4dcb-b5ec-8c82c1daf4d1 req-54f790fc-550f-4d6f-b5ea-1e151eceab67 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Received unexpected event network-vif-plugged-9bfb8d4f-c12b-4a91-950a-4519f14d6508 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:56:00 np0005603435 nova_compute[239938]: 2026-01-31 04:56:00.639 239942 INFO nova.scheduler.client.report [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Deleted allocations for instance a7e679f6-843b-49b7-8455-d5ed363e1b37#033[00m
Jan 30 23:56:00 np0005603435 nova_compute[239938]: 2026-01-31 04:56:00.868 239942 DEBUG oslo_concurrency.lockutils [None req-a88779a0-ef60-4876-b4fb-c96d8e814a03 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "a7e679f6-843b-49b7-8455-d5ed363e1b37" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 137 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 7.1 KiB/s wr, 135 op/s
Jan 30 23:56:03 np0005603435 nova_compute[239938]: 2026-01-31 04:56:03.027 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:56:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 156 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 8.5 MiB/s wr, 276 op/s
Jan 30 23:56:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:56:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2496102450' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:56:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:56:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2496102450' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:56:04 np0005603435 nova_compute[239938]: 2026-01-31 04:56:04.941 239942 DEBUG oslo_concurrency.lockutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "6a64f744-98a9-4399-a0ab-14cc87ca066f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:56:04 np0005603435 nova_compute[239938]: 2026-01-31 04:56:04.942 239942 DEBUG oslo_concurrency.lockutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:56:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Jan 30 23:56:04 np0005603435 nova_compute[239938]: 2026-01-31 04:56:04.962 239942 DEBUG nova.compute.manager [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 30 23:56:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Jan 30 23:56:04 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.049 239942 DEBUG oslo_concurrency.lockutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.050 239942 DEBUG oslo_concurrency.lockutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.060 239942 DEBUG nova.virt.hardware [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.061 239942 INFO nova.compute.claims [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Claim successful on node compute-0.ctlplane.example.com
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.171 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.184 239942 DEBUG oslo_concurrency.processutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:56:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:56:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3059288580' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:56:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:56:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3059288580' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:56:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 202 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 14 MiB/s wr, 211 op/s
Jan 30 23:56:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:56:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/531780523' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.743 239942 DEBUG oslo_concurrency.processutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.748 239942 DEBUG nova.compute.provider_tree [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.771 239942 DEBUG nova.scheduler.client.report [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.794 239942 DEBUG oslo_concurrency.lockutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.795 239942 DEBUG nova.compute.manager [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.836 239942 DEBUG nova.compute.manager [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.836 239942 DEBUG nova.network.neutron [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.856 239942 INFO nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.873 239942 DEBUG nova.compute.manager [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.941 239942 INFO nova.virt.block_device [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Booting with volume 62354df7-8617-4e98-bf68-88376e1103f9 at /dev/vda
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.976 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835350.975457, 52b7c210-2041-4375-8361-693e4d450c12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.976 239942 INFO nova.compute.manager [-] [instance: 52b7c210-2041-4375-8361-693e4d450c12] VM Stopped (Lifecycle Event)
Jan 30 23:56:05 np0005603435 nova_compute[239938]: 2026-01-31 04:56:05.994 239942 DEBUG nova.compute.manager [None req-d6d56440-c7c7-4c02-99ba-592a9915ae0a - - - - - -] [instance: 52b7c210-2041-4375-8361-693e4d450c12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.071 239942 DEBUG nova.policy [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '27f1a6fb472c4c5fa2286d0fa48dca34', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9b39f0e168b54a4b8f976894d21361e6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.086 239942 DEBUG os_brick.utils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.088 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.102 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.103 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[9b725454-7681-4067-a311-2e4c20df152a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.105 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.115 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.115 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[6d3f7fc7-e7d9-4831-8dcd-cefc3441bac4]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.116 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.127 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.127 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb7d3b0-6180-4358-84dc-197de8d83f47]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.128 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[64fa1879-8bb7-413c-a2ba-9882d8e71012]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.129 239942 DEBUG oslo_concurrency.processutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.154 239942 DEBUG oslo_concurrency.processutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.158 239942 DEBUG os_brick.initiator.connectors.lightos [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.158 239942 DEBUG os_brick.initiator.connectors.lightos [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.159 239942 DEBUG os_brick.initiator.connectors.lightos [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.159 239942 DEBUG os_brick.utils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 30 23:56:06 np0005603435 nova_compute[239938]: 2026-01-31 04:56:06.159 239942 DEBUG nova.virt.block_device [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Updating existing volume attachment record: 90afbe39-c762-48c4-890b-b5a92d0e0908 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 30 23:56:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:56:06
Jan 30 23:56:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:56:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:56:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'backups', 'images']
Jan 30 23:56:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:56:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:56:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1372979680' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:56:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:56:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:56:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:56:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:56:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:56:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:56:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Jan 30 23:56:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Jan 30 23:56:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Jan 30 23:56:07 np0005603435 nova_compute[239938]: 2026-01-31 04:56:07.176 239942 DEBUG nova.network.neutron [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Successfully created port: f44c4abb-008f-4b8d-abcd-08643ef9fdd3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 30 23:56:07 np0005603435 nova_compute[239938]: 2026-01-31 04:56:07.265 239942 DEBUG nova.compute.manager [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 30 23:56:07 np0005603435 nova_compute[239938]: 2026-01-31 04:56:07.267 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 30 23:56:07 np0005603435 nova_compute[239938]: 2026-01-31 04:56:07.267 239942 INFO nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Creating image(s)
Jan 30 23:56:07 np0005603435 nova_compute[239938]: 2026-01-31 04:56:07.268 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 30 23:56:07 np0005603435 nova_compute[239938]: 2026-01-31 04:56:07.268 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Ensure instance console log exists: /var/lib/nova/instances/6a64f744-98a9-4399-a0ab-14cc87ca066f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 30 23:56:07 np0005603435 nova_compute[239938]: 2026-01-31 04:56:07.268 239942 DEBUG oslo_concurrency.lockutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 30 23:56:07 np0005603435 nova_compute[239938]: 2026-01-31 04:56:07.268 239942 DEBUG oslo_concurrency.lockutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 30 23:56:07 np0005603435 nova_compute[239938]: 2026-01-31 04:56:07.269 239942 DEBUG oslo_concurrency.lockutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:56:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:56:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3119459166' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:56:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 202 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 274 KiB/s rd, 14 MiB/s wr, 363 op/s
Jan 30 23:56:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:56:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3119459166' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:56:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:56:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:56:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:56:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:56:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:56:08 np0005603435 nova_compute[239938]: 2026-01-31 04:56:08.030 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:56:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:56:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Jan 30 23:56:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Jan 30 23:56:08 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Jan 30 23:56:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:56:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:56:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:56:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:56:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:56:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:56:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2504870877' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:56:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:56:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2504870877' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:56:08 np0005603435 nova_compute[239938]: 2026-01-31 04:56:08.929 239942 DEBUG nova.network.neutron [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Successfully updated port: f44c4abb-008f-4b8d-abcd-08643ef9fdd3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 30 23:56:08 np0005603435 nova_compute[239938]: 2026-01-31 04:56:08.950 239942 DEBUG oslo_concurrency.lockutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "refresh_cache-6a64f744-98a9-4399-a0ab-14cc87ca066f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 30 23:56:08 np0005603435 nova_compute[239938]: 2026-01-31 04:56:08.950 239942 DEBUG oslo_concurrency.lockutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquired lock "refresh_cache-6a64f744-98a9-4399-a0ab-14cc87ca066f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 30 23:56:08 np0005603435 nova_compute[239938]: 2026-01-31 04:56:08.950 239942 DEBUG nova.network.neutron [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.009 239942 DEBUG nova.compute.manager [req-f0f0bf87-a001-452a-bc54-c882f9145431 req-e91b77a7-dfd4-4597-8196-1237847e1f18 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Received event network-changed-f44c4abb-008f-4b8d-abcd-08643ef9fdd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.009 239942 DEBUG nova.compute.manager [req-f0f0bf87-a001-452a-bc54-c882f9145431 req-e91b77a7-dfd4-4597-8196-1237847e1f18 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Refreshing instance network info cache due to event network-changed-f44c4abb-008f-4b8d-abcd-08643ef9fdd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.010 239942 DEBUG oslo_concurrency.lockutils [req-f0f0bf87-a001-452a-bc54-c882f9145431 req-e91b77a7-dfd4-4597-8196-1237847e1f18 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-6a64f744-98a9-4399-a0ab-14cc87ca066f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.096 239942 DEBUG nova.network.neutron [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 30 23:56:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 202 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 7.4 MiB/s wr, 215 op/s
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.920 239942 DEBUG nova.network.neutron [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Updating instance_info_cache with network_info: [{"id": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "address": "fa:16:3e:95:88:92", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44c4abb-00", "ovs_interfaceid": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.947 239942 DEBUG oslo_concurrency.lockutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Releasing lock "refresh_cache-6a64f744-98a9-4399-a0ab-14cc87ca066f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.948 239942 DEBUG nova.compute.manager [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Instance network_info: |[{"id": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "address": "fa:16:3e:95:88:92", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44c4abb-00", "ovs_interfaceid": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.949 239942 DEBUG oslo_concurrency.lockutils [req-f0f0bf87-a001-452a-bc54-c882f9145431 req-e91b77a7-dfd4-4597-8196-1237847e1f18 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-6a64f744-98a9-4399-a0ab-14cc87ca066f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.950 239942 DEBUG nova.network.neutron [req-f0f0bf87-a001-452a-bc54-c882f9145431 req-e91b77a7-dfd4-4597-8196-1237847e1f18 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Refreshing network info cache for port f44c4abb-008f-4b8d-abcd-08643ef9fdd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.957 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Start _get_guest_xml network_info=[{"id": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "address": "fa:16:3e:95:88:92", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44c4abb-00", "ovs_interfaceid": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'attachment_id': '90afbe39-c762-48c4-890b-b5a92d0e0908', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-62354df7-8617-4e98-bf68-88376e1103f9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '62354df7-8617-4e98-bf68-88376e1103f9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '6a64f744-98a9-4399-a0ab-14cc87ca066f', 'attached_at': '', 'detached_at': '', 'volume_id': '62354df7-8617-4e98-bf68-88376e1103f9', 'serial': '62354df7-8617-4e98-bf68-88376e1103f9'}, 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.964 239942 WARNING nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.968 239942 DEBUG nova.virt.libvirt.host [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.969 239942 DEBUG nova.virt.libvirt.host [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.978 239942 DEBUG nova.virt.libvirt.host [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.979 239942 DEBUG nova.virt.libvirt.host [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.979 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.980 239942 DEBUG nova.virt.hardware [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.981 239942 DEBUG nova.virt.hardware [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.981 239942 DEBUG nova.virt.hardware [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.982 239942 DEBUG nova.virt.hardware [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.982 239942 DEBUG nova.virt.hardware [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.982 239942 DEBUG nova.virt.hardware [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.983 239942 DEBUG nova.virt.hardware [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.984 239942 DEBUG nova.virt.hardware [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.984 239942 DEBUG nova.virt.hardware [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.985 239942 DEBUG nova.virt.hardware [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:56:09 np0005603435 nova_compute[239938]: 2026-01-31 04:56:09.985 239942 DEBUG nova.virt.hardware [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.016 239942 DEBUG nova.storage.rbd_utils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] rbd image 6a64f744-98a9-4399-a0ab-14cc87ca066f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.022 239942 DEBUG oslo_concurrency.processutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.173 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:10 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:56:10 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2298106741' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.569 239942 DEBUG oslo_concurrency.processutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.701 239942 DEBUG os_brick.encryptors [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Using volume encryption metadata '{'encryption_key_id': 'dce0a9fe-eced-4443-8021-5ff87b079eed', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-62354df7-8617-4e98-bf68-88376e1103f9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '62354df7-8617-4e98-bf68-88376e1103f9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '6a64f744-98a9-4399-a0ab-14cc87ca066f', 'attached_at': '', 'detached_at': '', 'volume_id': '62354df7-8617-4e98-bf68-88376e1103f9', 'serial': '62354df7-8617-4e98-bf68-88376e1103f9'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.704 239942 DEBUG barbicanclient.client [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.722 239942 DEBUG barbicanclient.v1.secrets [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/dce0a9fe-eced-4443-8021-5ff87b079eed get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.723 239942 INFO barbicanclient.base [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/dce0a9fe-eced-4443-8021-5ff87b079eed#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.760 239942 DEBUG barbicanclient.client [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.761 239942 INFO barbicanclient.base [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/dce0a9fe-eced-4443-8021-5ff87b079eed#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.803 239942 DEBUG barbicanclient.client [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.804 239942 INFO barbicanclient.base [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/dce0a9fe-eced-4443-8021-5ff87b079eed#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.838 239942 DEBUG barbicanclient.client [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.840 239942 INFO barbicanclient.base [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/dce0a9fe-eced-4443-8021-5ff87b079eed#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.887 239942 DEBUG barbicanclient.client [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.888 239942 INFO barbicanclient.base [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/dce0a9fe-eced-4443-8021-5ff87b079eed#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.924 239942 DEBUG barbicanclient.client [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.924 239942 INFO barbicanclient.base [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/dce0a9fe-eced-4443-8021-5ff87b079eed#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.964 239942 DEBUG barbicanclient.client [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.965 239942 INFO barbicanclient.base [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/dce0a9fe-eced-4443-8021-5ff87b079eed#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.991 239942 DEBUG barbicanclient.client [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:10 np0005603435 nova_compute[239938]: 2026-01-31 04:56:10.992 239942 INFO barbicanclient.base [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/dce0a9fe-eced-4443-8021-5ff87b079eed#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.020 239942 DEBUG barbicanclient.client [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.021 239942 INFO barbicanclient.base [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/dce0a9fe-eced-4443-8021-5ff87b079eed#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.070 239942 DEBUG barbicanclient.client [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.071 239942 INFO barbicanclient.base [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/dce0a9fe-eced-4443-8021-5ff87b079eed#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.096 239942 DEBUG barbicanclient.client [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.097 239942 INFO barbicanclient.base [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/dce0a9fe-eced-4443-8021-5ff87b079eed#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.118 239942 DEBUG barbicanclient.client [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.119 239942 INFO barbicanclient.base [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/dce0a9fe-eced-4443-8021-5ff87b079eed#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.135 239942 DEBUG barbicanclient.client [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.136 239942 INFO barbicanclient.base [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/dce0a9fe-eced-4443-8021-5ff87b079eed#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.158 239942 DEBUG barbicanclient.client [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.159 239942 INFO barbicanclient.base [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/dce0a9fe-eced-4443-8021-5ff87b079eed#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.161 239942 DEBUG nova.network.neutron [req-f0f0bf87-a001-452a-bc54-c882f9145431 req-e91b77a7-dfd4-4597-8196-1237847e1f18 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Updated VIF entry in instance network info cache for port f44c4abb-008f-4b8d-abcd-08643ef9fdd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.162 239942 DEBUG nova.network.neutron [req-f0f0bf87-a001-452a-bc54-c882f9145431 req-e91b77a7-dfd4-4597-8196-1237847e1f18 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Updating instance_info_cache with network_info: [{"id": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "address": "fa:16:3e:95:88:92", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44c4abb-00", "ovs_interfaceid": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.180 239942 DEBUG oslo_concurrency.lockutils [req-f0f0bf87-a001-452a-bc54-c882f9145431 req-e91b77a7-dfd4-4597-8196-1237847e1f18 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-6a64f744-98a9-4399-a0ab-14cc87ca066f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:56:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Jan 30 23:56:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Jan 30 23:56:11 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.253 239942 DEBUG barbicanclient.client [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.254 239942 INFO barbicanclient.base [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/dce0a9fe-eced-4443-8021-5ff87b079eed#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.279 239942 DEBUG barbicanclient.client [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.280 239942 DEBUG nova.virt.libvirt.host [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  <usage type="volume">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <volume>62354df7-8617-4e98-bf68-88376e1103f9</volume>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  </usage>
Jan 30 23:56:11 np0005603435 nova_compute[239938]: </secret>
Jan 30 23:56:11 np0005603435 nova_compute[239938]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.312 239942 DEBUG nova.virt.libvirt.vif [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:56:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1634896652',display_name='tempest-TransferEncryptedVolumeTest-server-1634896652',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1634896652',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEV56Jk6IDRxyXFlb7xWBOMScnav9Xc5tHSoNY1YUEwOZFWGs8M7XZsrLboufTVEeGeJR0pbnMty3oYNRNpoAOeyFHYNqJJ2N05DBEMeFPzOD6DLoY1LRALz+j5Rp4/1jQ==',key_name='tempest-TransferEncryptedVolumeTest-773774193',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b39f0e168b54a4b8f976894d21361e6',ramdisk_id='',reservation_id='r-c5g7sdq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-483286292',owner_user_name='tempest-TransferEncryptedVolumeTest-483286292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:56:05Z,user_data=None,user_id='27f1a6fb472c4c5fa2286d0fa48dca34',uuid=6a64f744-98a9-4399-a0ab-14cc87ca066f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "address": "fa:16:3e:95:88:92", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44c4abb-00", "ovs_interfaceid": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.313 239942 DEBUG nova.network.os_vif_util [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converting VIF {"id": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "address": "fa:16:3e:95:88:92", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44c4abb-00", "ovs_interfaceid": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.314 239942 DEBUG nova.network.os_vif_util [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:88:92,bridge_name='br-int',has_traffic_filtering=True,id=f44c4abb-008f-4b8d-abcd-08643ef9fdd3,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44c4abb-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.316 239942 DEBUG nova.objects.instance [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a64f744-98a9-4399-a0ab-14cc87ca066f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.329 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  <uuid>6a64f744-98a9-4399-a0ab-14cc87ca066f</uuid>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  <name>instance-00000014</name>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-1634896652</nova:name>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:56:09</nova:creationTime>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:        <nova:user uuid="27f1a6fb472c4c5fa2286d0fa48dca34">tempest-TransferEncryptedVolumeTest-483286292-project-member</nova:user>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:        <nova:project uuid="9b39f0e168b54a4b8f976894d21361e6">tempest-TransferEncryptedVolumeTest-483286292</nova:project>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:        <nova:port uuid="f44c4abb-008f-4b8d-abcd-08643ef9fdd3">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <entry name="serial">6a64f744-98a9-4399-a0ab-14cc87ca066f</entry>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <entry name="uuid">6a64f744-98a9-4399-a0ab-14cc87ca066f</entry>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/6a64f744-98a9-4399-a0ab-14cc87ca066f_disk.config">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="volumes/volume-62354df7-8617-4e98-bf68-88376e1103f9">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <serial>62354df7-8617-4e98-bf68-88376e1103f9</serial>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <encryption format="luks">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:        <secret type="passphrase" uuid="6e34f9f5-e17c-4f93-9ca0-bf5dc6607a99"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      </encryption>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:95:88:92"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <target dev="tapf44c4abb-00"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/6a64f744-98a9-4399-a0ab-14cc87ca066f/console.log" append="off"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:56:11 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:56:11 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:56:11 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:56:11 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.330 239942 DEBUG nova.compute.manager [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Preparing to wait for external event network-vif-plugged-f44c4abb-008f-4b8d-abcd-08643ef9fdd3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.331 239942 DEBUG oslo_concurrency.lockutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.331 239942 DEBUG oslo_concurrency.lockutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.331 239942 DEBUG oslo_concurrency.lockutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.332 239942 DEBUG nova.virt.libvirt.vif [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:56:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1634896652',display_name='tempest-TransferEncryptedVolumeTest-server-1634896652',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1634896652',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEV56Jk6IDRxyXFlb7xWBOMScnav9Xc5tHSoNY1YUEwOZFWGs8M7XZsrLboufTVEeGeJR0pbnMty3oYNRNpoAOeyFHYNqJJ2N05DBEMeFPzOD6DLoY1LRALz+j5Rp4/1jQ==',key_name='tempest-TransferEncryptedVolumeTest-773774193',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b39f0e168b54a4b8f976894d21361e6',ramdisk_id='',reservation_id='r-c5g7sdq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-483286292',owner_user_name='tempest-TransferEncryptedVolumeTest-483286292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:56:05Z,user_data=None,user_id='27f1a6fb472c4c5fa2286d0fa48dca34',uuid=6a64f744-98a9-4399-a0ab-14cc87ca066f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "address": "fa:16:3e:95:88:92", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44c4abb-00", "ovs_interfaceid": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.333 239942 DEBUG nova.network.os_vif_util [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converting VIF {"id": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "address": "fa:16:3e:95:88:92", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44c4abb-00", "ovs_interfaceid": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.334 239942 DEBUG nova.network.os_vif_util [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:88:92,bridge_name='br-int',has_traffic_filtering=True,id=f44c4abb-008f-4b8d-abcd-08643ef9fdd3,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44c4abb-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.334 239942 DEBUG os_vif [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:88:92,bridge_name='br-int',has_traffic_filtering=True,id=f44c4abb-008f-4b8d-abcd-08643ef9fdd3,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44c4abb-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.335 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.336 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.336 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.340 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.341 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf44c4abb-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.341 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf44c4abb-00, col_values=(('external_ids', {'iface-id': 'f44c4abb-008f-4b8d-abcd-08643ef9fdd3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:88:92', 'vm-uuid': '6a64f744-98a9-4399-a0ab-14cc87ca066f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.343 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:11 np0005603435 NetworkManager[49097]: <info>  [1769835371.3446] manager: (tapf44c4abb-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.345 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.350 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.351 239942 INFO os_vif [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:88:92,bridge_name='br-int',has_traffic_filtering=True,id=f44c4abb-008f-4b8d-abcd-08643ef9fdd3,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44c4abb-00')#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.404 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.405 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.405 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] No VIF found with MAC fa:16:3e:95:88:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.406 239942 INFO nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Using config drive#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.434 239942 DEBUG nova.storage.rbd_utils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] rbd image 6a64f744-98a9-4399-a0ab-14cc87ca066f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:56:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 202 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 165 KiB/s rd, 3.7 KiB/s wr, 206 op/s
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.709 239942 INFO nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Creating config drive at /var/lib/nova/instances/6a64f744-98a9-4399-a0ab-14cc87ca066f/disk.config#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.712 239942 DEBUG oslo_concurrency.processutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a64f744-98a9-4399-a0ab-14cc87ca066f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzyvtgbmf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.828 239942 DEBUG oslo_concurrency.processutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a64f744-98a9-4399-a0ab-14cc87ca066f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzyvtgbmf" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.850 239942 DEBUG nova.storage.rbd_utils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] rbd image 6a64f744-98a9-4399-a0ab-14cc87ca066f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.853 239942 DEBUG oslo_concurrency.processutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a64f744-98a9-4399-a0ab-14cc87ca066f/disk.config 6a64f744-98a9-4399-a0ab-14cc87ca066f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.988 239942 DEBUG oslo_concurrency.processutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a64f744-98a9-4399-a0ab-14cc87ca066f/disk.config 6a64f744-98a9-4399-a0ab-14cc87ca066f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:11 np0005603435 nova_compute[239938]: 2026-01-31 04:56:11.989 239942 INFO nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Deleting local config drive /var/lib/nova/instances/6a64f744-98a9-4399-a0ab-14cc87ca066f/disk.config because it was imported into RBD.#033[00m
Jan 30 23:56:12 np0005603435 kernel: tapf44c4abb-00: entered promiscuous mode
Jan 30 23:56:12 np0005603435 NetworkManager[49097]: <info>  [1769835372.0440] manager: (tapf44c4abb-00): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Jan 30 23:56:12 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:12Z|00198|binding|INFO|Claiming lport f44c4abb-008f-4b8d-abcd-08643ef9fdd3 for this chassis.
Jan 30 23:56:12 np0005603435 nova_compute[239938]: 2026-01-31 04:56:12.046 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:12 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:12Z|00199|binding|INFO|f44c4abb-008f-4b8d-abcd-08643ef9fdd3: Claiming fa:16:3e:95:88:92 10.100.0.14
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.057 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:88:92 10.100.0.14'], port_security=['fa:16:3e:95:88:92 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6a64f744-98a9-4399-a0ab-14cc87ca066f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a10d9666-b672-4619-83b7-22dc781b5b5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b39f0e168b54a4b8f976894d21361e6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31116304-b672-4fa0-88a2-3aca5935fb40', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21f14c68-4084-427c-b05e-592b1db029c6, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=f44c4abb-008f-4b8d-abcd-08643ef9fdd3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.059 156017 INFO neutron.agent.ovn.metadata.agent [-] Port f44c4abb-008f-4b8d-abcd-08643ef9fdd3 in datapath a10d9666-b672-4619-83b7-22dc781b5b5b bound to our chassis#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.062 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a10d9666-b672-4619-83b7-22dc781b5b5b#033[00m
Jan 30 23:56:12 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:12Z|00200|binding|INFO|Setting lport f44c4abb-008f-4b8d-abcd-08643ef9fdd3 ovn-installed in OVS
Jan 30 23:56:12 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:12Z|00201|binding|INFO|Setting lport f44c4abb-008f-4b8d-abcd-08643ef9fdd3 up in Southbound
Jan 30 23:56:12 np0005603435 nova_compute[239938]: 2026-01-31 04:56:12.066 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:12 np0005603435 systemd-udevd[265770]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:56:12 np0005603435 nova_compute[239938]: 2026-01-31 04:56:12.072 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.076 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[70272d93-e18f-422b-a1a8-1069a227c6d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.077 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa10d9666-b1 in ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.079 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa10d9666-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.080 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6e582f8b-e77d-4922-be41-e019ad9a7786]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.081 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[72c529e2-d7e9-464f-8dfe-c748f0e6789b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:12 np0005603435 systemd-machined[208030]: New machine qemu-20-instance-00000014.
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.089 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[ed39cc33-b481-4aab-90fb-b7af2e52eeac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:12 np0005603435 NetworkManager[49097]: <info>  [1769835372.0906] device (tapf44c4abb-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:56:12 np0005603435 NetworkManager[49097]: <info>  [1769835372.0911] device (tapf44c4abb-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:56:12 np0005603435 systemd[1]: Started Virtual Machine qemu-20-instance-00000014.
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.111 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[e9106440-b437-495f-8c74-ef1b8339725e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.129 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4370e2-abee-40f8-9445-34e5285133c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.133 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[776d2a32-b586-4108-8953-68ecff2f8cee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:12 np0005603435 NetworkManager[49097]: <info>  [1769835372.1347] manager: (tapa10d9666-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/105)
Jan 30 23:56:12 np0005603435 systemd-udevd[265774]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.165 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0408f0-3191-4abe-b40d-cd87263797a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.168 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[50020d2d-0f3c-492c-910f-1f21a67bba48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:12 np0005603435 NetworkManager[49097]: <info>  [1769835372.1864] device (tapa10d9666-b0): carrier: link connected
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.191 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[1de2e844-50b5-4bd3-8ef6-d22c33510b3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.202 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[836d0468-d667-4c1e-b20c-42176c9811a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa10d9666-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:c0:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442658, 'reachable_time': 24072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265803, 'error': None, 'target': 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.220 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd5a9fa-7264-4901-abfd-31d5dd25a526]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:c0da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442658, 'tstamp': 442658}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265804, 'error': None, 'target': 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.231 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4b235e00-a4da-481c-8158-db306a5506fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa10d9666-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:c0:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442658, 'reachable_time': 24072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265805, 'error': None, 'target': 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.255 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4c9fb1-d717-4272-8a09-835c6d6dc8f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.307 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a3bc2d-16b4-4fa9-9005-3d92fd0ce14e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.308 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa10d9666-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.308 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.309 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa10d9666-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:12 np0005603435 nova_compute[239938]: 2026-01-31 04:56:12.310 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:12 np0005603435 NetworkManager[49097]: <info>  [1769835372.3114] manager: (tapa10d9666-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Jan 30 23:56:12 np0005603435 kernel: tapa10d9666-b0: entered promiscuous mode
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.314 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa10d9666-b0, col_values=(('external_ids', {'iface-id': 'b5040674-bbd1-4dc9-b2e1-14712cb60315'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:12 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:12Z|00202|binding|INFO|Releasing lport b5040674-bbd1-4dc9-b2e1-14712cb60315 from this chassis (sb_readonly=0)
Jan 30 23:56:12 np0005603435 nova_compute[239938]: 2026-01-31 04:56:12.315 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:12 np0005603435 nova_compute[239938]: 2026-01-31 04:56:12.322 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.323 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a10d9666-b672-4619-83b7-22dc781b5b5b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a10d9666-b672-4619-83b7-22dc781b5b5b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.324 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c51a1191-7290-44cc-b0f4-d9ab395c50b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.325 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-a10d9666-b672-4619-83b7-22dc781b5b5b
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/a10d9666-b672-4619-83b7-22dc781b5b5b.pid.haproxy
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID a10d9666-b672-4619-83b7-22dc781b5b5b
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:56:12 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:12.326 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'env', 'PROCESS_TAG=haproxy-a10d9666-b672-4619-83b7-22dc781b5b5b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a10d9666-b672-4619-83b7-22dc781b5b5b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:56:12 np0005603435 nova_compute[239938]: 2026-01-31 04:56:12.500 239942 DEBUG nova.compute.manager [req-dfa5ac0d-86e7-4713-96d9-cf5dee0ba2ae req-5cdbf55a-d341-494a-a999-39073b01a4cc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Received event network-vif-plugged-f44c4abb-008f-4b8d-abcd-08643ef9fdd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:12 np0005603435 nova_compute[239938]: 2026-01-31 04:56:12.501 239942 DEBUG oslo_concurrency.lockutils [req-dfa5ac0d-86e7-4713-96d9-cf5dee0ba2ae req-5cdbf55a-d341-494a-a999-39073b01a4cc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:12 np0005603435 nova_compute[239938]: 2026-01-31 04:56:12.501 239942 DEBUG oslo_concurrency.lockutils [req-dfa5ac0d-86e7-4713-96d9-cf5dee0ba2ae req-5cdbf55a-d341-494a-a999-39073b01a4cc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:12 np0005603435 nova_compute[239938]: 2026-01-31 04:56:12.501 239942 DEBUG oslo_concurrency.lockutils [req-dfa5ac0d-86e7-4713-96d9-cf5dee0ba2ae req-5cdbf55a-d341-494a-a999-39073b01a4cc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:12 np0005603435 nova_compute[239938]: 2026-01-31 04:56:12.502 239942 DEBUG nova.compute.manager [req-dfa5ac0d-86e7-4713-96d9-cf5dee0ba2ae req-5cdbf55a-d341-494a-a999-39073b01a4cc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Processing event network-vif-plugged-f44c4abb-008f-4b8d-abcd-08643ef9fdd3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:56:12 np0005603435 podman[265873]: 2026-01-31 04:56:12.675679911 +0000 UTC m=+0.049069658 container create 7b16919fcfc7a6229d0f5a000445c04c388c4c46d4a0379a6a0e262aa814df1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 30 23:56:12 np0005603435 systemd[1]: Started libpod-conmon-7b16919fcfc7a6229d0f5a000445c04c388c4c46d4a0379a6a0e262aa814df1c.scope.
Jan 30 23:56:12 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:56:12 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ccd10ed2e0ec943494d9ef693d32030b86da4081cbf3748cdd79f6d87d321f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:56:12 np0005603435 podman[265873]: 2026-01-31 04:56:12.729859724 +0000 UTC m=+0.103249491 container init 7b16919fcfc7a6229d0f5a000445c04c388c4c46d4a0379a6a0e262aa814df1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:56:12 np0005603435 podman[265873]: 2026-01-31 04:56:12.733745459 +0000 UTC m=+0.107135206 container start 7b16919fcfc7a6229d0f5a000445c04c388c4c46d4a0379a6a0e262aa814df1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 30 23:56:12 np0005603435 podman[265873]: 2026-01-31 04:56:12.649248666 +0000 UTC m=+0.022638433 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:56:12 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[265888]: [NOTICE]   (265892) : New worker (265894) forked
Jan 30 23:56:12 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[265888]: [NOTICE]   (265892) : Loading success.
Jan 30 23:56:12 np0005603435 nova_compute[239938]: 2026-01-31 04:56:12.997 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835357.9959662, a7e679f6-843b-49b7-8455-d5ed363e1b37 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:56:12 np0005603435 nova_compute[239938]: 2026-01-31 04:56:12.997 239942 INFO nova.compute.manager [-] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:56:13 np0005603435 nova_compute[239938]: 2026-01-31 04:56:13.020 239942 DEBUG nova.compute.manager [None req-5b67c5a4-358e-4d40-a391-9e9813496a64 - - - - - -] [instance: a7e679f6-843b-49b7-8455-d5ed363e1b37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:56:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:56:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Jan 30 23:56:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Jan 30 23:56:13 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Jan 30 23:56:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 202 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.8 KiB/s wr, 63 op/s
Jan 30 23:56:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Jan 30 23:56:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Jan 30 23:56:14 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.544 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835374.5439613, 6a64f744-98a9-4399-a0ab-14cc87ca066f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.544 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] VM Started (Lifecycle Event)#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.547 239942 DEBUG nova.compute.manager [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.552 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.556 239942 INFO nova.virt.libvirt.driver [-] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Instance spawned successfully.#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.557 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.565 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.568 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.576 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.576 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.577 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.577 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.578 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.578 239942 DEBUG nova.virt.libvirt.driver [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.607 239942 DEBUG nova.compute.manager [req-a17cf619-dfa2-4144-bff5-b3e2c3fa12a7 req-63c86bc7-61ac-4dc9-994d-72cbfeb6e54a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Received event network-vif-plugged-f44c4abb-008f-4b8d-abcd-08643ef9fdd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.608 239942 DEBUG oslo_concurrency.lockutils [req-a17cf619-dfa2-4144-bff5-b3e2c3fa12a7 req-63c86bc7-61ac-4dc9-994d-72cbfeb6e54a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.608 239942 DEBUG oslo_concurrency.lockutils [req-a17cf619-dfa2-4144-bff5-b3e2c3fa12a7 req-63c86bc7-61ac-4dc9-994d-72cbfeb6e54a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.608 239942 DEBUG oslo_concurrency.lockutils [req-a17cf619-dfa2-4144-bff5-b3e2c3fa12a7 req-63c86bc7-61ac-4dc9-994d-72cbfeb6e54a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.609 239942 DEBUG nova.compute.manager [req-a17cf619-dfa2-4144-bff5-b3e2c3fa12a7 req-63c86bc7-61ac-4dc9-994d-72cbfeb6e54a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] No waiting events found dispatching network-vif-plugged-f44c4abb-008f-4b8d-abcd-08643ef9fdd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.609 239942 WARNING nova.compute.manager [req-a17cf619-dfa2-4144-bff5-b3e2c3fa12a7 req-63c86bc7-61ac-4dc9-994d-72cbfeb6e54a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Received unexpected event network-vif-plugged-f44c4abb-008f-4b8d-abcd-08643ef9fdd3 for instance with vm_state building and task_state spawning.#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.620 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.620 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835374.5442007, 6a64f744-98a9-4399-a0ab-14cc87ca066f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.621 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.645 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.648 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835374.550543, 6a64f744-98a9-4399-a0ab-14cc87ca066f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.648 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.654 239942 INFO nova.compute.manager [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Took 7.39 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.655 239942 DEBUG nova.compute.manager [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.665 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.667 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.692 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.719 239942 INFO nova.compute.manager [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Took 9.70 seconds to build instance.#033[00m
Jan 30 23:56:14 np0005603435 nova_compute[239938]: 2026-01-31 04:56:14.736 239942 DEBUG oslo_concurrency.lockutils [None req-0aeee13d-9261-47b4-a311-44c57799736d 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:15 np0005603435 nova_compute[239938]: 2026-01-31 04:56:15.202 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 202 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.7 KiB/s wr, 117 op/s
Jan 30 23:56:16 np0005603435 nova_compute[239938]: 2026-01-31 04:56:16.345 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:16 np0005603435 nova_compute[239938]: 2026-01-31 04:56:16.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:56:16 np0005603435 nova_compute[239938]: 2026-01-31 04:56:16.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 30 23:56:16 np0005603435 nova_compute[239938]: 2026-01-31 04:56:16.952 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.270965980569033e-06 of space, bias 1.0, pg target 0.0018812897941707098 quantized to 32 (current 32)
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021859791903412885 of space, bias 1.0, pg target 0.6557937571023865 quantized to 32 (current 32)
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.973708507301613e-07 of space, bias 1.0, pg target 0.00020921125521904837 quantized to 32 (current 32)
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006666992949098087 of space, bias 1.0, pg target 0.2000097884729426 quantized to 32 (current 32)
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.926357028089542e-07 of space, bias 4.0, pg target 0.0008311628433707451 quantized to 16 (current 16)
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:56:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 237 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 7.1 MiB/s rd, 2.8 MiB/s wr, 296 op/s
Jan 30 23:56:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:56:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Jan 30 23:56:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Jan 30 23:56:18 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Jan 30 23:56:18 np0005603435 nova_compute[239938]: 2026-01-31 04:56:18.952 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:56:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 237 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 237 op/s
Jan 30 23:56:19 np0005603435 nova_compute[239938]: 2026-01-31 04:56:19.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:56:20 np0005603435 nova_compute[239938]: 2026-01-31 04:56:20.205 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:20 np0005603435 nova_compute[239938]: 2026-01-31 04:56:20.247 239942 DEBUG nova.compute.manager [req-cd5455b5-3957-4f69-a575-4ef8e2862a05 req-71e54ecb-deb1-4f98-9d0f-40f3e1f444b3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Received event network-changed-f44c4abb-008f-4b8d-abcd-08643ef9fdd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:20 np0005603435 nova_compute[239938]: 2026-01-31 04:56:20.247 239942 DEBUG nova.compute.manager [req-cd5455b5-3957-4f69-a575-4ef8e2862a05 req-71e54ecb-deb1-4f98-9d0f-40f3e1f444b3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Refreshing instance network info cache due to event network-changed-f44c4abb-008f-4b8d-abcd-08643ef9fdd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:56:20 np0005603435 nova_compute[239938]: 2026-01-31 04:56:20.248 239942 DEBUG oslo_concurrency.lockutils [req-cd5455b5-3957-4f69-a575-4ef8e2862a05 req-71e54ecb-deb1-4f98-9d0f-40f3e1f444b3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-6a64f744-98a9-4399-a0ab-14cc87ca066f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:56:20 np0005603435 nova_compute[239938]: 2026-01-31 04:56:20.248 239942 DEBUG oslo_concurrency.lockutils [req-cd5455b5-3957-4f69-a575-4ef8e2862a05 req-71e54ecb-deb1-4f98-9d0f-40f3e1f444b3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-6a64f744-98a9-4399-a0ab-14cc87ca066f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:56:20 np0005603435 nova_compute[239938]: 2026-01-31 04:56:20.249 239942 DEBUG nova.network.neutron [req-cd5455b5-3957-4f69-a575-4ef8e2862a05 req-71e54ecb-deb1-4f98-9d0f-40f3e1f444b3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Refreshing network info cache for port f44c4abb-008f-4b8d-abcd-08643ef9fdd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:56:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Jan 30 23:56:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Jan 30 23:56:20 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Jan 30 23:56:20 np0005603435 nova_compute[239938]: 2026-01-31 04:56:20.882 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:56:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:56:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1433166220' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:56:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:56:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1433166220' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.103 239942 DEBUG oslo_concurrency.lockutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.104 239942 DEBUG oslo_concurrency.lockutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.120 239942 DEBUG nova.compute.manager [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.214 239942 DEBUG oslo_concurrency.lockutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.215 239942 DEBUG oslo_concurrency.lockutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.221 239942 DEBUG nova.virt.hardware [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.222 239942 INFO nova.compute.claims [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.345 239942 DEBUG oslo_concurrency.processutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.368 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.398 239942 DEBUG nova.network.neutron [req-cd5455b5-3957-4f69-a575-4ef8e2862a05 req-71e54ecb-deb1-4f98-9d0f-40f3e1f444b3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Updated VIF entry in instance network info cache for port f44c4abb-008f-4b8d-abcd-08643ef9fdd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.400 239942 DEBUG nova.network.neutron [req-cd5455b5-3957-4f69-a575-4ef8e2862a05 req-71e54ecb-deb1-4f98-9d0f-40f3e1f444b3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Updating instance_info_cache with network_info: [{"id": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "address": "fa:16:3e:95:88:92", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44c4abb-00", "ovs_interfaceid": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.442 239942 DEBUG oslo_concurrency.lockutils [req-cd5455b5-3957-4f69-a575-4ef8e2862a05 req-71e54ecb-deb1-4f98-9d0f-40f3e1f444b3 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-6a64f744-98a9-4399-a0ab-14cc87ca066f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:56:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 248 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.0 MiB/s wr, 300 op/s
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.889 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:56:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:56:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4119278885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.934 239942 DEBUG oslo_concurrency.processutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.945 239942 DEBUG nova.compute.provider_tree [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.961 239942 DEBUG nova.scheduler.client.report [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.983 239942 DEBUG oslo_concurrency.lockutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:21 np0005603435 nova_compute[239938]: 2026-01-31 04:56:21.985 239942 DEBUG nova.compute.manager [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.032 239942 DEBUG nova.compute.manager [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.033 239942 DEBUG nova.network.neutron [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.050 239942 INFO nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.065 239942 DEBUG nova.compute.manager [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.104 239942 INFO nova.virt.block_device [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Booting with volume 45fe01a6-1d82-456a-b502-568386cb1d48 at /dev/vda#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.239 239942 DEBUG os_brick.utils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.241 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.259 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.259 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[84ae353b-886e-4c60-a79a-8cac415a70ce]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.261 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.274 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.275 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[2b92bd52-64c0-47b9-9fd6-14b4d227cc94]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.277 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.287 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.288 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[b4f5c247-089c-41dd-98e0-7efd8afd78c0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.290 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e59970-4b36-4cae-8394-113f18f99e7a]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.290 239942 DEBUG oslo_concurrency.processutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.307 239942 DEBUG oslo_concurrency.processutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "nvme version" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.310 239942 DEBUG os_brick.initiator.connectors.lightos [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.311 239942 DEBUG os_brick.initiator.connectors.lightos [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.312 239942 DEBUG os_brick.initiator.connectors.lightos [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.313 239942 DEBUG os_brick.utils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.313 239942 DEBUG nova.virt.block_device [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Updating existing volume attachment record: 4831a096-b549-4fff-8fcb-efc550d31270 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.386 239942 DEBUG nova.policy [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e10f13b98624406985dec6a5dcc391c7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:56:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Jan 30 23:56:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Jan 30 23:56:22 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Jan 30 23:56:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:56:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873038603' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:56:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:56:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873038603' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 30 23:56:22 np0005603435 nova_compute[239938]: 2026-01-31 04:56:22.910 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:56:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:56:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/662293261' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:56:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:56:23 np0005603435 nova_compute[239938]: 2026-01-31 04:56:23.262 239942 DEBUG nova.network.neutron [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Successfully created port: eea6eb19-8395-4c5d-adcb-9b91f5ac0310 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:56:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 248 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 175 KiB/s rd, 747 KiB/s wr, 155 op/s
Jan 30 23:56:23 np0005603435 nova_compute[239938]: 2026-01-31 04:56:23.572 239942 DEBUG nova.compute.manager [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:56:23 np0005603435 nova_compute[239938]: 2026-01-31 04:56:23.574 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:56:23 np0005603435 nova_compute[239938]: 2026-01-31 04:56:23.576 239942 INFO nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Creating image(s)#033[00m
Jan 30 23:56:23 np0005603435 nova_compute[239938]: 2026-01-31 04:56:23.578 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 30 23:56:23 np0005603435 nova_compute[239938]: 2026-01-31 04:56:23.578 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Ensure instance console log exists: /var/lib/nova/instances/2d5c8c52-0781-43ca-9fd1-58e205d20e4b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:56:23 np0005603435 nova_compute[239938]: 2026-01-31 04:56:23.579 239942 DEBUG oslo_concurrency.lockutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:23 np0005603435 nova_compute[239938]: 2026-01-31 04:56:23.579 239942 DEBUG oslo_concurrency.lockutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:23 np0005603435 nova_compute[239938]: 2026-01-31 04:56:23.580 239942 DEBUG oslo_concurrency.lockutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:56:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4197849146' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:56:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:56:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4197849146' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:56:23 np0005603435 nova_compute[239938]: 2026-01-31 04:56:23.917 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:56:23 np0005603435 nova_compute[239938]: 2026-01-31 04:56:23.963 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.005 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.006 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.007 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.007 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.008 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.191 239942 DEBUG nova.network.neutron [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Successfully updated port: eea6eb19-8395-4c5d-adcb-9b91f5ac0310 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.207 239942 DEBUG oslo_concurrency.lockutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "refresh_cache-2d5c8c52-0781-43ca-9fd1-58e205d20e4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.208 239942 DEBUG oslo_concurrency.lockutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquired lock "refresh_cache-2d5c8c52-0781-43ca-9fd1-58e205d20e4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.208 239942 DEBUG nova.network.neutron [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.273 239942 DEBUG nova.compute.manager [req-9fdf2f0d-fac1-49be-9ac3-80178c863e76 req-58a6eeec-8d71-4ae7-b7ca-b62c5cfc4d9e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Received event network-changed-eea6eb19-8395-4c5d-adcb-9b91f5ac0310 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.273 239942 DEBUG nova.compute.manager [req-9fdf2f0d-fac1-49be-9ac3-80178c863e76 req-58a6eeec-8d71-4ae7-b7ca-b62c5cfc4d9e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Refreshing instance network info cache due to event network-changed-eea6eb19-8395-4c5d-adcb-9b91f5ac0310. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.273 239942 DEBUG oslo_concurrency.lockutils [req-9fdf2f0d-fac1-49be-9ac3-80178c863e76 req-58a6eeec-8d71-4ae7-b7ca-b62c5cfc4d9e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-2d5c8c52-0781-43ca-9fd1-58e205d20e4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:56:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:56:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/924347957' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.580 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.583 239942 DEBUG nova.network.neutron [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.646 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.646 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.819 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.820 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4213MB free_disk=59.98776103835553GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.820 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.820 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.904 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance 6a64f744-98a9-4399-a0ab-14cc87ca066f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.905 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance 2d5c8c52-0781-43ca-9fd1-58e205d20e4b actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.905 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.905 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:56:24 np0005603435 nova_compute[239938]: 2026-01-31 04:56:24.961 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.207 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.213 239942 DEBUG nova.network.neutron [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Updating instance_info_cache with network_info: [{"id": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "address": "fa:16:3e:71:4d:aa", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea6eb19-83", "ovs_interfaceid": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.234 239942 DEBUG oslo_concurrency.lockutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Releasing lock "refresh_cache-2d5c8c52-0781-43ca-9fd1-58e205d20e4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.235 239942 DEBUG nova.compute.manager [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Instance network_info: |[{"id": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "address": "fa:16:3e:71:4d:aa", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea6eb19-83", "ovs_interfaceid": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.235 239942 DEBUG oslo_concurrency.lockutils [req-9fdf2f0d-fac1-49be-9ac3-80178c863e76 req-58a6eeec-8d71-4ae7-b7ca-b62c5cfc4d9e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-2d5c8c52-0781-43ca-9fd1-58e205d20e4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.235 239942 DEBUG nova.network.neutron [req-9fdf2f0d-fac1-49be-9ac3-80178c863e76 req-58a6eeec-8d71-4ae7-b7ca-b62c5cfc4d9e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Refreshing network info cache for port eea6eb19-8395-4c5d-adcb-9b91f5ac0310 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.238 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Start _get_guest_xml network_info=[{"id": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "address": "fa:16:3e:71:4d:aa", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea6eb19-83", "ovs_interfaceid": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'attachment_id': '4831a096-b549-4fff-8fcb-efc550d31270', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-45fe01a6-1d82-456a-b502-568386cb1d48', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '45fe01a6-1d82-456a-b502-568386cb1d48', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '2d5c8c52-0781-43ca-9fd1-58e205d20e4b', 'attached_at': '', 'detached_at': '', 'volume_id': '45fe01a6-1d82-456a-b502-568386cb1d48', 'serial': '45fe01a6-1d82-456a-b502-568386cb1d48'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.242 239942 WARNING nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.245 239942 DEBUG nova.virt.libvirt.host [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.245 239942 DEBUG nova.virt.libvirt.host [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.252 239942 DEBUG nova.virt.libvirt.host [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.252 239942 DEBUG nova.virt.libvirt.host [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.253 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.253 239942 DEBUG nova.virt.hardware [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.253 239942 DEBUG nova.virt.hardware [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.254 239942 DEBUG nova.virt.hardware [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.254 239942 DEBUG nova.virt.hardware [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.254 239942 DEBUG nova.virt.hardware [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.254 239942 DEBUG nova.virt.hardware [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.255 239942 DEBUG nova.virt.hardware [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.255 239942 DEBUG nova.virt.hardware [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.255 239942 DEBUG nova.virt.hardware [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.255 239942 DEBUG nova.virt.hardware [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.256 239942 DEBUG nova.virt.hardware [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.275 239942 DEBUG nova.storage.rbd_utils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image 2d5c8c52-0781-43ca-9fd1-58e205d20e4b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.278 239942 DEBUG oslo_concurrency.processutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 248 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 2.3 MiB/s wr, 183 op/s
Jan 30 23:56:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:56:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2680344976' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.553 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.557 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.581 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.610 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.611 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:25 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:25Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:95:88:92 10.100.0.14
Jan 30 23:56:25 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:25Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:95:88:92 10.100.0.14
Jan 30 23:56:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:56:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1892759232' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:56:25 np0005603435 nova_compute[239938]: 2026-01-31 04:56:25.839 239942 DEBUG oslo_concurrency.processutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.212 239942 DEBUG nova.virt.libvirt.vif [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:56:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2090634245',display_name='tempest-TestVolumeBootPattern-server-2090634245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2090634245',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMCGQWsMIpUReejiJa4LLn2uTMRcPNVUKy3r7lp0BAh1r0nLhjEfcHskPuueezEtVAWbrIlq/WV3PYQ0vKGreYOPxpY3Xnz3OjrpOhX/Q6AIWXZTJpS2jBEA3mt0kVgrg==',key_name='tempest-TestVolumeBootPattern-1354425942',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-82s9ye08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:56:22Z,user_data=None,user_id='e10f13b98624406985dec6a5dcc391c7',uuid=2d5c8c52-0781-43ca-9fd1-58e205d20e4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "address": "fa:16:3e:71:4d:aa", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea6eb19-83", "ovs_interfaceid": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.213 239942 DEBUG nova.network.os_vif_util [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "address": "fa:16:3e:71:4d:aa", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea6eb19-83", "ovs_interfaceid": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.213 239942 DEBUG nova.network.os_vif_util [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:4d:aa,bridge_name='br-int',has_traffic_filtering=True,id=eea6eb19-8395-4c5d-adcb-9b91f5ac0310,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeea6eb19-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.214 239942 DEBUG nova.objects.instance [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d5c8c52-0781-43ca-9fd1-58e205d20e4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.291 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  <uuid>2d5c8c52-0781-43ca-9fd1-58e205d20e4b</uuid>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  <name>instance-00000015</name>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <nova:name>tempest-TestVolumeBootPattern-server-2090634245</nova:name>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:56:25</nova:creationTime>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:        <nova:user uuid="e10f13b98624406985dec6a5dcc391c7">tempest-TestVolumeBootPattern-1782423025-project-member</nova:user>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:        <nova:project uuid="e332802dd6cf49c59f8ed38e70addb0e">tempest-TestVolumeBootPattern-1782423025</nova:project>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:        <nova:port uuid="eea6eb19-8395-4c5d-adcb-9b91f5ac0310">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <entry name="serial">2d5c8c52-0781-43ca-9fd1-58e205d20e4b</entry>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <entry name="uuid">2d5c8c52-0781-43ca-9fd1-58e205d20e4b</entry>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/2d5c8c52-0781-43ca-9fd1-58e205d20e4b_disk.config">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="volumes/volume-45fe01a6-1d82-456a-b502-568386cb1d48">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <serial>45fe01a6-1d82-456a-b502-568386cb1d48</serial>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:71:4d:aa"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <target dev="tapeea6eb19-83"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/2d5c8c52-0781-43ca-9fd1-58e205d20e4b/console.log" append="off"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:56:26 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:56:26 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:56:26 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:56:26 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.292 239942 DEBUG nova.compute.manager [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Preparing to wait for external event network-vif-plugged-eea6eb19-8395-4c5d-adcb-9b91f5ac0310 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.292 239942 DEBUG oslo_concurrency.lockutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.292 239942 DEBUG oslo_concurrency.lockutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.293 239942 DEBUG oslo_concurrency.lockutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.294 239942 DEBUG nova.virt.libvirt.vif [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:56:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2090634245',display_name='tempest-TestVolumeBootPattern-server-2090634245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2090634245',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMCGQWsMIpUReejiJa4LLn2uTMRcPNVUKy3r7lp0BAh1r0nLhjEfcHskPuueezEtVAWbrIlq/WV3PYQ0vKGreYOPxpY3Xnz3OjrpOhX/Q6AIWXZTJpS2jBEA3mt0kVgrg==',key_name='tempest-TestVolumeBootPattern-1354425942',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-82s9ye08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:56:22Z,user_data=None,user_id='e10f13b98624406985dec6a5dcc391c7',uuid=2d5c8c52-0781-43ca-9fd1-58e205d20e4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "address": "fa:16:3e:71:4d:aa", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea6eb19-83", "ovs_interfaceid": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.294 239942 DEBUG nova.network.os_vif_util [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "address": "fa:16:3e:71:4d:aa", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea6eb19-83", "ovs_interfaceid": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.295 239942 DEBUG nova.network.os_vif_util [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:4d:aa,bridge_name='br-int',has_traffic_filtering=True,id=eea6eb19-8395-4c5d-adcb-9b91f5ac0310,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeea6eb19-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.295 239942 DEBUG os_vif [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:4d:aa,bridge_name='br-int',has_traffic_filtering=True,id=eea6eb19-8395-4c5d-adcb-9b91f5ac0310,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeea6eb19-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.296 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.296 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.297 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.302 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.302 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeea6eb19-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.303 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeea6eb19-83, col_values=(('external_ids', {'iface-id': 'eea6eb19-8395-4c5d-adcb-9b91f5ac0310', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:4d:aa', 'vm-uuid': '2d5c8c52-0781-43ca-9fd1-58e205d20e4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.305 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:26 np0005603435 NetworkManager[49097]: <info>  [1769835386.3068] manager: (tapeea6eb19-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.306 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.311 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.312 239942 INFO os_vif [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:4d:aa,bridge_name='br-int',has_traffic_filtering=True,id=eea6eb19-8395-4c5d-adcb-9b91f5ac0310,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeea6eb19-83')#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.432 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.433 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.433 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No VIF found with MAC fa:16:3e:71:4d:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.434 239942 INFO nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Using config drive#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.462 239942 DEBUG nova.storage.rbd_utils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image 2d5c8c52-0781-43ca-9fd1-58e205d20e4b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.535 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.535 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.578 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.587 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.589 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.804 239942 INFO nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Creating config drive at /var/lib/nova/instances/2d5c8c52-0781-43ca-9fd1-58e205d20e4b/disk.config#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.809 239942 DEBUG oslo_concurrency.processutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d5c8c52-0781-43ca-9fd1-58e205d20e4b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjothmewu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.932 239942 DEBUG oslo_concurrency.processutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d5c8c52-0781-43ca-9fd1-58e205d20e4b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjothmewu" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.963 239942 DEBUG nova.storage.rbd_utils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image 2d5c8c52-0781-43ca-9fd1-58e205d20e4b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:56:26 np0005603435 nova_compute[239938]: 2026-01-31 04:56:26.966 239942 DEBUG oslo_concurrency.processutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d5c8c52-0781-43ca-9fd1-58e205d20e4b/disk.config 2d5c8c52-0781-43ca-9fd1-58e205d20e4b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.106 239942 DEBUG oslo_concurrency.processutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d5c8c52-0781-43ca-9fd1-58e205d20e4b/disk.config 2d5c8c52-0781-43ca-9fd1-58e205d20e4b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.106 239942 INFO nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Deleting local config drive /var/lib/nova/instances/2d5c8c52-0781-43ca-9fd1-58e205d20e4b/disk.config because it was imported into RBD.#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.125 239942 DEBUG nova.network.neutron [req-9fdf2f0d-fac1-49be-9ac3-80178c863e76 req-58a6eeec-8d71-4ae7-b7ca-b62c5cfc4d9e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Updated VIF entry in instance network info cache for port eea6eb19-8395-4c5d-adcb-9b91f5ac0310. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.126 239942 DEBUG nova.network.neutron [req-9fdf2f0d-fac1-49be-9ac3-80178c863e76 req-58a6eeec-8d71-4ae7-b7ca-b62c5cfc4d9e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Updating instance_info_cache with network_info: [{"id": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "address": "fa:16:3e:71:4d:aa", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea6eb19-83", "ovs_interfaceid": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.143 239942 DEBUG oslo_concurrency.lockutils [req-9fdf2f0d-fac1-49be-9ac3-80178c863e76 req-58a6eeec-8d71-4ae7-b7ca-b62c5cfc4d9e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-2d5c8c52-0781-43ca-9fd1-58e205d20e4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:56:27 np0005603435 kernel: tapeea6eb19-83: entered promiscuous mode
Jan 30 23:56:27 np0005603435 NetworkManager[49097]: <info>  [1769835387.1528] manager: (tapeea6eb19-83): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Jan 30 23:56:27 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:27Z|00203|binding|INFO|Claiming lport eea6eb19-8395-4c5d-adcb-9b91f5ac0310 for this chassis.
Jan 30 23:56:27 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:27Z|00204|binding|INFO|eea6eb19-8395-4c5d-adcb-9b91f5ac0310: Claiming fa:16:3e:71:4d:aa 10.100.0.14
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.155 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.164 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.165 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:4d:aa 10.100.0.14'], port_security=['fa:16:3e:71:4d:aa 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2d5c8c52-0781-43ca-9fd1-58e205d20e4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5925722f-3c3e-42bd-9802-ef7105d62a1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02525b3d-5afa-441f-ab06-d5abe31dc4af, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=eea6eb19-8395-4c5d-adcb-9b91f5ac0310) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:56:27 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:27Z|00205|binding|INFO|Setting lport eea6eb19-8395-4c5d-adcb-9b91f5ac0310 up in Southbound
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.167 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:27 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:27Z|00206|binding|INFO|Setting lport eea6eb19-8395-4c5d-adcb-9b91f5ac0310 ovn-installed in OVS
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.169 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.170 156017 INFO neutron.agent.ovn.metadata.agent [-] Port eea6eb19-8395-4c5d-adcb-9b91f5ac0310 in datapath 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 bound to our chassis#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.175 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.187 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[1fab755a-2930-44c5-80d0-a53d966fdecf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.188 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5b0cf2db-21 in ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:56:27 np0005603435 systemd-machined[208030]: New machine qemu-21-instance-00000015.
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.190 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5b0cf2db-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.190 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f5fc2e-39f9-4b4f-8659-1b83e27cb880]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.192 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[639f78ee-9f12-418b-b7ec-af64e4c2a1f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:27 np0005603435 systemd[1]: Started Virtual Machine qemu-21-instance-00000015.
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.204 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[35105248-6268-444a-a0c1-552fee988d73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:27 np0005603435 systemd-udevd[266099]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.228 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[82a47c65-e1e9-4852-9b4d-1cb20ee0f2ab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:27 np0005603435 NetworkManager[49097]: <info>  [1769835387.2405] device (tapeea6eb19-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:56:27 np0005603435 NetworkManager[49097]: <info>  [1769835387.2416] device (tapeea6eb19-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.259 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2ca8d4-584a-40fb-bf27-35c45a694c23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.265 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[66ef945e-de99-44cd-9931-efe16587a18f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:27 np0005603435 NetworkManager[49097]: <info>  [1769835387.2664] manager: (tap5b0cf2db-20): new Veth device (/org/freedesktop/NetworkManager/Devices/109)
Jan 30 23:56:27 np0005603435 systemd-udevd[266107]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.296 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[d71adc30-0aa4-47a6-aa01-b6c9b5927ff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.300 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4907ac-f9cf-4edd-a5f6-ae2afd1fd7c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:27 np0005603435 NetworkManager[49097]: <info>  [1769835387.3212] device (tap5b0cf2db-20): carrier: link connected
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.327 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[304d2e77-0ec0-4cd4-8b68-ca039ad2dc17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.344 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[1aefd011-b7b5-4b4b-85ba-f03b24c1bd69]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b0cf2db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:f7:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444172, 'reachable_time': 23835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266129, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.350 239942 DEBUG nova.compute.manager [req-0a37c031-c5ad-4fcf-a7ac-2d2304a52aed req-8c81c732-7bec-4ea5-b7cf-a7650d2d9d7d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Received event network-vif-plugged-eea6eb19-8395-4c5d-adcb-9b91f5ac0310 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.350 239942 DEBUG oslo_concurrency.lockutils [req-0a37c031-c5ad-4fcf-a7ac-2d2304a52aed req-8c81c732-7bec-4ea5-b7cf-a7650d2d9d7d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.351 239942 DEBUG oslo_concurrency.lockutils [req-0a37c031-c5ad-4fcf-a7ac-2d2304a52aed req-8c81c732-7bec-4ea5-b7cf-a7650d2d9d7d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.351 239942 DEBUG oslo_concurrency.lockutils [req-0a37c031-c5ad-4fcf-a7ac-2d2304a52aed req-8c81c732-7bec-4ea5-b7cf-a7650d2d9d7d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.351 239942 DEBUG nova.compute.manager [req-0a37c031-c5ad-4fcf-a7ac-2d2304a52aed req-8c81c732-7bec-4ea5-b7cf-a7650d2d9d7d c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Processing event network-vif-plugged-eea6eb19-8395-4c5d-adcb-9b91f5ac0310 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.358 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[a10a273f-78c4-44e9-b64b-08d30a69e2d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:f719'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444172, 'tstamp': 444172}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266130, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.375 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[7e1779ca-490b-4091-9778-a041e2fe405e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b0cf2db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:f7:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444172, 'reachable_time': 23835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266131, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.406 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[eb331c92-303c-4faf-8d1f-424137307fc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 308 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 861 KiB/s rd, 8.8 MiB/s wr, 301 op/s
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.461 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7be0c0-125e-466f-a99d-ec1e6df373b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.462 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b0cf2db-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.463 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.463 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b0cf2db-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.465 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:27 np0005603435 NetworkManager[49097]: <info>  [1769835387.4664] manager: (tap5b0cf2db-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Jan 30 23:56:27 np0005603435 kernel: tap5b0cf2db-20: entered promiscuous mode
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.468 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.471 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b0cf2db-20, col_values=(('external_ids', {'iface-id': '07e657c3-16d2-4095-9f39-32a275cb472e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:27 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:27Z|00207|binding|INFO|Releasing lport 07e657c3-16d2-4095-9f39-32a275cb472e from this chassis (sb_readonly=0)
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.472 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.475 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.476 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3cf109-3324-46f4-91ef-d1245d909dde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.476 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.pid.haproxy
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.477 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'env', 'PROCESS_TAG=haproxy-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.484 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Jan 30 23:56:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Jan 30 23:56:27 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.701 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835387.7010965, 2d5c8c52-0781-43ca-9fd1-58e205d20e4b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.702 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] VM Started (Lifecycle Event)#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.703 239942 DEBUG nova.compute.manager [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.710 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.715 239942 INFO nova.virt.libvirt.driver [-] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Instance spawned successfully.#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.715 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.721 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.725 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.733 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.733 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.734 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.734 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.734 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.735 239942 DEBUG nova.virt.libvirt.driver [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.740 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.741 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835387.7012186, 2d5c8c52-0781-43ca-9fd1-58e205d20e4b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.741 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:56:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:27.763 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.764 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.765 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.769 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835387.7097356, 2d5c8c52-0781-43ca-9fd1-58e205d20e4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.769 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.787 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.790 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.795 239942 INFO nova.compute.manager [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Took 4.22 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.795 239942 DEBUG nova.compute.manager [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.821 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:56:27 np0005603435 podman[266205]: 2026-01-31 04:56:27.849124655 +0000 UTC m=+0.042324034 container create 6c3263e4ba073baf6b6acdd3ffda102ddde5ab297f8a2abefaeee61f6035363f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.850 239942 INFO nova.compute.manager [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Took 6.66 seconds to build instance.#033[00m
Jan 30 23:56:27 np0005603435 nova_compute[239938]: 2026-01-31 04:56:27.865 239942 DEBUG oslo_concurrency.lockutils [None req-1b79e3c4-f087-43ee-a076-24fd6aef8b0e e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:27 np0005603435 systemd[1]: Started libpod-conmon-6c3263e4ba073baf6b6acdd3ffda102ddde5ab297f8a2abefaeee61f6035363f.scope.
Jan 30 23:56:27 np0005603435 podman[266205]: 2026-01-31 04:56:27.827403995 +0000 UTC m=+0.020603374 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:56:27 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:56:27 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c3cd6557d48bbc0b4a1b3b0ad3edd9e1e4f3c94e8fd3636dae4727317c0007b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:56:27 np0005603435 podman[266205]: 2026-01-31 04:56:27.958504485 +0000 UTC m=+0.151703904 container init 6c3263e4ba073baf6b6acdd3ffda102ddde5ab297f8a2abefaeee61f6035363f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:56:27 np0005603435 podman[266205]: 2026-01-31 04:56:27.964200465 +0000 UTC m=+0.157399844 container start 6c3263e4ba073baf6b6acdd3ffda102ddde5ab297f8a2abefaeee61f6035363f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 30 23:56:27 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[266221]: [NOTICE]   (266225) : New worker (266227) forked
Jan 30 23:56:27 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[266221]: [NOTICE]   (266225) : Loading success.
Jan 30 23:56:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:28.031 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:56:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:28.032 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:56:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Jan 30 23:56:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Jan 30 23:56:28 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Jan 30 23:56:29 np0005603435 nova_compute[239938]: 2026-01-31 04:56:29.122 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:56:29 np0005603435 nova_compute[239938]: 2026-01-31 04:56:29.147 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Triggering sync for uuid 6a64f744-98a9-4399-a0ab-14cc87ca066f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 30 23:56:29 np0005603435 nova_compute[239938]: 2026-01-31 04:56:29.147 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Triggering sync for uuid 2d5c8c52-0781-43ca-9fd1-58e205d20e4b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 30 23:56:29 np0005603435 nova_compute[239938]: 2026-01-31 04:56:29.148 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "6a64f744-98a9-4399-a0ab-14cc87ca066f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:29 np0005603435 nova_compute[239938]: 2026-01-31 04:56:29.148 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:29 np0005603435 nova_compute[239938]: 2026-01-31 04:56:29.148 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:29 np0005603435 nova_compute[239938]: 2026-01-31 04:56:29.148 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:29 np0005603435 nova_compute[239938]: 2026-01-31 04:56:29.177 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:29 np0005603435 nova_compute[239938]: 2026-01-31 04:56:29.179 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 308 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 846 KiB/s rd, 9.6 MiB/s wr, 214 op/s
Jan 30 23:56:29 np0005603435 nova_compute[239938]: 2026-01-31 04:56:29.489 239942 DEBUG nova.compute.manager [req-2e70dac6-5f0f-4d4a-be4b-e1ef86afe9fc req-5ee6ee28-e5b8-43ad-8e4d-306d2f84d248 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Received event network-vif-plugged-eea6eb19-8395-4c5d-adcb-9b91f5ac0310 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:29 np0005603435 nova_compute[239938]: 2026-01-31 04:56:29.489 239942 DEBUG oslo_concurrency.lockutils [req-2e70dac6-5f0f-4d4a-be4b-e1ef86afe9fc req-5ee6ee28-e5b8-43ad-8e4d-306d2f84d248 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:29 np0005603435 nova_compute[239938]: 2026-01-31 04:56:29.489 239942 DEBUG oslo_concurrency.lockutils [req-2e70dac6-5f0f-4d4a-be4b-e1ef86afe9fc req-5ee6ee28-e5b8-43ad-8e4d-306d2f84d248 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:29 np0005603435 nova_compute[239938]: 2026-01-31 04:56:29.489 239942 DEBUG oslo_concurrency.lockutils [req-2e70dac6-5f0f-4d4a-be4b-e1ef86afe9fc req-5ee6ee28-e5b8-43ad-8e4d-306d2f84d248 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:29 np0005603435 nova_compute[239938]: 2026-01-31 04:56:29.489 239942 DEBUG nova.compute.manager [req-2e70dac6-5f0f-4d4a-be4b-e1ef86afe9fc req-5ee6ee28-e5b8-43ad-8e4d-306d2f84d248 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] No waiting events found dispatching network-vif-plugged-eea6eb19-8395-4c5d-adcb-9b91f5ac0310 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:56:29 np0005603435 nova_compute[239938]: 2026-01-31 04:56:29.490 239942 WARNING nova.compute.manager [req-2e70dac6-5f0f-4d4a-be4b-e1ef86afe9fc req-5ee6ee28-e5b8-43ad-8e4d-306d2f84d248 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Received unexpected event network-vif-plugged-eea6eb19-8395-4c5d-adcb-9b91f5ac0310 for instance with vm_state active and task_state None.#033[00m
Jan 30 23:56:30 np0005603435 nova_compute[239938]: 2026-01-31 04:56:30.209 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:31 np0005603435 podman[266236]: 2026-01-31 04:56:31.104797971 +0000 UTC m=+0.066413153 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 30 23:56:31 np0005603435 podman[266237]: 2026-01-31 04:56:31.109433244 +0000 UTC m=+0.071859766 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:56:31 np0005603435 nova_compute[239938]: 2026-01-31 04:56:31.305 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 317 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 8.7 MiB/s wr, 242 op/s
Jan 30 23:56:31 np0005603435 nova_compute[239938]: 2026-01-31 04:56:31.742 239942 DEBUG nova.compute.manager [req-e0223b46-0d2d-462a-b718-22b6a6da871b req-de12fb16-d3d2-4066-921b-d878fa83b2c5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Received event network-changed-eea6eb19-8395-4c5d-adcb-9b91f5ac0310 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:31 np0005603435 nova_compute[239938]: 2026-01-31 04:56:31.742 239942 DEBUG nova.compute.manager [req-e0223b46-0d2d-462a-b718-22b6a6da871b req-de12fb16-d3d2-4066-921b-d878fa83b2c5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Refreshing instance network info cache due to event network-changed-eea6eb19-8395-4c5d-adcb-9b91f5ac0310. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:56:31 np0005603435 nova_compute[239938]: 2026-01-31 04:56:31.743 239942 DEBUG oslo_concurrency.lockutils [req-e0223b46-0d2d-462a-b718-22b6a6da871b req-de12fb16-d3d2-4066-921b-d878fa83b2c5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-2d5c8c52-0781-43ca-9fd1-58e205d20e4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:56:31 np0005603435 nova_compute[239938]: 2026-01-31 04:56:31.743 239942 DEBUG oslo_concurrency.lockutils [req-e0223b46-0d2d-462a-b718-22b6a6da871b req-de12fb16-d3d2-4066-921b-d878fa83b2c5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-2d5c8c52-0781-43ca-9fd1-58e205d20e4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:56:31 np0005603435 nova_compute[239938]: 2026-01-31 04:56:31.743 239942 DEBUG nova.network.neutron [req-e0223b46-0d2d-462a-b718-22b6a6da871b req-de12fb16-d3d2-4066-921b-d878fa83b2c5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Refreshing network info cache for port eea6eb19-8395-4c5d-adcb-9b91f5ac0310 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:56:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Jan 30 23:56:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Jan 30 23:56:32 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Jan 30 23:56:32 np0005603435 nova_compute[239938]: 2026-01-31 04:56:32.871 239942 DEBUG nova.network.neutron [req-e0223b46-0d2d-462a-b718-22b6a6da871b req-de12fb16-d3d2-4066-921b-d878fa83b2c5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Updated VIF entry in instance network info cache for port eea6eb19-8395-4c5d-adcb-9b91f5ac0310. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:56:32 np0005603435 nova_compute[239938]: 2026-01-31 04:56:32.871 239942 DEBUG nova.network.neutron [req-e0223b46-0d2d-462a-b718-22b6a6da871b req-de12fb16-d3d2-4066-921b-d878fa83b2c5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Updating instance_info_cache with network_info: [{"id": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "address": "fa:16:3e:71:4d:aa", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea6eb19-83", "ovs_interfaceid": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:56:32 np0005603435 nova_compute[239938]: 2026-01-31 04:56:32.895 239942 DEBUG oslo_concurrency.lockutils [req-e0223b46-0d2d-462a-b718-22b6a6da871b req-de12fb16-d3d2-4066-921b-d878fa83b2c5 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-2d5c8c52-0781-43ca-9fd1-58e205d20e4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:56:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:56:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2895401322' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:56:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:56:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2895401322' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:56:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:56:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/608891538' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:56:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:56:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/608891538' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:56:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:56:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 317 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 627 KiB/s wr, 216 op/s
Jan 30 23:56:35 np0005603435 nova_compute[239938]: 2026-01-31 04:56:35.212 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 317 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 479 KiB/s wr, 209 op/s
Jan 30 23:56:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Jan 30 23:56:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Jan 30 23:56:35 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.175 239942 DEBUG oslo_concurrency.lockutils [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "6a64f744-98a9-4399-a0ab-14cc87ca066f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.176 239942 DEBUG oslo_concurrency.lockutils [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.176 239942 DEBUG oslo_concurrency.lockutils [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.176 239942 DEBUG oslo_concurrency.lockutils [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.177 239942 DEBUG oslo_concurrency.lockutils [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.181 239942 INFO nova.compute.manager [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Terminating instance#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.185 239942 DEBUG nova.compute.manager [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:56:36 np0005603435 kernel: tapf44c4abb-00 (unregistering): left promiscuous mode
Jan 30 23:56:36 np0005603435 NetworkManager[49097]: <info>  [1769835396.2494] device (tapf44c4abb-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:56:36 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:36Z|00208|binding|INFO|Releasing lport f44c4abb-008f-4b8d-abcd-08643ef9fdd3 from this chassis (sb_readonly=0)
Jan 30 23:56:36 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:36Z|00209|binding|INFO|Setting lport f44c4abb-008f-4b8d-abcd-08643ef9fdd3 down in Southbound
Jan 30 23:56:36 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:36Z|00210|binding|INFO|Removing iface tapf44c4abb-00 ovn-installed in OVS
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.260 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:36.264 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:88:92 10.100.0.14'], port_security=['fa:16:3e:95:88:92 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6a64f744-98a9-4399-a0ab-14cc87ca066f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a10d9666-b672-4619-83b7-22dc781b5b5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b39f0e168b54a4b8f976894d21361e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31116304-b672-4fa0-88a2-3aca5935fb40', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21f14c68-4084-427c-b05e-592b1db029c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=f44c4abb-008f-4b8d-abcd-08643ef9fdd3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:56:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:36.266 156017 INFO neutron.agent.ovn.metadata.agent [-] Port f44c4abb-008f-4b8d-abcd-08643ef9fdd3 in datapath a10d9666-b672-4619-83b7-22dc781b5b5b unbound from our chassis#033[00m
Jan 30 23:56:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:36.268 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a10d9666-b672-4619-83b7-22dc781b5b5b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:56:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:36.269 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[948e4eb1-526e-48db-8ad9-8c0b4ef7e044]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:36.270 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b namespace which is not needed anymore#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.275 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:36 np0005603435 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Deactivated successfully.
Jan 30 23:56:36 np0005603435 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Consumed 14.274s CPU time.
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.306 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:36 np0005603435 systemd-machined[208030]: Machine qemu-20-instance-00000014 terminated.
Jan 30 23:56:36 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[265888]: [NOTICE]   (265892) : haproxy version is 2.8.14-c23fe91
Jan 30 23:56:36 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[265888]: [NOTICE]   (265892) : path to executable is /usr/sbin/haproxy
Jan 30 23:56:36 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[265888]: [WARNING]  (265892) : Exiting Master process...
Jan 30 23:56:36 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[265888]: [ALERT]    (265892) : Current worker (265894) exited with code 143 (Terminated)
Jan 30 23:56:36 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[265888]: [WARNING]  (265892) : All workers exited. Exiting... (0)
Jan 30 23:56:36 np0005603435 systemd[1]: libpod-7b16919fcfc7a6229d0f5a000445c04c388c4c46d4a0379a6a0e262aa814df1c.scope: Deactivated successfully.
Jan 30 23:56:36 np0005603435 podman[266300]: 2026-01-31 04:56:36.405133027 +0000 UTC m=+0.038786538 container died 7b16919fcfc7a6229d0f5a000445c04c388c4c46d4a0379a6a0e262aa814df1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.423 239942 INFO nova.virt.libvirt.driver [-] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Instance destroyed successfully.#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.424 239942 DEBUG nova.objects.instance [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lazy-loading 'resources' on Instance uuid 6a64f744-98a9-4399-a0ab-14cc87ca066f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:56:36 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7b16919fcfc7a6229d0f5a000445c04c388c4c46d4a0379a6a0e262aa814df1c-userdata-shm.mount: Deactivated successfully.
Jan 30 23:56:36 np0005603435 systemd[1]: var-lib-containers-storage-overlay-a8ccd10ed2e0ec943494d9ef693d32030b86da4081cbf3748cdd79f6d87d321f-merged.mount: Deactivated successfully.
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.437 239942 DEBUG nova.virt.libvirt.vif [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:56:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1634896652',display_name='tempest-TransferEncryptedVolumeTest-server-1634896652',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1634896652',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEV56Jk6IDRxyXFlb7xWBOMScnav9Xc5tHSoNY1YUEwOZFWGs8M7XZsrLboufTVEeGeJR0pbnMty3oYNRNpoAOeyFHYNqJJ2N05DBEMeFPzOD6DLoY1LRALz+j5Rp4/1jQ==',key_name='tempest-TransferEncryptedVolumeTest-773774193',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:56:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9b39f0e168b54a4b8f976894d21361e6',ramdisk_id='',reservation_id='r-c5g7sdq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-483286292',owner_user_name='tempest-TransferEncryptedVolumeTest-483286292-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:56:14Z,user_data=None,user_id='27f1a6fb472c4c5fa2286d0fa48dca34',uuid=6a64f744-98a9-4399-a0ab-14cc87ca066f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "address": "fa:16:3e:95:88:92", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44c4abb-00", "ovs_interfaceid": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.439 239942 DEBUG nova.network.os_vif_util [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converting VIF {"id": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "address": "fa:16:3e:95:88:92", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44c4abb-00", "ovs_interfaceid": "f44c4abb-008f-4b8d-abcd-08643ef9fdd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.440 239942 DEBUG nova.network.os_vif_util [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:95:88:92,bridge_name='br-int',has_traffic_filtering=True,id=f44c4abb-008f-4b8d-abcd-08643ef9fdd3,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44c4abb-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.440 239942 DEBUG os_vif [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:88:92,bridge_name='br-int',has_traffic_filtering=True,id=f44c4abb-008f-4b8d-abcd-08643ef9fdd3,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44c4abb-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.443 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.443 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf44c4abb-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:36 np0005603435 podman[266300]: 2026-01-31 04:56:36.444675542 +0000 UTC m=+0.078329063 container cleanup 7b16919fcfc7a6229d0f5a000445c04c388c4c46d4a0379a6a0e262aa814df1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.446 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:36 np0005603435 systemd[1]: libpod-conmon-7b16919fcfc7a6229d0f5a000445c04c388c4c46d4a0379a6a0e262aa814df1c.scope: Deactivated successfully.
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.451 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.453 239942 INFO os_vif [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:88:92,bridge_name='br-int',has_traffic_filtering=True,id=f44c4abb-008f-4b8d-abcd-08643ef9fdd3,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44c4abb-00')#033[00m
Jan 30 23:56:36 np0005603435 podman[266338]: 2026-01-31 04:56:36.50111845 +0000 UTC m=+0.040225713 container remove 7b16919fcfc7a6229d0f5a000445c04c388c4c46d4a0379a6a0e262aa814df1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:56:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:36.504 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c1ce7a-9d64-43fd-9a8a-0c1405e3c524]: (4, ('Sat Jan 31 04:56:36 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b (7b16919fcfc7a6229d0f5a000445c04c388c4c46d4a0379a6a0e262aa814df1c)\n7b16919fcfc7a6229d0f5a000445c04c388c4c46d4a0379a6a0e262aa814df1c\nSat Jan 31 04:56:36 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b (7b16919fcfc7a6229d0f5a000445c04c388c4c46d4a0379a6a0e262aa814df1c)\n7b16919fcfc7a6229d0f5a000445c04c388c4c46d4a0379a6a0e262aa814df1c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:36.506 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c18289e4-891f-4225-91e9-c13121a02244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:36.507 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa10d9666-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:36 np0005603435 kernel: tapa10d9666-b0: left promiscuous mode
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.508 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:36.512 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[5662deee-538a-4ca9-985b-dbf08b99765f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.514 239942 DEBUG nova.compute.manager [req-53c9f7c0-efe2-4996-8f50-ad2ae80dec37 req-dbb662a1-317b-4a88-a522-fdbc576bc511 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Received event network-vif-unplugged-f44c4abb-008f-4b8d-abcd-08643ef9fdd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.515 239942 DEBUG oslo_concurrency.lockutils [req-53c9f7c0-efe2-4996-8f50-ad2ae80dec37 req-dbb662a1-317b-4a88-a522-fdbc576bc511 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.516 239942 DEBUG oslo_concurrency.lockutils [req-53c9f7c0-efe2-4996-8f50-ad2ae80dec37 req-dbb662a1-317b-4a88-a522-fdbc576bc511 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.516 239942 DEBUG oslo_concurrency.lockutils [req-53c9f7c0-efe2-4996-8f50-ad2ae80dec37 req-dbb662a1-317b-4a88-a522-fdbc576bc511 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.517 239942 DEBUG nova.compute.manager [req-53c9f7c0-efe2-4996-8f50-ad2ae80dec37 req-dbb662a1-317b-4a88-a522-fdbc576bc511 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] No waiting events found dispatching network-vif-unplugged-f44c4abb-008f-4b8d-abcd-08643ef9fdd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.521 239942 DEBUG nova.compute.manager [req-53c9f7c0-efe2-4996-8f50-ad2ae80dec37 req-dbb662a1-317b-4a88-a522-fdbc576bc511 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Received event network-vif-unplugged-f44c4abb-008f-4b8d-abcd-08643ef9fdd3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.523 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:36.523 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[7a405d65-2505-4400-b362-438517635ee7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:36.524 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[46769ce1-6364-4e9a-9d32-c3b6e8b2ab32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:36.535 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[40ea5374-e558-4d59-a757-91fc587a5bac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442652, 'reachable_time': 33958, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266368, 'error': None, 'target': 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:36.537 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:56:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:36.537 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b1c0ea-241d-4fb6-973e-e7265deda6a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:36 np0005603435 systemd[1]: run-netns-ovnmeta\x2da10d9666\x2db672\x2d4619\x2d83b7\x2d22dc781b5b5b.mount: Deactivated successfully.
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.593 239942 INFO nova.virt.libvirt.driver [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Deleting instance files /var/lib/nova/instances/6a64f744-98a9-4399-a0ab-14cc87ca066f_del#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.594 239942 INFO nova.virt.libvirt.driver [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Deletion of /var/lib/nova/instances/6a64f744-98a9-4399-a0ab-14cc87ca066f_del complete#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.640 239942 INFO nova.compute.manager [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Took 0.45 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.641 239942 DEBUG oslo.service.loopingcall [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.641 239942 DEBUG nova.compute.manager [-] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:56:36 np0005603435 nova_compute[239938]: 2026-01-31 04:56:36.641 239942 DEBUG nova.network.neutron [-] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:56:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:56:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:56:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:56:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:56:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:56:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:56:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:56:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4116661997' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:56:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:56:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4116661997' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:56:37 np0005603435 nova_compute[239938]: 2026-01-31 04:56:37.371 239942 DEBUG nova.network.neutron [-] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:56:37 np0005603435 nova_compute[239938]: 2026-01-31 04:56:37.386 239942 INFO nova.compute.manager [-] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Took 0.74 seconds to deallocate network for instance.#033[00m
Jan 30 23:56:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 317 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 491 KiB/s wr, 259 op/s
Jan 30 23:56:37 np0005603435 nova_compute[239938]: 2026-01-31 04:56:37.562 239942 INFO nova.compute.manager [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Took 0.18 seconds to detach 1 volumes for instance.#033[00m
Jan 30 23:56:37 np0005603435 nova_compute[239938]: 2026-01-31 04:56:37.604 239942 DEBUG oslo_concurrency.lockutils [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:37 np0005603435 nova_compute[239938]: 2026-01-31 04:56:37.605 239942 DEBUG oslo_concurrency.lockutils [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:37 np0005603435 nova_compute[239938]: 2026-01-31 04:56:37.682 239942 DEBUG oslo_concurrency.processutils [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:56:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4175166564' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:56:38 np0005603435 nova_compute[239938]: 2026-01-31 04:56:38.210 239942 DEBUG oslo_concurrency.processutils [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:38 np0005603435 nova_compute[239938]: 2026-01-31 04:56:38.218 239942 DEBUG nova.compute.provider_tree [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:56:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:56:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Jan 30 23:56:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Jan 30 23:56:38 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Jan 30 23:56:38 np0005603435 nova_compute[239938]: 2026-01-31 04:56:38.245 239942 DEBUG nova.scheduler.client.report [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:56:38 np0005603435 nova_compute[239938]: 2026-01-31 04:56:38.279 239942 DEBUG oslo_concurrency.lockutils [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:56:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/786716390' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:56:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:56:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/786716390' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:56:38 np0005603435 nova_compute[239938]: 2026-01-31 04:56:38.318 239942 INFO nova.scheduler.client.report [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Deleted allocations for instance 6a64f744-98a9-4399-a0ab-14cc87ca066f#033[00m
Jan 30 23:56:38 np0005603435 nova_compute[239938]: 2026-01-31 04:56:38.393 239942 DEBUG oslo_concurrency.lockutils [None req-e1fe9d33-253b-48c5-9f96-cc740de97ef3 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:38 np0005603435 nova_compute[239938]: 2026-01-31 04:56:38.595 239942 DEBUG nova.compute.manager [req-dd426e80-39da-456b-8ed2-78d32ffb793a req-1f93227d-d2b0-4f67-88d8-e75d696ad3f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Received event network-vif-plugged-f44c4abb-008f-4b8d-abcd-08643ef9fdd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:38 np0005603435 nova_compute[239938]: 2026-01-31 04:56:38.596 239942 DEBUG oslo_concurrency.lockutils [req-dd426e80-39da-456b-8ed2-78d32ffb793a req-1f93227d-d2b0-4f67-88d8-e75d696ad3f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:38 np0005603435 nova_compute[239938]: 2026-01-31 04:56:38.596 239942 DEBUG oslo_concurrency.lockutils [req-dd426e80-39da-456b-8ed2-78d32ffb793a req-1f93227d-d2b0-4f67-88d8-e75d696ad3f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:38 np0005603435 nova_compute[239938]: 2026-01-31 04:56:38.597 239942 DEBUG oslo_concurrency.lockutils [req-dd426e80-39da-456b-8ed2-78d32ffb793a req-1f93227d-d2b0-4f67-88d8-e75d696ad3f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "6a64f744-98a9-4399-a0ab-14cc87ca066f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:38 np0005603435 nova_compute[239938]: 2026-01-31 04:56:38.598 239942 DEBUG nova.compute.manager [req-dd426e80-39da-456b-8ed2-78d32ffb793a req-1f93227d-d2b0-4f67-88d8-e75d696ad3f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] No waiting events found dispatching network-vif-plugged-f44c4abb-008f-4b8d-abcd-08643ef9fdd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:56:38 np0005603435 nova_compute[239938]: 2026-01-31 04:56:38.598 239942 WARNING nova.compute.manager [req-dd426e80-39da-456b-8ed2-78d32ffb793a req-1f93227d-d2b0-4f67-88d8-e75d696ad3f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Received unexpected event network-vif-plugged-f44c4abb-008f-4b8d-abcd-08643ef9fdd3 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:56:38 np0005603435 nova_compute[239938]: 2026-01-31 04:56:38.599 239942 DEBUG nova.compute.manager [req-dd426e80-39da-456b-8ed2-78d32ffb793a req-1f93227d-d2b0-4f67-88d8-e75d696ad3f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Received event network-vif-deleted-f44c4abb-008f-4b8d-abcd-08643ef9fdd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:39 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:39Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:71:4d:aa 10.100.0.14
Jan 30 23:56:39 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:39Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:71:4d:aa 10.100.0.14
Jan 30 23:56:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 317 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 25 KiB/s wr, 114 op/s
Jan 30 23:56:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:56:39 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1918750769' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:56:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:56:39 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1918750769' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:56:40 np0005603435 nova_compute[239938]: 2026-01-31 04:56:40.213 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:41 np0005603435 nova_compute[239938]: 2026-01-31 04:56:41.446 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 323 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 693 KiB/s wr, 156 op/s
Jan 30 23:56:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:56:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 350 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 576 KiB/s rd, 3.2 MiB/s wr, 215 op/s
Jan 30 23:56:45 np0005603435 nova_compute[239938]: 2026-01-31 04:56:45.215 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 350 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 2.6 MiB/s wr, 175 op/s
Jan 30 23:56:46 np0005603435 nova_compute[239938]: 2026-01-31 04:56:46.447 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Jan 30 23:56:46 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Jan 30 23:56:46 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Jan 30 23:56:46 np0005603435 nova_compute[239938]: 2026-01-31 04:56:46.827 239942 DEBUG oslo_concurrency.lockutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "46143cbc-0ca2-4cea-bc49-98861e82728b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:46 np0005603435 nova_compute[239938]: 2026-01-31 04:56:46.828 239942 DEBUG oslo_concurrency.lockutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "46143cbc-0ca2-4cea-bc49-98861e82728b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:46 np0005603435 nova_compute[239938]: 2026-01-31 04:56:46.848 239942 DEBUG nova.compute.manager [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:56:46 np0005603435 nova_compute[239938]: 2026-01-31 04:56:46.932 239942 DEBUG oslo_concurrency.lockutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:46 np0005603435 nova_compute[239938]: 2026-01-31 04:56:46.933 239942 DEBUG oslo_concurrency.lockutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:46 np0005603435 nova_compute[239938]: 2026-01-31 04:56:46.942 239942 DEBUG nova.virt.hardware [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:56:46 np0005603435 nova_compute[239938]: 2026-01-31 04:56:46.942 239942 INFO nova.compute.claims [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.070 239942 DEBUG oslo_concurrency.processutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 350 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 473 KiB/s rd, 2.8 MiB/s wr, 152 op/s
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.508 239942 DEBUG oslo_concurrency.lockutils [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.509 239942 DEBUG oslo_concurrency.lockutils [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.509 239942 DEBUG oslo_concurrency.lockutils [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.510 239942 DEBUG oslo_concurrency.lockutils [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.511 239942 DEBUG oslo_concurrency.lockutils [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.513 239942 INFO nova.compute.manager [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Terminating instance#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.515 239942 DEBUG nova.compute.manager [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:56:47 np0005603435 kernel: tapeea6eb19-83 (unregistering): left promiscuous mode
Jan 30 23:56:47 np0005603435 NetworkManager[49097]: <info>  [1769835407.5723] device (tapeea6eb19-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:56:47 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:47Z|00211|binding|INFO|Releasing lport eea6eb19-8395-4c5d-adcb-9b91f5ac0310 from this chassis (sb_readonly=0)
Jan 30 23:56:47 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:47Z|00212|binding|INFO|Setting lport eea6eb19-8395-4c5d-adcb-9b91f5ac0310 down in Southbound
Jan 30 23:56:47 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:47Z|00213|binding|INFO|Removing iface tapeea6eb19-83 ovn-installed in OVS
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.584 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:47 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:47.590 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:4d:aa 10.100.0.14'], port_security=['fa:16:3e:71:4d:aa 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2d5c8c52-0781-43ca-9fd1-58e205d20e4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5925722f-3c3e-42bd-9802-ef7105d62a1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.184'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02525b3d-5afa-441f-ab06-d5abe31dc4af, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=eea6eb19-8395-4c5d-adcb-9b91f5ac0310) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:56:47 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:47.592 156017 INFO neutron.agent.ovn.metadata.agent [-] Port eea6eb19-8395-4c5d-adcb-9b91f5ac0310 in datapath 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 unbound from our chassis#033[00m
Jan 30 23:56:47 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:47.594 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:56:47 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:47.595 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[124d761a-4216-4f11-9e43-eced639f941c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:47 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:47.595 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 namespace which is not needed anymore#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.603 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:47 np0005603435 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Deactivated successfully.
Jan 30 23:56:47 np0005603435 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Consumed 12.248s CPU time.
Jan 30 23:56:47 np0005603435 systemd-machined[208030]: Machine qemu-21-instance-00000015 terminated.
Jan 30 23:56:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:56:47 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553772397' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.675 239942 DEBUG oslo_concurrency.processutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.680 239942 DEBUG nova.compute.provider_tree [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.694 239942 DEBUG nova.scheduler.client.report [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.717 239942 DEBUG oslo_concurrency.lockutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.718 239942 DEBUG nova.compute.manager [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.759 239942 DEBUG nova.compute.manager [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.759 239942 DEBUG nova.network.neutron [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.767 239942 INFO nova.virt.libvirt.driver [-] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Instance destroyed successfully.#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.767 239942 DEBUG nova.objects.instance [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lazy-loading 'resources' on Instance uuid 2d5c8c52-0781-43ca-9fd1-58e205d20e4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.774 239942 INFO nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.779 239942 DEBUG nova.virt.libvirt.vif [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:56:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2090634245',display_name='tempest-TestVolumeBootPattern-server-2090634245',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2090634245',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMCGQWsMIpUReejiJa4LLn2uTMRcPNVUKy3r7lp0BAh1r0nLhjEfcHskPuueezEtVAWbrIlq/WV3PYQ0vKGreYOPxpY3Xnz3OjrpOhX/Q6AIWXZTJpS2jBEA3mt0kVgrg==',key_name='tempest-TestVolumeBootPattern-1354425942',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:56:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-82s9ye08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:56:27Z,user_data=None,user_id='e10f13b98624406985dec6a5dcc391c7',uuid=2d5c8c52-0781-43ca-9fd1-58e205d20e4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "address": "fa:16:3e:71:4d:aa", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea6eb19-83", "ovs_interfaceid": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:56:47 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[266221]: [NOTICE]   (266225) : haproxy version is 2.8.14-c23fe91
Jan 30 23:56:47 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[266221]: [NOTICE]   (266225) : path to executable is /usr/sbin/haproxy
Jan 30 23:56:47 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[266221]: [WARNING]  (266225) : Exiting Master process...
Jan 30 23:56:47 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[266221]: [WARNING]  (266225) : Exiting Master process...
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.779 239942 DEBUG nova.network.os_vif_util [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "address": "fa:16:3e:71:4d:aa", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeea6eb19-83", "ovs_interfaceid": "eea6eb19-8395-4c5d-adcb-9b91f5ac0310", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.780 239942 DEBUG nova.network.os_vif_util [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:4d:aa,bridge_name='br-int',has_traffic_filtering=True,id=eea6eb19-8395-4c5d-adcb-9b91f5ac0310,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeea6eb19-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:56:47 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[266221]: [ALERT]    (266225) : Current worker (266227) exited with code 143 (Terminated)
Jan 30 23:56:47 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[266221]: [WARNING]  (266225) : All workers exited. Exiting... (0)
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.780 239942 DEBUG os_vif [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:4d:aa,bridge_name='br-int',has_traffic_filtering=True,id=eea6eb19-8395-4c5d-adcb-9b91f5ac0310,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeea6eb19-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.782 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.782 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeea6eb19-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:47 np0005603435 systemd[1]: libpod-6c3263e4ba073baf6b6acdd3ffda102ddde5ab297f8a2abefaeee61f6035363f.scope: Deactivated successfully.
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.783 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.784 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.787 239942 INFO os_vif [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:4d:aa,bridge_name='br-int',has_traffic_filtering=True,id=eea6eb19-8395-4c5d-adcb-9b91f5ac0310,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeea6eb19-83')#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.789 239942 DEBUG nova.compute.manager [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:56:47 np0005603435 podman[266440]: 2026-01-31 04:56:47.790515427 +0000 UTC m=+0.100070524 container died 6c3263e4ba073baf6b6acdd3ffda102ddde5ab297f8a2abefaeee61f6035363f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.826 239942 INFO nova.virt.block_device [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Booting with volume 62354df7-8617-4e98-bf68-88376e1103f9 at /dev/vda#033[00m
Jan 30 23:56:47 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6c3263e4ba073baf6b6acdd3ffda102ddde5ab297f8a2abefaeee61f6035363f-userdata-shm.mount: Deactivated successfully.
Jan 30 23:56:47 np0005603435 systemd[1]: var-lib-containers-storage-overlay-8c3cd6557d48bbc0b4a1b3b0ad3edd9e1e4f3c94e8fd3636dae4727317c0007b-merged.mount: Deactivated successfully.
Jan 30 23:56:47 np0005603435 podman[266440]: 2026-01-31 04:56:47.93808896 +0000 UTC m=+0.247644037 container cleanup 6c3263e4ba073baf6b6acdd3ffda102ddde5ab297f8a2abefaeee61f6035363f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 30 23:56:47 np0005603435 systemd[1]: libpod-conmon-6c3263e4ba073baf6b6acdd3ffda102ddde5ab297f8a2abefaeee61f6035363f.scope: Deactivated successfully.
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.988 239942 DEBUG os_brick.utils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:56:47 np0005603435 nova_compute[239938]: 2026-01-31 04:56:47.990 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.000 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.000 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[af823e64-8761-4048-afe8-10e502936c39]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.001 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.007 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.007 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[07e686ef-061c-4ce5-8818-28f70e1c8487]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.008 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.021 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.021 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[647ba1f0-2c93-4456-a618-c2c81b7f63bc]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.022 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[bf47b3a4-7249-4ec9-aa43-2be2b4cb06e7]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.023 239942 DEBUG oslo_concurrency.processutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.043 239942 DEBUG oslo_concurrency.processutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.045 239942 DEBUG os_brick.initiator.connectors.lightos [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.046 239942 DEBUG os_brick.initiator.connectors.lightos [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.046 239942 DEBUG os_brick.initiator.connectors.lightos [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.046 239942 DEBUG os_brick.utils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] <== get_connector_properties: return (56ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.046 239942 DEBUG nova.virt.block_device [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Updating existing volume attachment record: acd59aaa-3630-4022-a52d-1c19af8e458b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.114 239942 DEBUG nova.policy [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '27f1a6fb472c4c5fa2286d0fa48dca34', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9b39f0e168b54a4b8f976894d21361e6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:56:48 np0005603435 podman[266499]: 2026-01-31 04:56:48.123066536 +0000 UTC m=+0.163346959 container remove 6c3263e4ba073baf6b6acdd3ffda102ddde5ab297f8a2abefaeee61f6035363f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:56:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:48.129 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[d422eff4-61a2-477a-9477-214bf0829101]: (4, ('Sat Jan 31 04:56:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 (6c3263e4ba073baf6b6acdd3ffda102ddde5ab297f8a2abefaeee61f6035363f)\n6c3263e4ba073baf6b6acdd3ffda102ddde5ab297f8a2abefaeee61f6035363f\nSat Jan 31 04:56:47 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 (6c3263e4ba073baf6b6acdd3ffda102ddde5ab297f8a2abefaeee61f6035363f)\n6c3263e4ba073baf6b6acdd3ffda102ddde5ab297f8a2abefaeee61f6035363f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:48.131 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b411b0-a54a-4975-832b-b84928594175]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:48.132 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b0cf2db-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.188 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:48 np0005603435 kernel: tap5b0cf2db-20: left promiscuous mode
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.201 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:48.204 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc64659-6462-47b4-9bc1-0af98cd6a897]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:48.221 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0ca445-918f-4717-a53d-f089d62595f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:48.223 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d2e84c-0578-4db7-bffc-a36c1c0bb760]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:56:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:48.245 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b369346a-79b1-4251-a814-b61690ed3eac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444165, 'reachable_time': 31231, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266521, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:48.247 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:56:48 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:48.247 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[91916304-9cbb-48a4-8d7e-4d748ead60e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:48 np0005603435 systemd[1]: run-netns-ovnmeta\x2d5b0cf2db\x2d2e35\x2d41fa\x2d9783\x2d30f0fe6ea7a3.mount: Deactivated successfully.
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.289 239942 DEBUG nova.compute.manager [req-fbeeac29-2eb7-4095-be48-2f7f6fde9c26 req-ab62570b-f209-4d6f-9b2f-33222695ec9f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Received event network-vif-unplugged-eea6eb19-8395-4c5d-adcb-9b91f5ac0310 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.290 239942 DEBUG oslo_concurrency.lockutils [req-fbeeac29-2eb7-4095-be48-2f7f6fde9c26 req-ab62570b-f209-4d6f-9b2f-33222695ec9f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.291 239942 DEBUG oslo_concurrency.lockutils [req-fbeeac29-2eb7-4095-be48-2f7f6fde9c26 req-ab62570b-f209-4d6f-9b2f-33222695ec9f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.291 239942 DEBUG oslo_concurrency.lockutils [req-fbeeac29-2eb7-4095-be48-2f7f6fde9c26 req-ab62570b-f209-4d6f-9b2f-33222695ec9f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.291 239942 DEBUG nova.compute.manager [req-fbeeac29-2eb7-4095-be48-2f7f6fde9c26 req-ab62570b-f209-4d6f-9b2f-33222695ec9f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] No waiting events found dispatching network-vif-unplugged-eea6eb19-8395-4c5d-adcb-9b91f5ac0310 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.292 239942 DEBUG nova.compute.manager [req-fbeeac29-2eb7-4095-be48-2f7f6fde9c26 req-ab62570b-f209-4d6f-9b2f-33222695ec9f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Received event network-vif-unplugged-eea6eb19-8395-4c5d-adcb-9b91f5ac0310 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.461 239942 INFO nova.virt.libvirt.driver [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Deleting instance files /var/lib/nova/instances/2d5c8c52-0781-43ca-9fd1-58e205d20e4b_del#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.462 239942 INFO nova.virt.libvirt.driver [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Deletion of /var/lib/nova/instances/2d5c8c52-0781-43ca-9fd1-58e205d20e4b_del complete#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.724 239942 INFO nova.compute.manager [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Took 1.21 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.725 239942 DEBUG oslo.service.loopingcall [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.726 239942 DEBUG nova.compute.manager [-] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:56:48 np0005603435 nova_compute[239938]: 2026-01-31 04:56:48.726 239942 DEBUG nova.network.neutron [-] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:56:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:56:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/542133351' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:56:49 np0005603435 nova_compute[239938]: 2026-01-31 04:56:49.279 239942 DEBUG nova.compute.manager [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:56:49 np0005603435 nova_compute[239938]: 2026-01-31 04:56:49.281 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:56:49 np0005603435 nova_compute[239938]: 2026-01-31 04:56:49.282 239942 INFO nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Creating image(s)#033[00m
Jan 30 23:56:49 np0005603435 nova_compute[239938]: 2026-01-31 04:56:49.282 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 30 23:56:49 np0005603435 nova_compute[239938]: 2026-01-31 04:56:49.283 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Ensure instance console log exists: /var/lib/nova/instances/46143cbc-0ca2-4cea-bc49-98861e82728b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:56:49 np0005603435 nova_compute[239938]: 2026-01-31 04:56:49.284 239942 DEBUG oslo_concurrency.lockutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:49 np0005603435 nova_compute[239938]: 2026-01-31 04:56:49.284 239942 DEBUG oslo_concurrency.lockutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:49 np0005603435 nova_compute[239938]: 2026-01-31 04:56:49.285 239942 DEBUG oslo_concurrency.lockutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 350 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 437 KiB/s rd, 2.6 MiB/s wr, 140 op/s
Jan 30 23:56:49 np0005603435 nova_compute[239938]: 2026-01-31 04:56:49.690 239942 DEBUG nova.network.neutron [-] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:56:49 np0005603435 nova_compute[239938]: 2026-01-31 04:56:49.711 239942 INFO nova.compute.manager [-] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Took 0.98 seconds to deallocate network for instance.#033[00m
Jan 30 23:56:49 np0005603435 nova_compute[239938]: 2026-01-31 04:56:49.771 239942 DEBUG nova.compute.manager [req-9b9ba7ba-2468-4728-8e5f-de6c1ff307eb req-0f36f7a1-87f9-45a3-b535-de6b589dec76 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Received event network-vif-deleted-eea6eb19-8395-4c5d-adcb-9b91f5ac0310 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:49 np0005603435 nova_compute[239938]: 2026-01-31 04:56:49.856 239942 INFO nova.compute.manager [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Took 0.14 seconds to detach 1 volumes for instance.#033[00m
Jan 30 23:56:49 np0005603435 nova_compute[239938]: 2026-01-31 04:56:49.872 239942 DEBUG nova.network.neutron [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Successfully created port: f4095bc2-be91-4b88-adee-fb762fd4a421 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:56:49 np0005603435 nova_compute[239938]: 2026-01-31 04:56:49.899 239942 DEBUG oslo_concurrency.lockutils [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:49 np0005603435 nova_compute[239938]: 2026-01-31 04:56:49.900 239942 DEBUG oslo_concurrency.lockutils [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:49 np0005603435 nova_compute[239938]: 2026-01-31 04:56:49.982 239942 DEBUG oslo_concurrency.processutils [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:56:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3651696273' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:56:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:56:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3651696273' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.218 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.455 239942 DEBUG nova.compute.manager [req-d50035d6-3174-40b5-9a57-b8106f1e7d61 req-652dca5f-03a4-4bbc-ad04-c7062014fd62 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Received event network-vif-plugged-eea6eb19-8395-4c5d-adcb-9b91f5ac0310 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.456 239942 DEBUG oslo_concurrency.lockutils [req-d50035d6-3174-40b5-9a57-b8106f1e7d61 req-652dca5f-03a4-4bbc-ad04-c7062014fd62 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.456 239942 DEBUG oslo_concurrency.lockutils [req-d50035d6-3174-40b5-9a57-b8106f1e7d61 req-652dca5f-03a4-4bbc-ad04-c7062014fd62 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.457 239942 DEBUG oslo_concurrency.lockutils [req-d50035d6-3174-40b5-9a57-b8106f1e7d61 req-652dca5f-03a4-4bbc-ad04-c7062014fd62 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.457 239942 DEBUG nova.compute.manager [req-d50035d6-3174-40b5-9a57-b8106f1e7d61 req-652dca5f-03a4-4bbc-ad04-c7062014fd62 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] No waiting events found dispatching network-vif-plugged-eea6eb19-8395-4c5d-adcb-9b91f5ac0310 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.458 239942 WARNING nova.compute.manager [req-d50035d6-3174-40b5-9a57-b8106f1e7d61 req-652dca5f-03a4-4bbc-ad04-c7062014fd62 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Received unexpected event network-vif-plugged-eea6eb19-8395-4c5d-adcb-9b91f5ac0310 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:56:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:56:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2020911260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.508 239942 DEBUG oslo_concurrency.processutils [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.515 239942 DEBUG nova.compute.provider_tree [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.543 239942 DEBUG nova.scheduler.client.report [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.573 239942 DEBUG oslo_concurrency.lockutils [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.611 239942 INFO nova.scheduler.client.report [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Deleted allocations for instance 2d5c8c52-0781-43ca-9fd1-58e205d20e4b#033[00m
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.686 239942 DEBUG oslo_concurrency.lockutils [None req-3c3339c6-5f01-41d4-856a-e7206b1e258b e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "2d5c8c52-0781-43ca-9fd1-58e205d20e4b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.754 239942 DEBUG nova.network.neutron [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Successfully updated port: f4095bc2-be91-4b88-adee-fb762fd4a421 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.770 239942 DEBUG oslo_concurrency.lockutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "refresh_cache-46143cbc-0ca2-4cea-bc49-98861e82728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.771 239942 DEBUG oslo_concurrency.lockutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquired lock "refresh_cache-46143cbc-0ca2-4cea-bc49-98861e82728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.771 239942 DEBUG nova.network.neutron [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:56:50 np0005603435 nova_compute[239938]: 2026-01-31 04:56:50.933 239942 DEBUG nova.network.neutron [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.421 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835396.4200287, 6a64f744-98a9-4399-a0ab-14cc87ca066f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.422 239942 INFO nova.compute.manager [-] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:56:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 350 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 294 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.490 239942 DEBUG nova.compute.manager [None req-de2278ff-727e-4ce8-be1d-331518d443ed - - - - - -] [instance: 6a64f744-98a9-4399-a0ab-14cc87ca066f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.698 239942 DEBUG nova.network.neutron [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Updating instance_info_cache with network_info: [{"id": "f4095bc2-be91-4b88-adee-fb762fd4a421", "address": "fa:16:3e:c9:02:58", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4095bc2-be", "ovs_interfaceid": "f4095bc2-be91-4b88-adee-fb762fd4a421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.717 239942 DEBUG oslo_concurrency.lockutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Releasing lock "refresh_cache-46143cbc-0ca2-4cea-bc49-98861e82728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.718 239942 DEBUG nova.compute.manager [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Instance network_info: |[{"id": "f4095bc2-be91-4b88-adee-fb762fd4a421", "address": "fa:16:3e:c9:02:58", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4095bc2-be", "ovs_interfaceid": "f4095bc2-be91-4b88-adee-fb762fd4a421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.722 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Start _get_guest_xml network_info=[{"id": "f4095bc2-be91-4b88-adee-fb762fd4a421", "address": "fa:16:3e:c9:02:58", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4095bc2-be", "ovs_interfaceid": "f4095bc2-be91-4b88-adee-fb762fd4a421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'attachment_id': 'acd59aaa-3630-4022-a52d-1c19af8e458b', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-62354df7-8617-4e98-bf68-88376e1103f9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '62354df7-8617-4e98-bf68-88376e1103f9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '46143cbc-0ca2-4cea-bc49-98861e82728b', 'attached_at': '', 'detached_at': '', 'volume_id': '62354df7-8617-4e98-bf68-88376e1103f9', 'serial': '62354df7-8617-4e98-bf68-88376e1103f9'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.727 239942 WARNING nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.733 239942 DEBUG nova.virt.libvirt.host [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.735 239942 DEBUG nova.virt.libvirt.host [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.739 239942 DEBUG nova.virt.libvirt.host [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.740 239942 DEBUG nova.virt.libvirt.host [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.740 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.741 239942 DEBUG nova.virt.hardware [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.742 239942 DEBUG nova.virt.hardware [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.742 239942 DEBUG nova.virt.hardware [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.743 239942 DEBUG nova.virt.hardware [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.743 239942 DEBUG nova.virt.hardware [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.743 239942 DEBUG nova.virt.hardware [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.744 239942 DEBUG nova.virt.hardware [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.744 239942 DEBUG nova.virt.hardware [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.745 239942 DEBUG nova.virt.hardware [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.745 239942 DEBUG nova.virt.hardware [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.746 239942 DEBUG nova.virt.hardware [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.781 239942 DEBUG nova.storage.rbd_utils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] rbd image 46143cbc-0ca2-4cea-bc49-98861e82728b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:56:51 np0005603435 nova_compute[239938]: 2026-01-31 04:56:51.785 239942 DEBUG oslo_concurrency.processutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:56:51.931554) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835411931624, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2628, "num_deletes": 519, "total_data_size": 3295906, "memory_usage": 3365568, "flush_reason": "Manual Compaction"}
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835411944488, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3239560, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29002, "largest_seqno": 31629, "table_properties": {"data_size": 3227906, "index_size": 7124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 27877, "raw_average_key_size": 20, "raw_value_size": 3202282, "raw_average_value_size": 2339, "num_data_blocks": 308, "num_entries": 1369, "num_filter_entries": 1369, "num_deletions": 519, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769835258, "oldest_key_time": 1769835258, "file_creation_time": 1769835411, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 12964 microseconds, and 5688 cpu microseconds.
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:56:51.944532) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3239560 bytes OK
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:56:51.944553) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:56:51.946825) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:56:51.946841) EVENT_LOG_v1 {"time_micros": 1769835411946836, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:56:51.946859) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3283663, prev total WAL file size 3283663, number of live WAL files 2.
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:56:51.947785) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3163KB)], [62(8951KB)]
Jan 30 23:56:51 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835411947877, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12405525, "oldest_snapshot_seqno": -1}
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6082 keys, 10476410 bytes, temperature: kUnknown
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835412005992, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10476410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10429714, "index_size": 30394, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 153256, "raw_average_key_size": 25, "raw_value_size": 10314315, "raw_average_value_size": 1695, "num_data_blocks": 1223, "num_entries": 6082, "num_filter_entries": 6082, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769835411, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:56:52.006207) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10476410 bytes
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:56:52.009353) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.2 rd, 180.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.7 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(7.1) write-amplify(3.2) OK, records in: 7130, records dropped: 1048 output_compression: NoCompression
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:56:52.009389) EVENT_LOG_v1 {"time_micros": 1769835412009375, "job": 34, "event": "compaction_finished", "compaction_time_micros": 58176, "compaction_time_cpu_micros": 31837, "output_level": 6, "num_output_files": 1, "total_output_size": 10476410, "num_input_records": 7130, "num_output_records": 6082, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835412009761, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 30 23:56:52 np0005603435 podman[266710]: 2026-01-31 04:56:52.009003068 +0000 UTC m=+0.073671198 container create 37476092f5683fc5a07485b47f6f237735b2c626e1afc5426b7047fc5530c264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_matsumoto, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835412010553, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:56:51.947521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:56:52.010627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:56:52.010637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:56:52.010639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:56:52.010641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:56:52.010643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:56:52 np0005603435 podman[266710]: 2026-01-31 04:56:51.969902145 +0000 UTC m=+0.034570365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:56:52 np0005603435 systemd[1]: Started libpod-conmon-37476092f5683fc5a07485b47f6f237735b2c626e1afc5426b7047fc5530c264.scope.
Jan 30 23:56:52 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:56:52 np0005603435 podman[266710]: 2026-01-31 04:56:52.109563063 +0000 UTC m=+0.174231223 container init 37476092f5683fc5a07485b47f6f237735b2c626e1afc5426b7047fc5530c264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:56:52 np0005603435 podman[266710]: 2026-01-31 04:56:52.116774609 +0000 UTC m=+0.181442759 container start 37476092f5683fc5a07485b47f6f237735b2c626e1afc5426b7047fc5530c264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_matsumoto, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 30 23:56:52 np0005603435 podman[266710]: 2026-01-31 04:56:52.12129864 +0000 UTC m=+0.185966780 container attach 37476092f5683fc5a07485b47f6f237735b2c626e1afc5426b7047fc5530c264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 30 23:56:52 np0005603435 silly_matsumoto[266744]: 167 167
Jan 30 23:56:52 np0005603435 systemd[1]: libpod-37476092f5683fc5a07485b47f6f237735b2c626e1afc5426b7047fc5530c264.scope: Deactivated successfully.
Jan 30 23:56:52 np0005603435 podman[266710]: 2026-01-31 04:56:52.12621688 +0000 UTC m=+0.190885030 container died 37476092f5683fc5a07485b47f6f237735b2c626e1afc5426b7047fc5530c264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:56:52 np0005603435 systemd[1]: var-lib-containers-storage-overlay-32d9b7a86751d4b7a1d41be16cc13f4e71edb47dc54fc5dbb74e0ba0040d308b-merged.mount: Deactivated successfully.
Jan 30 23:56:52 np0005603435 podman[266710]: 2026-01-31 04:56:52.173111335 +0000 UTC m=+0.237779485 container remove 37476092f5683fc5a07485b47f6f237735b2c626e1afc5426b7047fc5530c264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:56:52 np0005603435 systemd[1]: libpod-conmon-37476092f5683fc5a07485b47f6f237735b2c626e1afc5426b7047fc5530c264.scope: Deactivated successfully.
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2078286614' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:56:52 np0005603435 podman[266768]: 2026-01-31 04:56:52.349148023 +0000 UTC m=+0.048975407 container create 6ec3a1836e35aed5b204a83dc3b1a1eef626ec24a6f9da8c195ad0c811011a13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jones, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.354 239942 DEBUG oslo_concurrency.processutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:52 np0005603435 systemd[1]: Started libpod-conmon-6ec3a1836e35aed5b204a83dc3b1a1eef626ec24a6f9da8c195ad0c811011a13.scope.
Jan 30 23:56:52 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:56:52 np0005603435 podman[266768]: 2026-01-31 04:56:52.331202325 +0000 UTC m=+0.031029689 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:56:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1afdcb0f1071f8e6becb70b95b2bf0581b210a0d25abdfe177fa331518c592/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:56:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1afdcb0f1071f8e6becb70b95b2bf0581b210a0d25abdfe177fa331518c592/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:56:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1afdcb0f1071f8e6becb70b95b2bf0581b210a0d25abdfe177fa331518c592/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:56:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1afdcb0f1071f8e6becb70b95b2bf0581b210a0d25abdfe177fa331518c592/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:56:52 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1afdcb0f1071f8e6becb70b95b2bf0581b210a0d25abdfe177fa331518c592/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:56:52 np0005603435 podman[266768]: 2026-01-31 04:56:52.459419095 +0000 UTC m=+0.159246479 container init 6ec3a1836e35aed5b204a83dc3b1a1eef626ec24a6f9da8c195ad0c811011a13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jones, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:56:52 np0005603435 podman[266768]: 2026-01-31 04:56:52.467906512 +0000 UTC m=+0.167733876 container start 6ec3a1836e35aed5b204a83dc3b1a1eef626ec24a6f9da8c195ad0c811011a13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:56:52 np0005603435 podman[266768]: 2026-01-31 04:56:52.471852969 +0000 UTC m=+0.171680413 container attach 6ec3a1836e35aed5b204a83dc3b1a1eef626ec24a6f9da8c195ad0c811011a13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jones, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.511 239942 DEBUG os_brick.encryptors [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Using volume encryption metadata '{'encryption_key_id': 'd4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-62354df7-8617-4e98-bf68-88376e1103f9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '62354df7-8617-4e98-bf68-88376e1103f9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '46143cbc-0ca2-4cea-bc49-98861e82728b', 'attached_at': '', 'detached_at': '', 'volume_id': '62354df7-8617-4e98-bf68-88376e1103f9', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.513 239942 DEBUG barbicanclient.client [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.531 239942 DEBUG barbicanclient.v1.secrets [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/d4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.532 239942 INFO barbicanclient.base [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/d4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.554 239942 DEBUG nova.compute.manager [req-e18d84d8-a04b-4f31-b730-75bf5f10a718 req-c9b6f8fd-f17d-4746-a98d-3c003779b988 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Received event network-changed-f4095bc2-be91-4b88-adee-fb762fd4a421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.554 239942 DEBUG nova.compute.manager [req-e18d84d8-a04b-4f31-b730-75bf5f10a718 req-c9b6f8fd-f17d-4746-a98d-3c003779b988 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Refreshing instance network info cache due to event network-changed-f4095bc2-be91-4b88-adee-fb762fd4a421. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.555 239942 DEBUG oslo_concurrency.lockutils [req-e18d84d8-a04b-4f31-b730-75bf5f10a718 req-c9b6f8fd-f17d-4746-a98d-3c003779b988 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-46143cbc-0ca2-4cea-bc49-98861e82728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.555 239942 DEBUG oslo_concurrency.lockutils [req-e18d84d8-a04b-4f31-b730-75bf5f10a718 req-c9b6f8fd-f17d-4746-a98d-3c003779b988 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-46143cbc-0ca2-4cea-bc49-98861e82728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.555 239942 DEBUG nova.network.neutron [req-e18d84d8-a04b-4f31-b730-75bf5f10a718 req-c9b6f8fd-f17d-4746-a98d-3c003779b988 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Refreshing network info cache for port f4095bc2-be91-4b88-adee-fb762fd4a421 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.561 239942 DEBUG barbicanclient.client [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.562 239942 INFO barbicanclient.base [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/d4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.595 239942 DEBUG barbicanclient.client [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.596 239942 INFO barbicanclient.base [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/d4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.624 239942 DEBUG barbicanclient.client [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.624 239942 INFO barbicanclient.base [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/d4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.646 239942 DEBUG barbicanclient.client [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.647 239942 INFO barbicanclient.base [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/d4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.667 239942 DEBUG barbicanclient.client [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.667 239942 INFO barbicanclient.base [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/d4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.691 239942 DEBUG barbicanclient.client [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.692 239942 INFO barbicanclient.base [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/d4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.712 239942 DEBUG barbicanclient.client [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.713 239942 INFO barbicanclient.base [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/d4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.734 239942 DEBUG barbicanclient.client [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.734 239942 INFO barbicanclient.base [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/d4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.754 239942 DEBUG barbicanclient.client [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.755 239942 INFO barbicanclient.base [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/d4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.778 239942 DEBUG barbicanclient.client [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.778 239942 INFO barbicanclient.base [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/d4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.784 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.799 239942 DEBUG barbicanclient.client [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.800 239942 INFO barbicanclient.base [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/d4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.825 239942 DEBUG barbicanclient.client [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.826 239942 INFO barbicanclient.base [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/d4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.848 239942 DEBUG barbicanclient.client [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.848 239942 INFO barbicanclient.base [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/d4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.882 239942 DEBUG barbicanclient.client [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.883 239942 INFO barbicanclient.base [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Calculated Secrets uuid ref: secrets/d4605ab2-aadf-4435-b3a7-3cc6ce9e4b5a#033[00m
Jan 30 23:56:52 np0005603435 elegant_jones[266786]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:56:52 np0005603435 elegant_jones[266786]: --> All data devices are unavailable
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.912 239942 DEBUG barbicanclient.client [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.913 239942 DEBUG nova.virt.libvirt.host [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  <usage type="volume">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <volume>62354df7-8617-4e98-bf68-88376e1103f9</volume>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  </usage>
Jan 30 23:56:52 np0005603435 nova_compute[239938]: </secret>
Jan 30 23:56:52 np0005603435 nova_compute[239938]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 30 23:56:52 np0005603435 systemd[1]: libpod-6ec3a1836e35aed5b204a83dc3b1a1eef626ec24a6f9da8c195ad0c811011a13.scope: Deactivated successfully.
Jan 30 23:56:52 np0005603435 conmon[266786]: conmon 6ec3a1836e35aed5b204 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ec3a1836e35aed5b204a83dc3b1a1eef626ec24a6f9da8c195ad0c811011a13.scope/container/memory.events
Jan 30 23:56:52 np0005603435 podman[266768]: 2026-01-31 04:56:52.931359527 +0000 UTC m=+0.631186881 container died 6ec3a1836e35aed5b204a83dc3b1a1eef626ec24a6f9da8c195ad0c811011a13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Jan 30 23:56:52 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.947 239942 DEBUG nova.virt.libvirt.vif [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:56:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-140451816',display_name='tempest-TransferEncryptedVolumeTest-server-140451816',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-140451816',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEV56Jk6IDRxyXFlb7xWBOMScnav9Xc5tHSoNY1YUEwOZFWGs8M7XZsrLboufTVEeGeJR0pbnMty3oYNRNpoAOeyFHYNqJJ2N05DBEMeFPzOD6DLoY1LRALz+j5Rp4/1jQ==',key_name='tempest-TransferEncryptedVolumeTest-773774193',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b39f0e168b54a4b8f976894d21361e6',ramdisk_id='',reservation_id='r-gi0t0ny2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-483286292',owner_user_name='tempest-TransferEncryptedVolumeTest-483286292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:56:47Z,user_data=None,user_id='27f1a6fb472c4c5fa2286d0fa48dca34',uuid=46143cbc-0ca2-4cea-bc49-98861e82728b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4095bc2-be91-4b88-adee-fb762fd4a421", "address": "fa:16:3e:c9:02:58", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4095bc2-be", "ovs_interfaceid": "f4095bc2-be91-4b88-adee-fb762fd4a421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.949 239942 DEBUG nova.network.os_vif_util [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converting VIF {"id": "f4095bc2-be91-4b88-adee-fb762fd4a421", "address": "fa:16:3e:c9:02:58", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4095bc2-be", "ovs_interfaceid": "f4095bc2-be91-4b88-adee-fb762fd4a421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.950 239942 DEBUG nova.network.os_vif_util [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:02:58,bridge_name='br-int',has_traffic_filtering=True,id=f4095bc2-be91-4b88-adee-fb762fd4a421,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4095bc2-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.954 239942 DEBUG nova.objects.instance [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 46143cbc-0ca2-4cea-bc49-98861e82728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:56:52 np0005603435 systemd[1]: var-lib-containers-storage-overlay-bd1afdcb0f1071f8e6becb70b95b2bf0581b210a0d25abdfe177fa331518c592-merged.mount: Deactivated successfully.
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.971 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  <uuid>46143cbc-0ca2-4cea-bc49-98861e82728b</uuid>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  <name>instance-00000016</name>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-140451816</nova:name>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:56:51</nova:creationTime>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:        <nova:user uuid="27f1a6fb472c4c5fa2286d0fa48dca34">tempest-TransferEncryptedVolumeTest-483286292-project-member</nova:user>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:        <nova:project uuid="9b39f0e168b54a4b8f976894d21361e6">tempest-TransferEncryptedVolumeTest-483286292</nova:project>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:        <nova:port uuid="f4095bc2-be91-4b88-adee-fb762fd4a421">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <entry name="serial">46143cbc-0ca2-4cea-bc49-98861e82728b</entry>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <entry name="uuid">46143cbc-0ca2-4cea-bc49-98861e82728b</entry>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/46143cbc-0ca2-4cea-bc49-98861e82728b_disk.config">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="volumes/volume-62354df7-8617-4e98-bf68-88376e1103f9">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <serial>62354df7-8617-4e98-bf68-88376e1103f9</serial>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <encryption format="luks">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:        <secret type="passphrase" uuid="086cea04-7265-4854-93a5-503f8d7dc259"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      </encryption>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:c9:02:58"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <target dev="tapf4095bc2-be"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/46143cbc-0ca2-4cea-bc49-98861e82728b/console.log" append="off"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:56:52 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:56:52 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:56:52 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:56:52 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.973 239942 DEBUG nova.compute.manager [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Preparing to wait for external event network-vif-plugged-f4095bc2-be91-4b88-adee-fb762fd4a421 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.974 239942 DEBUG oslo_concurrency.lockutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.974 239942 DEBUG oslo_concurrency.lockutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.974 239942 DEBUG oslo_concurrency.lockutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.976 239942 DEBUG nova.virt.libvirt.vif [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:56:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-140451816',display_name='tempest-TransferEncryptedVolumeTest-server-140451816',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-140451816',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEV56Jk6IDRxyXFlb7xWBOMScnav9Xc5tHSoNY1YUEwOZFWGs8M7XZsrLboufTVEeGeJR0pbnMty3oYNRNpoAOeyFHYNqJJ2N05DBEMeFPzOD6DLoY1LRALz+j5Rp4/1jQ==',key_name='tempest-TransferEncryptedVolumeTest-773774193',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b39f0e168b54a4b8f976894d21361e6',ramdisk_id='',reservation_id='r-gi0t0ny2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-483286292',owner_user_name='tempest-TransferEncryptedVolumeTest-483286292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:56:47Z,user_data=None,user_id='27f1a6fb472c4c5fa2286d0fa48dca34',uuid=46143cbc-0ca2-4cea-bc49-98861e82728b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4095bc2-be91-4b88-adee-fb762fd4a421", "address": "fa:16:3e:c9:02:58", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4095bc2-be", "ovs_interfaceid": "f4095bc2-be91-4b88-adee-fb762fd4a421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.976 239942 DEBUG nova.network.os_vif_util [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converting VIF {"id": "f4095bc2-be91-4b88-adee-fb762fd4a421", "address": "fa:16:3e:c9:02:58", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4095bc2-be", "ovs_interfaceid": "f4095bc2-be91-4b88-adee-fb762fd4a421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.977 239942 DEBUG nova.network.os_vif_util [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:02:58,bridge_name='br-int',has_traffic_filtering=True,id=f4095bc2-be91-4b88-adee-fb762fd4a421,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4095bc2-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.978 239942 DEBUG os_vif [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:02:58,bridge_name='br-int',has_traffic_filtering=True,id=f4095bc2-be91-4b88-adee-fb762fd4a421,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4095bc2-be') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.979 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.980 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.980 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:56:52 np0005603435 podman[266768]: 2026-01-31 04:56:52.981174274 +0000 UTC m=+0.681001628 container remove 6ec3a1836e35aed5b204a83dc3b1a1eef626ec24a6f9da8c195ad0c811011a13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_jones, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.985 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.985 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4095bc2-be, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:52 np0005603435 nova_compute[239938]: 2026-01-31 04:56:52.986 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf4095bc2-be, col_values=(('external_ids', {'iface-id': 'f4095bc2-be91-4b88-adee-fb762fd4a421', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:02:58', 'vm-uuid': '46143cbc-0ca2-4cea-bc49-98861e82728b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.002 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:53 np0005603435 NetworkManager[49097]: <info>  [1769835413.0043] manager: (tapf4095bc2-be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.006 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:56:53 np0005603435 systemd[1]: libpod-conmon-6ec3a1836e35aed5b204a83dc3b1a1eef626ec24a6f9da8c195ad0c811011a13.scope: Deactivated successfully.
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.011 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.012 239942 INFO os_vif [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:02:58,bridge_name='br-int',has_traffic_filtering=True,id=f4095bc2-be91-4b88-adee-fb762fd4a421,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4095bc2-be')#033[00m
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.085 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.086 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.086 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] No VIF found with MAC fa:16:3e:c9:02:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.087 239942 INFO nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Using config drive#033[00m
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.122 239942 DEBUG nova.storage.rbd_utils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] rbd image 46143cbc-0ca2-4cea-bc49-98861e82728b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:56:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.424 239942 INFO nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Creating config drive at /var/lib/nova/instances/46143cbc-0ca2-4cea-bc49-98861e82728b/disk.config#033[00m
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.430 239942 DEBUG oslo_concurrency.processutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/46143cbc-0ca2-4cea-bc49-98861e82728b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp670syt0z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:53 np0005603435 podman[266900]: 2026-01-31 04:56:53.451170058 +0000 UTC m=+0.063070340 container create bd792bd6a76a4a60ca1eef7db55fe5c5a5b1067fc17f85cdb3d2a136da810376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:56:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 349 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 40 KiB/s wr, 148 op/s
Jan 30 23:56:53 np0005603435 systemd[1]: Started libpod-conmon-bd792bd6a76a4a60ca1eef7db55fe5c5a5b1067fc17f85cdb3d2a136da810376.scope.
Jan 30 23:56:53 np0005603435 podman[266900]: 2026-01-31 04:56:53.421147765 +0000 UTC m=+0.033048087 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:56:53 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:56:53 np0005603435 podman[266900]: 2026-01-31 04:56:53.557400452 +0000 UTC m=+0.169300764 container init bd792bd6a76a4a60ca1eef7db55fe5c5a5b1067fc17f85cdb3d2a136da810376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cray, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.560 239942 DEBUG oslo_concurrency.processutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/46143cbc-0ca2-4cea-bc49-98861e82728b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp670syt0z" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:53 np0005603435 podman[266900]: 2026-01-31 04:56:53.567042987 +0000 UTC m=+0.178943259 container start bd792bd6a76a4a60ca1eef7db55fe5c5a5b1067fc17f85cdb3d2a136da810376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cray, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 30 23:56:53 np0005603435 podman[266900]: 2026-01-31 04:56:53.571433224 +0000 UTC m=+0.183333496 container attach bd792bd6a76a4a60ca1eef7db55fe5c5a5b1067fc17f85cdb3d2a136da810376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 30 23:56:53 np0005603435 magical_cray[266919]: 167 167
Jan 30 23:56:53 np0005603435 systemd[1]: libpod-bd792bd6a76a4a60ca1eef7db55fe5c5a5b1067fc17f85cdb3d2a136da810376.scope: Deactivated successfully.
Jan 30 23:56:53 np0005603435 podman[266900]: 2026-01-31 04:56:53.575265458 +0000 UTC m=+0.187165730 container died bd792bd6a76a4a60ca1eef7db55fe5c5a5b1067fc17f85cdb3d2a136da810376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.597 239942 DEBUG nova.storage.rbd_utils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] rbd image 46143cbc-0ca2-4cea-bc49-98861e82728b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.604 239942 DEBUG oslo_concurrency.processutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/46143cbc-0ca2-4cea-bc49-98861e82728b/disk.config 46143cbc-0ca2-4cea-bc49-98861e82728b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:53 np0005603435 systemd[1]: var-lib-containers-storage-overlay-ac4d9e024043706341c5185cfd73101999e0edf4ee1b2b1bb76fdf3c8df8c892-merged.mount: Deactivated successfully.
Jan 30 23:56:53 np0005603435 podman[266900]: 2026-01-31 04:56:53.627817051 +0000 UTC m=+0.239717293 container remove bd792bd6a76a4a60ca1eef7db55fe5c5a5b1067fc17f85cdb3d2a136da810376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_cray, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:56:53 np0005603435 systemd[1]: libpod-conmon-bd792bd6a76a4a60ca1eef7db55fe5c5a5b1067fc17f85cdb3d2a136da810376.scope: Deactivated successfully.
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.752 239942 DEBUG oslo_concurrency.processutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/46143cbc-0ca2-4cea-bc49-98861e82728b/disk.config 46143cbc-0ca2-4cea-bc49-98861e82728b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.752 239942 INFO nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Deleting local config drive /var/lib/nova/instances/46143cbc-0ca2-4cea-bc49-98861e82728b/disk.config because it was imported into RBD.#033[00m
Jan 30 23:56:53 np0005603435 kernel: tapf4095bc2-be: entered promiscuous mode
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.800 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:53 np0005603435 NetworkManager[49097]: <info>  [1769835413.8028] manager: (tapf4095bc2-be): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Jan 30 23:56:53 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:53Z|00214|binding|INFO|Claiming lport f4095bc2-be91-4b88-adee-fb762fd4a421 for this chassis.
Jan 30 23:56:53 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:53Z|00215|binding|INFO|f4095bc2-be91-4b88-adee-fb762fd4a421: Claiming fa:16:3e:c9:02:58 10.100.0.11
Jan 30 23:56:53 np0005603435 podman[266979]: 2026-01-31 04:56:53.805780186 +0000 UTC m=+0.056846639 container create dac810a394c88899d55f06a2aaa80cfdb62f16063ba5febb8ecf4cd524a60957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.815 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:53 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:53Z|00216|binding|INFO|Setting lport f4095bc2-be91-4b88-adee-fb762fd4a421 ovn-installed in OVS
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.822 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:53 np0005603435 nova_compute[239938]: 2026-01-31 04:56:53.825 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:53 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:53Z|00217|binding|INFO|Setting lport f4095bc2-be91-4b88-adee-fb762fd4a421 up in Southbound
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.831 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:02:58 10.100.0.11'], port_security=['fa:16:3e:c9:02:58 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '46143cbc-0ca2-4cea-bc49-98861e82728b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a10d9666-b672-4619-83b7-22dc781b5b5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b39f0e168b54a4b8f976894d21361e6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31116304-b672-4fa0-88a2-3aca5935fb40', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21f14c68-4084-427c-b05e-592b1db029c6, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=f4095bc2-be91-4b88-adee-fb762fd4a421) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.832 156017 INFO neutron.agent.ovn.metadata.agent [-] Port f4095bc2-be91-4b88-adee-fb762fd4a421 in datapath a10d9666-b672-4619-83b7-22dc781b5b5b bound to our chassis#033[00m
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.833 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a10d9666-b672-4619-83b7-22dc781b5b5b#033[00m
Jan 30 23:56:53 np0005603435 systemd-udevd[267005]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:56:53 np0005603435 systemd-machined[208030]: New machine qemu-22-instance-00000016.
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.841 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[54678603-1900-4190-b3ac-5ca75a4d8bed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.842 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa10d9666-b1 in ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:56:53 np0005603435 NetworkManager[49097]: <info>  [1769835413.8460] device (tapf4095bc2-be): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.845 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa10d9666-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.845 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6ef77d-a3ad-4b64-8544-1158385ed698]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:53 np0005603435 systemd[1]: Started Virtual Machine qemu-22-instance-00000016.
Jan 30 23:56:53 np0005603435 NetworkManager[49097]: <info>  [1769835413.8485] device (tapf4095bc2-be): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.847 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[34e1a308-4e71-4fcf-8f94-99825ee6587e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:53 np0005603435 systemd[1]: Started libpod-conmon-dac810a394c88899d55f06a2aaa80cfdb62f16063ba5febb8ecf4cd524a60957.scope.
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.858 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac82974-6f1d-43f5-87c5-c6ed87f0a487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:53 np0005603435 podman[266979]: 2026-01-31 04:56:53.777740631 +0000 UTC m=+0.028807144 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.873 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4d61d3ed-6652-4772-95de-2be8ba4b3676]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:53 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:56:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e25808384e734cbb519f6523a5b889693602cb976ffd5a0033e9612b61b9f3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:56:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e25808384e734cbb519f6523a5b889693602cb976ffd5a0033e9612b61b9f3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:56:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e25808384e734cbb519f6523a5b889693602cb976ffd5a0033e9612b61b9f3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:56:53 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e25808384e734cbb519f6523a5b889693602cb976ffd5a0033e9612b61b9f3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.903 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[255943d0-061b-42b9-b087-2a63c828da58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:53 np0005603435 podman[266979]: 2026-01-31 04:56:53.904460385 +0000 UTC m=+0.155526848 container init dac810a394c88899d55f06a2aaa80cfdb62f16063ba5febb8ecf4cd524a60957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.915 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f3357701-ab00-48eb-a712-fe42b9b68884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:53 np0005603435 systemd-udevd[267010]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:56:53 np0005603435 NetworkManager[49097]: <info>  [1769835413.9180] manager: (tapa10d9666-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Jan 30 23:56:53 np0005603435 podman[266979]: 2026-01-31 04:56:53.918443067 +0000 UTC m=+0.169509490 container start dac810a394c88899d55f06a2aaa80cfdb62f16063ba5febb8ecf4cd524a60957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 30 23:56:53 np0005603435 podman[266979]: 2026-01-31 04:56:53.922334722 +0000 UTC m=+0.173401205 container attach dac810a394c88899d55f06a2aaa80cfdb62f16063ba5febb8ecf4cd524a60957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.948 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[42ab1311-7bce-40d8-b7d5-63a5f497ec8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.951 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[9655435d-4e92-46f3-8a6c-b18aaa2d460b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:53 np0005603435 NetworkManager[49097]: <info>  [1769835413.9720] device (tapa10d9666-b0): carrier: link connected
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.974 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[c0a31938-d04c-4d1d-87d3-795c08d6b4be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.987 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6e866a90-de8c-4bc5-9bd3-908f52ce74fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa10d9666-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:c0:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446837, 'reachable_time': 38855, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267045, 'error': None, 'target': 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:53 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:53.999 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[10af5dde-498c-41ac-88eb-2a9b9032d7dc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:c0da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446837, 'tstamp': 446837}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267046, 'error': None, 'target': 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:54.009 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[8f27e938-3fae-4a20-a077-001b51e2e4a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa10d9666-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:c0:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446837, 'reachable_time': 38855, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267047, 'error': None, 'target': 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:54.028 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f9898614-41bc-4125-8a9c-ef4f77cf3056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:54.065 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[8e139d65-d3ca-4caa-b43e-ef046a951c3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:54.066 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa10d9666-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:54.066 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:54.067 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa10d9666-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.119 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:54 np0005603435 NetworkManager[49097]: <info>  [1769835414.1204] manager: (tapa10d9666-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Jan 30 23:56:54 np0005603435 kernel: tapa10d9666-b0: entered promiscuous mode
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:54.124 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa10d9666-b0, col_values=(('external_ids', {'iface-id': 'b5040674-bbd1-4dc9-b2e1-14712cb60315'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:54 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:54Z|00218|binding|INFO|Releasing lport b5040674-bbd1-4dc9-b2e1-14712cb60315 from this chassis (sb_readonly=0)
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.126 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:54.138 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a10d9666-b672-4619-83b7-22dc781b5b5b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a10d9666-b672-4619-83b7-22dc781b5b5b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.137 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:54.139 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[be03b626-d9cf-47d8-9eb8-8f5cdc6ceb37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:54.140 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-a10d9666-b672-4619-83b7-22dc781b5b5b
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/a10d9666-b672-4619-83b7-22dc781b5b5b.pid.haproxy
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID a10d9666-b672-4619-83b7-22dc781b5b5b
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.141 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:54 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:54.142 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'env', 'PROCESS_TAG=haproxy-a10d9666-b672-4619-83b7-22dc781b5b5b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a10d9666-b672-4619-83b7-22dc781b5b5b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:56:54 np0005603435 sharp_jang[267012]: {
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:    "0": [
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:        {
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "devices": [
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "/dev/loop3"
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            ],
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "lv_name": "ceph_lv0",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "lv_size": "21470642176",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "name": "ceph_lv0",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "tags": {
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.cluster_name": "ceph",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.crush_device_class": "",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.encrypted": "0",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.objectstore": "bluestore",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.osd_id": "0",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.type": "block",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.vdo": "0",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.with_tpm": "0"
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            },
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "type": "block",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "vg_name": "ceph_vg0"
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:        }
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:    ],
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:    "1": [
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:        {
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "devices": [
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "/dev/loop4"
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            ],
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "lv_name": "ceph_lv1",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "lv_size": "21470642176",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "name": "ceph_lv1",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "tags": {
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.cluster_name": "ceph",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.crush_device_class": "",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.encrypted": "0",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.objectstore": "bluestore",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.osd_id": "1",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.type": "block",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.vdo": "0",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.with_tpm": "0"
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            },
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "type": "block",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "vg_name": "ceph_vg1"
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:        }
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:    ],
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:    "2": [
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:        {
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "devices": [
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "/dev/loop5"
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            ],
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "lv_name": "ceph_lv2",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "lv_size": "21470642176",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "name": "ceph_lv2",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "tags": {
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.cluster_name": "ceph",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.crush_device_class": "",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.encrypted": "0",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.objectstore": "bluestore",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.osd_id": "2",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.type": "block",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.vdo": "0",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:                "ceph.with_tpm": "0"
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            },
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "type": "block",
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:            "vg_name": "ceph_vg2"
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:        }
Jan 30 23:56:54 np0005603435 sharp_jang[267012]:    ]
Jan 30 23:56:54 np0005603435 sharp_jang[267012]: }
Jan 30 23:56:54 np0005603435 systemd[1]: libpod-dac810a394c88899d55f06a2aaa80cfdb62f16063ba5febb8ecf4cd524a60957.scope: Deactivated successfully.
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.266 239942 DEBUG nova.compute.manager [req-44674b23-9901-406a-b82f-57edbb51e89d req-5b758c09-c034-43bc-ab88-69b79aa512bf c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Received event network-vif-plugged-f4095bc2-be91-4b88-adee-fb762fd4a421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.267 239942 DEBUG oslo_concurrency.lockutils [req-44674b23-9901-406a-b82f-57edbb51e89d req-5b758c09-c034-43bc-ab88-69b79aa512bf c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.267 239942 DEBUG oslo_concurrency.lockutils [req-44674b23-9901-406a-b82f-57edbb51e89d req-5b758c09-c034-43bc-ab88-69b79aa512bf c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.268 239942 DEBUG oslo_concurrency.lockutils [req-44674b23-9901-406a-b82f-57edbb51e89d req-5b758c09-c034-43bc-ab88-69b79aa512bf c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.268 239942 DEBUG nova.compute.manager [req-44674b23-9901-406a-b82f-57edbb51e89d req-5b758c09-c034-43bc-ab88-69b79aa512bf c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Processing event network-vif-plugged-f4095bc2-be91-4b88-adee-fb762fd4a421 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.270 239942 DEBUG nova.network.neutron [req-e18d84d8-a04b-4f31-b730-75bf5f10a718 req-c9b6f8fd-f17d-4746-a98d-3c003779b988 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Updated VIF entry in instance network info cache for port f4095bc2-be91-4b88-adee-fb762fd4a421. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.271 239942 DEBUG nova.network.neutron [req-e18d84d8-a04b-4f31-b730-75bf5f10a718 req-c9b6f8fd-f17d-4746-a98d-3c003779b988 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Updating instance_info_cache with network_info: [{"id": "f4095bc2-be91-4b88-adee-fb762fd4a421", "address": "fa:16:3e:c9:02:58", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4095bc2-be", "ovs_interfaceid": "f4095bc2-be91-4b88-adee-fb762fd4a421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.288 239942 DEBUG oslo_concurrency.lockutils [req-e18d84d8-a04b-4f31-b730-75bf5f10a718 req-c9b6f8fd-f17d-4746-a98d-3c003779b988 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-46143cbc-0ca2-4cea-bc49-98861e82728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:56:54 np0005603435 podman[267097]: 2026-01-31 04:56:54.29338637 +0000 UTC m=+0.032422902 container died dac810a394c88899d55f06a2aaa80cfdb62f16063ba5febb8ecf4cd524a60957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 30 23:56:54 np0005603435 systemd[1]: var-lib-containers-storage-overlay-8e25808384e734cbb519f6523a5b889693602cb976ffd5a0033e9612b61b9f3a-merged.mount: Deactivated successfully.
Jan 30 23:56:54 np0005603435 podman[267097]: 2026-01-31 04:56:54.337138899 +0000 UTC m=+0.076175451 container remove dac810a394c88899d55f06a2aaa80cfdb62f16063ba5febb8ecf4cd524a60957 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_jang, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:56:54 np0005603435 systemd[1]: libpod-conmon-dac810a394c88899d55f06a2aaa80cfdb62f16063ba5febb8ecf4cd524a60957.scope: Deactivated successfully.
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.490 239942 DEBUG oslo_concurrency.lockutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "5c1cf313-39cd-420b-98f1-026da341b273" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.492 239942 DEBUG oslo_concurrency.lockutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "5c1cf313-39cd-420b-98f1-026da341b273" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:54 np0005603435 podman[267151]: 2026-01-31 04:56:54.495022013 +0000 UTC m=+0.057000082 container create abb12726f856798f169b653029a287a844c9de9b03baa0678c8b025f9b2f3333 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.511 239942 DEBUG nova.compute.manager [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:56:54 np0005603435 systemd[1]: Started libpod-conmon-abb12726f856798f169b653029a287a844c9de9b03baa0678c8b025f9b2f3333.scope.
Jan 30 23:56:54 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:56:54 np0005603435 podman[267151]: 2026-01-31 04:56:54.47154873 +0000 UTC m=+0.033526859 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:56:54 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a291955539d100f6270098cadc89cfb53b03bed2746b8d4fb1330492598e0c07/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.571 239942 DEBUG oslo_concurrency.lockutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.571 239942 DEBUG oslo_concurrency.lockutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:54 np0005603435 podman[267151]: 2026-01-31 04:56:54.579644559 +0000 UTC m=+0.141622678 container init abb12726f856798f169b653029a287a844c9de9b03baa0678c8b025f9b2f3333 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.579 239942 DEBUG nova.virt.hardware [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.580 239942 INFO nova.compute.claims [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:56:54 np0005603435 podman[267151]: 2026-01-31 04:56:54.584274852 +0000 UTC m=+0.146252971 container start abb12726f856798f169b653029a287a844c9de9b03baa0678c8b025f9b2f3333 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:56:54 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[267195]: [NOTICE]   (267199) : New worker (267201) forked
Jan 30 23:56:54 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[267195]: [NOTICE]   (267199) : Loading success.
Jan 30 23:56:54 np0005603435 nova_compute[239938]: 2026-01-31 04:56:54.695 239942 DEBUG oslo_concurrency.processutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:54 np0005603435 podman[267223]: 2026-01-31 04:56:54.795481499 +0000 UTC m=+0.050989576 container create 105878cef7429faf5f2b0281aea9ceed926623de89cdc180d7b9da60a4cea70e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 30 23:56:54 np0005603435 systemd[1]: Started libpod-conmon-105878cef7429faf5f2b0281aea9ceed926623de89cdc180d7b9da60a4cea70e.scope.
Jan 30 23:56:54 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:56:54 np0005603435 podman[267223]: 2026-01-31 04:56:54.767654729 +0000 UTC m=+0.023162886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:56:54 np0005603435 podman[267223]: 2026-01-31 04:56:54.872378716 +0000 UTC m=+0.127886803 container init 105878cef7429faf5f2b0281aea9ceed926623de89cdc180d7b9da60a4cea70e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cerf, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:56:54 np0005603435 podman[267223]: 2026-01-31 04:56:54.883159159 +0000 UTC m=+0.138667236 container start 105878cef7429faf5f2b0281aea9ceed926623de89cdc180d7b9da60a4cea70e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 30 23:56:54 np0005603435 podman[267223]: 2026-01-31 04:56:54.887187848 +0000 UTC m=+0.142695925 container attach 105878cef7429faf5f2b0281aea9ceed926623de89cdc180d7b9da60a4cea70e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:56:54 np0005603435 awesome_cerf[267242]: 167 167
Jan 30 23:56:54 np0005603435 systemd[1]: libpod-105878cef7429faf5f2b0281aea9ceed926623de89cdc180d7b9da60a4cea70e.scope: Deactivated successfully.
Jan 30 23:56:54 np0005603435 podman[267223]: 2026-01-31 04:56:54.88849999 +0000 UTC m=+0.144008107 container died 105878cef7429faf5f2b0281aea9ceed926623de89cdc180d7b9da60a4cea70e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cerf, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:56:54 np0005603435 systemd[1]: var-lib-containers-storage-overlay-7fe005aa1cbe695b0388f13a2007b0462596e0f5b2d3027af253bebcc488915a-merged.mount: Deactivated successfully.
Jan 30 23:56:54 np0005603435 podman[267223]: 2026-01-31 04:56:54.933318054 +0000 UTC m=+0.188826151 container remove 105878cef7429faf5f2b0281aea9ceed926623de89cdc180d7b9da60a4cea70e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 30 23:56:54 np0005603435 systemd[1]: libpod-conmon-105878cef7429faf5f2b0281aea9ceed926623de89cdc180d7b9da60a4cea70e.scope: Deactivated successfully.
Jan 30 23:56:55 np0005603435 podman[267283]: 2026-01-31 04:56:55.10803242 +0000 UTC m=+0.052163835 container create 0db22f91f01a8b5493a4ebc88644e8aadde728e71223233560fe1684ab8e8e92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_robinson, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:56:55 np0005603435 systemd[1]: Started libpod-conmon-0db22f91f01a8b5493a4ebc88644e8aadde728e71223233560fe1684ab8e8e92.scope.
Jan 30 23:56:55 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:56:55 np0005603435 podman[267283]: 2026-01-31 04:56:55.079097943 +0000 UTC m=+0.023229378 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:56:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6d0c792e499bc75e1190fe7928fbc2ca4fea3a44681778651a0fcc6b2263fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:56:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6d0c792e499bc75e1190fe7928fbc2ca4fea3a44681778651a0fcc6b2263fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:56:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6d0c792e499bc75e1190fe7928fbc2ca4fea3a44681778651a0fcc6b2263fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:56:55 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6d0c792e499bc75e1190fe7928fbc2ca4fea3a44681778651a0fcc6b2263fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:56:55 np0005603435 podman[267283]: 2026-01-31 04:56:55.205457289 +0000 UTC m=+0.149588704 container init 0db22f91f01a8b5493a4ebc88644e8aadde728e71223233560fe1684ab8e8e92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_robinson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 30 23:56:55 np0005603435 podman[267283]: 2026-01-31 04:56:55.212548252 +0000 UTC m=+0.156679647 container start 0db22f91f01a8b5493a4ebc88644e8aadde728e71223233560fe1684ab8e8e92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_robinson, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 30 23:56:55 np0005603435 podman[267283]: 2026-01-31 04:56:55.215972395 +0000 UTC m=+0.160103790 container attach 0db22f91f01a8b5493a4ebc88644e8aadde728e71223233560fe1684ab8e8e92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_robinson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.220 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:56:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1525579210' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.260 239942 DEBUG oslo_concurrency.processutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.266 239942 DEBUG nova.compute.provider_tree [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.282 239942 DEBUG nova.scheduler.client.report [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.305 239942 DEBUG oslo_concurrency.lockutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.305 239942 DEBUG nova.compute.manager [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.378 239942 DEBUG nova.compute.manager [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.379 239942 DEBUG nova.network.neutron [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.403 239942 INFO nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.418 239942 DEBUG nova.compute.manager [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.464 239942 INFO nova.virt.block_device [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Booting with volume 45fe01a6-1d82-456a-b502-568386cb1d48 at /dev/vda#033[00m
Jan 30 23:56:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 349 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 33 KiB/s wr, 123 op/s
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.553 239942 DEBUG nova.policy [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e10f13b98624406985dec6a5dcc391c7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.575 239942 DEBUG os_brick.utils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.576 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.585 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.585 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[504488aa-05f0-4685-9087-6dcc95effb83]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.586 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.590 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.591 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[95f8e413-2bf7-46ca-bc2f-8965d86eeae6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.592 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.599 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.599 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b0414b-1deb-4518-a4ea-485bdd285d61]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.600 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[048e4d0f-b184-455e-9ee2-2a3c044e7b3b]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.601 239942 DEBUG oslo_concurrency.processutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.620 239942 DEBUG oslo_concurrency.processutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.623 239942 DEBUG os_brick.initiator.connectors.lightos [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.624 239942 DEBUG os_brick.initiator.connectors.lightos [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.624 239942 DEBUG os_brick.initiator.connectors.lightos [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.624 239942 DEBUG os_brick.utils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] <== get_connector_properties: return (49ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:56:55 np0005603435 nova_compute[239938]: 2026-01-31 04:56:55.625 239942 DEBUG nova.virt.block_device [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Updating existing volume attachment record: 35d2177d-6620-4d23-ad5a-f5f9e3e428ad _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:56:55 np0005603435 lvm[267385]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:56:55 np0005603435 lvm[267385]: VG ceph_vg0 finished
Jan 30 23:56:55 np0005603435 lvm[267387]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:56:55 np0005603435 lvm[267387]: VG ceph_vg1 finished
Jan 30 23:56:55 np0005603435 lvm[267388]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:56:55 np0005603435 lvm[267388]: VG ceph_vg2 finished
Jan 30 23:56:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:55.921 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:55.922 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:55.923 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:55 np0005603435 objective_robinson[267300]: {}
Jan 30 23:56:55 np0005603435 systemd[1]: libpod-0db22f91f01a8b5493a4ebc88644e8aadde728e71223233560fe1684ab8e8e92.scope: Deactivated successfully.
Jan 30 23:56:55 np0005603435 systemd[1]: libpod-0db22f91f01a8b5493a4ebc88644e8aadde728e71223233560fe1684ab8e8e92.scope: Consumed 1.008s CPU time.
Jan 30 23:56:56 np0005603435 podman[267391]: 2026-01-31 04:56:56.055718257 +0000 UTC m=+0.039978637 container died 0db22f91f01a8b5493a4ebc88644e8aadde728e71223233560fe1684ab8e8e92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 30 23:56:56 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5d6d0c792e499bc75e1190fe7928fbc2ca4fea3a44681778651a0fcc6b2263fa-merged.mount: Deactivated successfully.
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.105 239942 DEBUG nova.network.neutron [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Successfully created port: 3ee2f2be-ab08-486b-9003-3c2f0b450b03 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:56:56 np0005603435 podman[267391]: 2026-01-31 04:56:56.123277186 +0000 UTC m=+0.107537536 container remove 0db22f91f01a8b5493a4ebc88644e8aadde728e71223233560fe1684ab8e8e92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:56:56 np0005603435 systemd[1]: libpod-conmon-0db22f91f01a8b5493a4ebc88644e8aadde728e71223233560fe1684ab8e8e92.scope: Deactivated successfully.
Jan 30 23:56:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:56:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:56:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:56:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.346 239942 DEBUG nova.compute.manager [req-4dd3f144-5f76-4be1-ae9f-0efc2875961c req-77f12192-5584-4256-89a8-e7cbaf29db40 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Received event network-vif-plugged-f4095bc2-be91-4b88-adee-fb762fd4a421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.348 239942 DEBUG oslo_concurrency.lockutils [req-4dd3f144-5f76-4be1-ae9f-0efc2875961c req-77f12192-5584-4256-89a8-e7cbaf29db40 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.348 239942 DEBUG oslo_concurrency.lockutils [req-4dd3f144-5f76-4be1-ae9f-0efc2875961c req-77f12192-5584-4256-89a8-e7cbaf29db40 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.348 239942 DEBUG oslo_concurrency.lockutils [req-4dd3f144-5f76-4be1-ae9f-0efc2875961c req-77f12192-5584-4256-89a8-e7cbaf29db40 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.349 239942 DEBUG nova.compute.manager [req-4dd3f144-5f76-4be1-ae9f-0efc2875961c req-77f12192-5584-4256-89a8-e7cbaf29db40 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] No waiting events found dispatching network-vif-plugged-f4095bc2-be91-4b88-adee-fb762fd4a421 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.349 239942 WARNING nova.compute.manager [req-4dd3f144-5f76-4be1-ae9f-0efc2875961c req-77f12192-5584-4256-89a8-e7cbaf29db40 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Received unexpected event network-vif-plugged-f4095bc2-be91-4b88-adee-fb762fd4a421 for instance with vm_state building and task_state spawning.#033[00m
Jan 30 23:56:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:56:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1790846689' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.461 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835416.461116, 46143cbc-0ca2-4cea-bc49-98861e82728b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.462 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] VM Started (Lifecycle Event)#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.464 239942 DEBUG nova.compute.manager [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.468 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.471 239942 INFO nova.virt.libvirt.driver [-] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Instance spawned successfully.#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.472 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.483 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.494 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.499 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.499 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.500 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.500 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.500 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.500 239942 DEBUG nova.virt.libvirt.driver [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.530 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.530 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835416.4639108, 46143cbc-0ca2-4cea-bc49-98861e82728b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.530 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.559 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.563 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835416.4671154, 46143cbc-0ca2-4cea-bc49-98861e82728b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.563 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.571 239942 INFO nova.compute.manager [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Took 7.29 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.572 239942 DEBUG nova.compute.manager [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.581 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.585 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.613 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.655 239942 INFO nova.compute.manager [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Took 9.75 seconds to build instance.#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.684 239942 DEBUG oslo_concurrency.lockutils [None req-01eab4ae-27d3-41d3-b5df-c906567c4c99 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "46143cbc-0ca2-4cea-bc49-98861e82728b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.746 239942 DEBUG nova.compute.manager [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.748 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.749 239942 INFO nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Creating image(s)#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.749 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.750 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Ensure instance console log exists: /var/lib/nova/instances/5c1cf313-39cd-420b-98f1-026da341b273/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.750 239942 DEBUG oslo_concurrency.lockutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.750 239942 DEBUG oslo_concurrency.lockutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.751 239942 DEBUG oslo_concurrency.lockutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.922 239942 DEBUG nova.network.neutron [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Successfully updated port: 3ee2f2be-ab08-486b-9003-3c2f0b450b03 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.941 239942 DEBUG oslo_concurrency.lockutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "refresh_cache-5c1cf313-39cd-420b-98f1-026da341b273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.942 239942 DEBUG oslo_concurrency.lockutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquired lock "refresh_cache-5c1cf313-39cd-420b-98f1-026da341b273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:56:56 np0005603435 nova_compute[239938]: 2026-01-31 04:56:56.942 239942 DEBUG nova.network.neutron [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:56:56 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:56:56 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.061 239942 DEBUG nova.compute.manager [req-94a96f87-f970-4d08-a2d6-fc2061657902 req-8ea1497e-0602-4c4b-a99f-5ac0e25be306 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Received event network-changed-3ee2f2be-ab08-486b-9003-3c2f0b450b03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.061 239942 DEBUG nova.compute.manager [req-94a96f87-f970-4d08-a2d6-fc2061657902 req-8ea1497e-0602-4c4b-a99f-5ac0e25be306 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Refreshing instance network info cache due to event network-changed-3ee2f2be-ab08-486b-9003-3c2f0b450b03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.062 239942 DEBUG oslo_concurrency.lockutils [req-94a96f87-f970-4d08-a2d6-fc2061657902 req-8ea1497e-0602-4c4b-a99f-5ac0e25be306 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-5c1cf313-39cd-420b-98f1-026da341b273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.081 239942 DEBUG nova.network.neutron [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:56:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 350 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 55 KiB/s wr, 204 op/s
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.830 239942 DEBUG nova.network.neutron [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Updating instance_info_cache with network_info: [{"id": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "address": "fa:16:3e:38:18:ff", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee2f2be-ab", "ovs_interfaceid": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.856 239942 DEBUG oslo_concurrency.lockutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Releasing lock "refresh_cache-5c1cf313-39cd-420b-98f1-026da341b273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.857 239942 DEBUG nova.compute.manager [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Instance network_info: |[{"id": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "address": "fa:16:3e:38:18:ff", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee2f2be-ab", "ovs_interfaceid": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.857 239942 DEBUG oslo_concurrency.lockutils [req-94a96f87-f970-4d08-a2d6-fc2061657902 req-8ea1497e-0602-4c4b-a99f-5ac0e25be306 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-5c1cf313-39cd-420b-98f1-026da341b273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.857 239942 DEBUG nova.network.neutron [req-94a96f87-f970-4d08-a2d6-fc2061657902 req-8ea1497e-0602-4c4b-a99f-5ac0e25be306 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Refreshing network info cache for port 3ee2f2be-ab08-486b-9003-3c2f0b450b03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.861 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Start _get_guest_xml network_info=[{"id": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "address": "fa:16:3e:38:18:ff", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee2f2be-ab", "ovs_interfaceid": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'attachment_id': '35d2177d-6620-4d23-ad5a-f5f9e3e428ad', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-45fe01a6-1d82-456a-b502-568386cb1d48', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '45fe01a6-1d82-456a-b502-568386cb1d48', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '5c1cf313-39cd-420b-98f1-026da341b273', 'attached_at': '', 'detached_at': '', 'volume_id': '45fe01a6-1d82-456a-b502-568386cb1d48', 'serial': '45fe01a6-1d82-456a-b502-568386cb1d48'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.867 239942 WARNING nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.879 239942 DEBUG nova.virt.libvirt.host [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.880 239942 DEBUG nova.virt.libvirt.host [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.885 239942 DEBUG nova.virt.libvirt.host [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.885 239942 DEBUG nova.virt.libvirt.host [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.886 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.886 239942 DEBUG nova.virt.hardware [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.886 239942 DEBUG nova.virt.hardware [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.887 239942 DEBUG nova.virt.hardware [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.887 239942 DEBUG nova.virt.hardware [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.887 239942 DEBUG nova.virt.hardware [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.887 239942 DEBUG nova.virt.hardware [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.887 239942 DEBUG nova.virt.hardware [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.888 239942 DEBUG nova.virt.hardware [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.888 239942 DEBUG nova.virt.hardware [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.888 239942 DEBUG nova.virt.hardware [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.888 239942 DEBUG nova.virt.hardware [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.912 239942 DEBUG nova.storage.rbd_utils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image 5c1cf313-39cd-420b-98f1-026da341b273_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:56:57 np0005603435 nova_compute[239938]: 2026-01-31 04:56:57.916 239942 DEBUG oslo_concurrency.processutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Jan 30 23:56:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Jan 30 23:56:57 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.003 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:56:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Jan 30 23:56:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Jan 30 23:56:58 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Jan 30 23:56:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:56:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3384329001' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.484 239942 DEBUG oslo_concurrency.processutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.512 239942 DEBUG nova.virt.libvirt.vif [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:56:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-863134085',display_name='tempest-TestVolumeBootPattern-server-863134085',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-863134085',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMCGQWsMIpUReejiJa4LLn2uTMRcPNVUKy3r7lp0BAh1r0nLhjEfcHskPuueezEtVAWbrIlq/WV3PYQ0vKGreYOPxpY3Xnz3OjrpOhX/Q6AIWXZTJpS2jBEA3mt0kVgrg==',key_name='tempest-TestVolumeBootPattern-1354425942',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-9txe0qqi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:56:55Z,user_data=None,user_id='e10f13b98624406985dec6a5dcc391c7',uuid=5c1cf313-39cd-420b-98f1-026da341b273,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "address": "fa:16:3e:38:18:ff", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee2f2be-ab", "ovs_interfaceid": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.513 239942 DEBUG nova.network.os_vif_util [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "address": "fa:16:3e:38:18:ff", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee2f2be-ab", "ovs_interfaceid": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.513 239942 DEBUG nova.network.os_vif_util [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:18:ff,bridge_name='br-int',has_traffic_filtering=True,id=3ee2f2be-ab08-486b-9003-3c2f0b450b03,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ee2f2be-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.515 239942 DEBUG nova.objects.instance [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c1cf313-39cd-420b-98f1-026da341b273 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.528 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  <uuid>5c1cf313-39cd-420b-98f1-026da341b273</uuid>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  <name>instance-00000017</name>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <nova:name>tempest-TestVolumeBootPattern-server-863134085</nova:name>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:56:57</nova:creationTime>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:        <nova:user uuid="e10f13b98624406985dec6a5dcc391c7">tempest-TestVolumeBootPattern-1782423025-project-member</nova:user>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:        <nova:project uuid="e332802dd6cf49c59f8ed38e70addb0e">tempest-TestVolumeBootPattern-1782423025</nova:project>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:        <nova:port uuid="3ee2f2be-ab08-486b-9003-3c2f0b450b03">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <entry name="serial">5c1cf313-39cd-420b-98f1-026da341b273</entry>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <entry name="uuid">5c1cf313-39cd-420b-98f1-026da341b273</entry>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/5c1cf313-39cd-420b-98f1-026da341b273_disk.config">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="volumes/volume-45fe01a6-1d82-456a-b502-568386cb1d48">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <serial>45fe01a6-1d82-456a-b502-568386cb1d48</serial>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:38:18:ff"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <target dev="tap3ee2f2be-ab"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/5c1cf313-39cd-420b-98f1-026da341b273/console.log" append="off"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:56:58 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:56:58 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:56:58 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:56:58 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.529 239942 DEBUG nova.compute.manager [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Preparing to wait for external event network-vif-plugged-3ee2f2be-ab08-486b-9003-3c2f0b450b03 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.529 239942 DEBUG oslo_concurrency.lockutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "5c1cf313-39cd-420b-98f1-026da341b273-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.529 239942 DEBUG oslo_concurrency.lockutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "5c1cf313-39cd-420b-98f1-026da341b273-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.530 239942 DEBUG oslo_concurrency.lockutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "5c1cf313-39cd-420b-98f1-026da341b273-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.530 239942 DEBUG nova.virt.libvirt.vif [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:56:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-863134085',display_name='tempest-TestVolumeBootPattern-server-863134085',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-863134085',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMCGQWsMIpUReejiJa4LLn2uTMRcPNVUKy3r7lp0BAh1r0nLhjEfcHskPuueezEtVAWbrIlq/WV3PYQ0vKGreYOPxpY3Xnz3OjrpOhX/Q6AIWXZTJpS2jBEA3mt0kVgrg==',key_name='tempest-TestVolumeBootPattern-1354425942',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-9txe0qqi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:56:55Z,user_data=None,user_id='e10f13b98624406985dec6a5dcc391c7',uuid=5c1cf313-39cd-420b-98f1-026da341b273,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "address": "fa:16:3e:38:18:ff", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee2f2be-ab", "ovs_interfaceid": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.531 239942 DEBUG nova.network.os_vif_util [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "address": "fa:16:3e:38:18:ff", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee2f2be-ab", "ovs_interfaceid": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.531 239942 DEBUG nova.network.os_vif_util [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:18:ff,bridge_name='br-int',has_traffic_filtering=True,id=3ee2f2be-ab08-486b-9003-3c2f0b450b03,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ee2f2be-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.532 239942 DEBUG os_vif [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:18:ff,bridge_name='br-int',has_traffic_filtering=True,id=3ee2f2be-ab08-486b-9003-3c2f0b450b03,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ee2f2be-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.532 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.532 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.533 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.536 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.536 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ee2f2be-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.536 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3ee2f2be-ab, col_values=(('external_ids', {'iface-id': '3ee2f2be-ab08-486b-9003-3c2f0b450b03', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:18:ff', 'vm-uuid': '5c1cf313-39cd-420b-98f1-026da341b273'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.559 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:58 np0005603435 NetworkManager[49097]: <info>  [1769835418.5605] manager: (tap3ee2f2be-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.562 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.568 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.569 239942 INFO os_vif [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:18:ff,bridge_name='br-int',has_traffic_filtering=True,id=3ee2f2be-ab08-486b-9003-3c2f0b450b03,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ee2f2be-ab')#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.641 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.641 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.642 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No VIF found with MAC fa:16:3e:38:18:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.642 239942 INFO nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Using config drive#033[00m
Jan 30 23:56:58 np0005603435 nova_compute[239938]: 2026-01-31 04:56:58.675 239942 DEBUG nova.storage.rbd_utils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image 5c1cf313-39cd-420b-98f1-026da341b273_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.246 239942 DEBUG nova.network.neutron [req-94a96f87-f970-4d08-a2d6-fc2061657902 req-8ea1497e-0602-4c4b-a99f-5ac0e25be306 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Updated VIF entry in instance network info cache for port 3ee2f2be-ab08-486b-9003-3c2f0b450b03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.247 239942 DEBUG nova.network.neutron [req-94a96f87-f970-4d08-a2d6-fc2061657902 req-8ea1497e-0602-4c4b-a99f-5ac0e25be306 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Updating instance_info_cache with network_info: [{"id": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "address": "fa:16:3e:38:18:ff", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee2f2be-ab", "ovs_interfaceid": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.271 239942 DEBUG oslo_concurrency.lockutils [req-94a96f87-f970-4d08-a2d6-fc2061657902 req-8ea1497e-0602-4c4b-a99f-5ac0e25be306 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-5c1cf313-39cd-420b-98f1-026da341b273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:56:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 350 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 698 KiB/s rd, 29 KiB/s wr, 116 op/s
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.500 239942 INFO nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Creating config drive at /var/lib/nova/instances/5c1cf313-39cd-420b-98f1-026da341b273/disk.config#033[00m
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.506 239942 DEBUG oslo_concurrency.processutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5c1cf313-39cd-420b-98f1-026da341b273/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmphno9bqv4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.633 239942 DEBUG oslo_concurrency.processutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5c1cf313-39cd-420b-98f1-026da341b273/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmphno9bqv4" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.671 239942 DEBUG nova.storage.rbd_utils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image 5c1cf313-39cd-420b-98f1-026da341b273_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.677 239942 DEBUG oslo_concurrency.processutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5c1cf313-39cd-420b-98f1-026da341b273/disk.config 5c1cf313-39cd-420b-98f1-026da341b273_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.789 239942 DEBUG nova.compute.manager [req-db54d122-3024-4922-aeaa-85529bc5cba8 req-515884ad-c1cd-4dc3-8b3f-938f5f5b499e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Received event network-changed-f4095bc2-be91-4b88-adee-fb762fd4a421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.791 239942 DEBUG nova.compute.manager [req-db54d122-3024-4922-aeaa-85529bc5cba8 req-515884ad-c1cd-4dc3-8b3f-938f5f5b499e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Refreshing instance network info cache due to event network-changed-f4095bc2-be91-4b88-adee-fb762fd4a421. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.792 239942 DEBUG oslo_concurrency.lockutils [req-db54d122-3024-4922-aeaa-85529bc5cba8 req-515884ad-c1cd-4dc3-8b3f-938f5f5b499e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-46143cbc-0ca2-4cea-bc49-98861e82728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.792 239942 DEBUG oslo_concurrency.lockutils [req-db54d122-3024-4922-aeaa-85529bc5cba8 req-515884ad-c1cd-4dc3-8b3f-938f5f5b499e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-46143cbc-0ca2-4cea-bc49-98861e82728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.793 239942 DEBUG nova.network.neutron [req-db54d122-3024-4922-aeaa-85529bc5cba8 req-515884ad-c1cd-4dc3-8b3f-938f5f5b499e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Refreshing network info cache for port f4095bc2-be91-4b88-adee-fb762fd4a421 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.844 239942 DEBUG oslo_concurrency.processutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5c1cf313-39cd-420b-98f1-026da341b273/disk.config 5c1cf313-39cd-420b-98f1-026da341b273_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.844 239942 INFO nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Deleting local config drive /var/lib/nova/instances/5c1cf313-39cd-420b-98f1-026da341b273/disk.config because it was imported into RBD.#033[00m
Jan 30 23:56:59 np0005603435 virtqemud[240256]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 30 23:56:59 np0005603435 virtqemud[240256]: hostname: compute-0
Jan 30 23:56:59 np0005603435 virtqemud[240256]: End of file while reading data: Input/output error
Jan 30 23:56:59 np0005603435 kernel: tap3ee2f2be-ab: entered promiscuous mode
Jan 30 23:56:59 np0005603435 NetworkManager[49097]: <info>  [1769835419.8881] manager: (tap3ee2f2be-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/116)
Jan 30 23:56:59 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:59Z|00219|binding|INFO|Claiming lport 3ee2f2be-ab08-486b-9003-3c2f0b450b03 for this chassis.
Jan 30 23:56:59 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:59Z|00220|binding|INFO|3ee2f2be-ab08-486b-9003-3c2f0b450b03: Claiming fa:16:3e:38:18:ff 10.100.0.8
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.891 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:59 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:59Z|00221|binding|INFO|Setting lport 3ee2f2be-ab08-486b-9003-3c2f0b450b03 ovn-installed in OVS
Jan 30 23:56:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:59.902 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:18:ff 10.100.0.8'], port_security=['fa:16:3e:38:18:ff 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5c1cf313-39cd-420b-98f1-026da341b273', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5925722f-3c3e-42bd-9802-ef7105d62a1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02525b3d-5afa-441f-ab06-d5abe31dc4af, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=3ee2f2be-ab08-486b-9003-3c2f0b450b03) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:56:59 np0005603435 ovn_controller[145670]: 2026-01-31T04:56:59Z|00222|binding|INFO|Setting lport 3ee2f2be-ab08-486b-9003-3c2f0b450b03 up in Southbound
Jan 30 23:56:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:59.904 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 3ee2f2be-ab08-486b-9003-3c2f0b450b03 in datapath 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 bound to our chassis#033[00m
Jan 30 23:56:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:59.907 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3#033[00m
Jan 30 23:56:59 np0005603435 nova_compute[239938]: 2026-01-31 04:56:59.908 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:56:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:59.916 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b000d9ed-63fa-40f0-95ce-5c29623876a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:59.917 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5b0cf2db-21 in ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:56:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:59.919 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5b0cf2db-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:56:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:59.919 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4facf846-266a-462c-a03b-69eb354553fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:59.920 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[da740ffa-d014-4c18-bffb-d4ab4f075766]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:59.933 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[6475a59e-f7e1-4f8f-8dfa-cfd8b881cb16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:59 np0005603435 systemd-udevd[267553]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:56:59 np0005603435 systemd-machined[208030]: New machine qemu-23-instance-00000017.
Jan 30 23:56:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:59.946 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[904b4312-3736-4a46-9ec5-ae3834e3b02c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:59 np0005603435 NetworkManager[49097]: <info>  [1769835419.9491] device (tap3ee2f2be-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:56:59 np0005603435 systemd[1]: Started Virtual Machine qemu-23-instance-00000017.
Jan 30 23:56:59 np0005603435 NetworkManager[49097]: <info>  [1769835419.9500] device (tap3ee2f2be-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:56:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:56:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/757682751' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:56:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:56:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/757682751' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:56:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:59.989 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[8331c099-8f1a-47a8-9627-a2d499fc9894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:56:59 np0005603435 NetworkManager[49097]: <info>  [1769835419.9970] manager: (tap5b0cf2db-20): new Veth device (/org/freedesktop/NetworkManager/Devices/117)
Jan 30 23:56:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:56:59.998 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[8fba036b-079c-4426-bc3f-a2109ba5dbe1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:00.030 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[5375bbc3-1744-4013-aa33-a460df0f7ca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:00.034 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d14589-d018-4b69-bb0a-f024d5000eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:00 np0005603435 NetworkManager[49097]: <info>  [1769835420.0565] device (tap5b0cf2db-20): carrier: link connected
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:00.063 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[1d16bc16-1a7a-4d05-8a6c-5911517291c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:00.083 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c1178ba6-3ee5-46f2-9044-79b435e8dc09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b0cf2db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:f7:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447445, 'reachable_time': 41879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267584, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:00.102 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4f06efe8-55e2-424f-9df9-4d2dbc85ef6d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:f719'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447445, 'tstamp': 447445}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267585, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:00.121 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[fac6ede0-5419-41bd-91fb-e7a2a9d87838]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b0cf2db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:f7:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447445, 'reachable_time': 41879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267586, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:00.152 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[71d6d0fa-c028-4c83-9f47-d634532f04e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:00.211 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d4cdf6-480b-4c89-85b2-cafd2214f61b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:00.212 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b0cf2db-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:00.213 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:00.213 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b0cf2db-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.215 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:00 np0005603435 kernel: tap5b0cf2db-20: entered promiscuous mode
Jan 30 23:57:00 np0005603435 NetworkManager[49097]: <info>  [1769835420.2166] manager: (tap5b0cf2db-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.218 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.221 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:00.224 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b0cf2db-20, col_values=(('external_ids', {'iface-id': '07e657c3-16d2-4095-9f39-32a275cb472e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.225 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.226 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:00 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:00Z|00223|binding|INFO|Releasing lport 07e657c3-16d2-4095-9f39-32a275cb472e from this chassis (sb_readonly=0)
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:00.228 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:00.229 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[5bfc6f6d-5f83-448c-85f7-eadec8605874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:00.230 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.pid.haproxy
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:57:00 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:00.231 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'env', 'PROCESS_TAG=haproxy-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.235 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.373 239942 DEBUG nova.compute.manager [req-cf434185-bdf5-48d1-92d8-97768db221a0 req-f8760920-ea6b-46b6-9375-81bd3e78a806 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Received event network-vif-plugged-3ee2f2be-ab08-486b-9003-3c2f0b450b03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.373 239942 DEBUG oslo_concurrency.lockutils [req-cf434185-bdf5-48d1-92d8-97768db221a0 req-f8760920-ea6b-46b6-9375-81bd3e78a806 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "5c1cf313-39cd-420b-98f1-026da341b273-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.373 239942 DEBUG oslo_concurrency.lockutils [req-cf434185-bdf5-48d1-92d8-97768db221a0 req-f8760920-ea6b-46b6-9375-81bd3e78a806 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "5c1cf313-39cd-420b-98f1-026da341b273-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.373 239942 DEBUG oslo_concurrency.lockutils [req-cf434185-bdf5-48d1-92d8-97768db221a0 req-f8760920-ea6b-46b6-9375-81bd3e78a806 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "5c1cf313-39cd-420b-98f1-026da341b273-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.374 239942 DEBUG nova.compute.manager [req-cf434185-bdf5-48d1-92d8-97768db221a0 req-f8760920-ea6b-46b6-9375-81bd3e78a806 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Processing event network-vif-plugged-3ee2f2be-ab08-486b-9003-3c2f0b450b03 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.433 239942 DEBUG nova.compute.manager [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.434 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835420.4331214, 5c1cf313-39cd-420b-98f1-026da341b273 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.434 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] VM Started (Lifecycle Event)#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.439 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.442 239942 INFO nova.virt.libvirt.driver [-] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Instance spawned successfully.#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.443 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.455 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.462 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.468 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.469 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.469 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.470 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.471 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.471 239942 DEBUG nova.virt.libvirt.driver [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.487 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.488 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835420.433333, 5c1cf313-39cd-420b-98f1-026da341b273 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.489 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.507 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.511 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835420.4373028, 5c1cf313-39cd-420b-98f1-026da341b273 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.511 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.539 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.544 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.550 239942 INFO nova.compute.manager [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Took 3.80 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.551 239942 DEBUG nova.compute.manager [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.569 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.609 239942 INFO nova.compute.manager [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Took 6.05 seconds to build instance.#033[00m
Jan 30 23:57:00 np0005603435 podman[267660]: 2026-01-31 04:57:00.61831213 +0000 UTC m=+0.048399953 container create 9d8f3ce002400f432c8424e0dbe121bada67ff9caaf03771a5893ee7a20e2bac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 30 23:57:00 np0005603435 nova_compute[239938]: 2026-01-31 04:57:00.631 239942 DEBUG oslo_concurrency.lockutils [None req-9e7ebe9d-63dc-4003-8c2c-8810fd3ea73a e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "5c1cf313-39cd-420b-98f1-026da341b273" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:00 np0005603435 systemd[1]: Started libpod-conmon-9d8f3ce002400f432c8424e0dbe121bada67ff9caaf03771a5893ee7a20e2bac.scope.
Jan 30 23:57:00 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:57:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e232f73622aae3106fc8f0213a84e0d9c927b0df867c8265b435ace9039bcb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:57:00 np0005603435 podman[267660]: 2026-01-31 04:57:00.587279052 +0000 UTC m=+0.017366895 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:57:00 np0005603435 podman[267660]: 2026-01-31 04:57:00.688467443 +0000 UTC m=+0.118555266 container init 9d8f3ce002400f432c8424e0dbe121bada67ff9caaf03771a5893ee7a20e2bac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 30 23:57:00 np0005603435 podman[267660]: 2026-01-31 04:57:00.695939005 +0000 UTC m=+0.126026848 container start 9d8f3ce002400f432c8424e0dbe121bada67ff9caaf03771a5893ee7a20e2bac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 30 23:57:00 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[267673]: [NOTICE]   (267679) : New worker (267681) forked
Jan 30 23:57:00 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[267673]: [NOTICE]   (267679) : Loading success.
Jan 30 23:57:01 np0005603435 nova_compute[239938]: 2026-01-31 04:57:01.377 239942 DEBUG nova.network.neutron [req-db54d122-3024-4922-aeaa-85529bc5cba8 req-515884ad-c1cd-4dc3-8b3f-938f5f5b499e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Updated VIF entry in instance network info cache for port f4095bc2-be91-4b88-adee-fb762fd4a421. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:57:01 np0005603435 nova_compute[239938]: 2026-01-31 04:57:01.378 239942 DEBUG nova.network.neutron [req-db54d122-3024-4922-aeaa-85529bc5cba8 req-515884ad-c1cd-4dc3-8b3f-938f5f5b499e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Updating instance_info_cache with network_info: [{"id": "f4095bc2-be91-4b88-adee-fb762fd4a421", "address": "fa:16:3e:c9:02:58", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4095bc2-be", "ovs_interfaceid": "f4095bc2-be91-4b88-adee-fb762fd4a421", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:57:01 np0005603435 nova_compute[239938]: 2026-01-31 04:57:01.401 239942 DEBUG oslo_concurrency.lockutils [req-db54d122-3024-4922-aeaa-85529bc5cba8 req-515884ad-c1cd-4dc3-8b3f-938f5f5b499e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-46143cbc-0ca2-4cea-bc49-98861e82728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:57:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 350 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 911 KiB/s rd, 24 KiB/s wr, 118 op/s
Jan 30 23:57:02 np0005603435 podman[267690]: 2026-01-31 04:57:02.095455144 +0000 UTC m=+0.063101031 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 30 23:57:02 np0005603435 podman[267691]: 2026-01-31 04:57:02.137671415 +0000 UTC m=+0.105674511 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 30 23:57:02 np0005603435 nova_compute[239938]: 2026-01-31 04:57:02.500 239942 DEBUG nova.compute.manager [req-59c1bd6f-79ed-4708-9150-86d8a569dbae req-69c6c439-76f2-4dc8-bd81-73c4a9aa13ca c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Received event network-vif-plugged-3ee2f2be-ab08-486b-9003-3c2f0b450b03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:57:02 np0005603435 nova_compute[239938]: 2026-01-31 04:57:02.500 239942 DEBUG oslo_concurrency.lockutils [req-59c1bd6f-79ed-4708-9150-86d8a569dbae req-69c6c439-76f2-4dc8-bd81-73c4a9aa13ca c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "5c1cf313-39cd-420b-98f1-026da341b273-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:57:02 np0005603435 nova_compute[239938]: 2026-01-31 04:57:02.501 239942 DEBUG oslo_concurrency.lockutils [req-59c1bd6f-79ed-4708-9150-86d8a569dbae req-69c6c439-76f2-4dc8-bd81-73c4a9aa13ca c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "5c1cf313-39cd-420b-98f1-026da341b273-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:57:02 np0005603435 nova_compute[239938]: 2026-01-31 04:57:02.501 239942 DEBUG oslo_concurrency.lockutils [req-59c1bd6f-79ed-4708-9150-86d8a569dbae req-69c6c439-76f2-4dc8-bd81-73c4a9aa13ca c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "5c1cf313-39cd-420b-98f1-026da341b273-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:02 np0005603435 nova_compute[239938]: 2026-01-31 04:57:02.501 239942 DEBUG nova.compute.manager [req-59c1bd6f-79ed-4708-9150-86d8a569dbae req-69c6c439-76f2-4dc8-bd81-73c4a9aa13ca c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] No waiting events found dispatching network-vif-plugged-3ee2f2be-ab08-486b-9003-3c2f0b450b03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:57:02 np0005603435 nova_compute[239938]: 2026-01-31 04:57:02.502 239942 WARNING nova.compute.manager [req-59c1bd6f-79ed-4708-9150-86d8a569dbae req-69c6c439-76f2-4dc8-bd81-73c4a9aa13ca c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Received unexpected event network-vif-plugged-3ee2f2be-ab08-486b-9003-3c2f0b450b03 for instance with vm_state active and task_state None.#033[00m
Jan 30 23:57:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Jan 30 23:57:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Jan 30 23:57:02 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Jan 30 23:57:02 np0005603435 nova_compute[239938]: 2026-01-31 04:57:02.756 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835407.754214, 2d5c8c52-0781-43ca-9fd1-58e205d20e4b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:57:02 np0005603435 nova_compute[239938]: 2026-01-31 04:57:02.756 239942 INFO nova.compute.manager [-] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:57:02 np0005603435 nova_compute[239938]: 2026-01-31 04:57:02.784 239942 DEBUG nova.compute.manager [None req-c627a7c1-0d38-43bb-ad12-1f1d9d944f0a - - - - - -] [instance: 2d5c8c52-0781-43ca-9fd1-58e205d20e4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:57:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:57:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 350 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 29 KiB/s wr, 349 op/s
Jan 30 23:57:03 np0005603435 nova_compute[239938]: 2026-01-31 04:57:03.560 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:04 np0005603435 nova_compute[239938]: 2026-01-31 04:57:04.775 239942 DEBUG nova.compute.manager [req-0028cc39-e900-4dd3-88dd-c632691fa6d3 req-e8494f08-869a-4935-aec6-a453a9c3a5b1 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Received event network-changed-3ee2f2be-ab08-486b-9003-3c2f0b450b03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:57:04 np0005603435 nova_compute[239938]: 2026-01-31 04:57:04.776 239942 DEBUG nova.compute.manager [req-0028cc39-e900-4dd3-88dd-c632691fa6d3 req-e8494f08-869a-4935-aec6-a453a9c3a5b1 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Refreshing instance network info cache due to event network-changed-3ee2f2be-ab08-486b-9003-3c2f0b450b03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:57:04 np0005603435 nova_compute[239938]: 2026-01-31 04:57:04.776 239942 DEBUG oslo_concurrency.lockutils [req-0028cc39-e900-4dd3-88dd-c632691fa6d3 req-e8494f08-869a-4935-aec6-a453a9c3a5b1 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-5c1cf313-39cd-420b-98f1-026da341b273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:57:04 np0005603435 nova_compute[239938]: 2026-01-31 04:57:04.777 239942 DEBUG oslo_concurrency.lockutils [req-0028cc39-e900-4dd3-88dd-c632691fa6d3 req-e8494f08-869a-4935-aec6-a453a9c3a5b1 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-5c1cf313-39cd-420b-98f1-026da341b273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:57:04 np0005603435 nova_compute[239938]: 2026-01-31 04:57:04.777 239942 DEBUG nova.network.neutron [req-0028cc39-e900-4dd3-88dd-c632691fa6d3 req-e8494f08-869a-4935-aec6-a453a9c3a5b1 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Refreshing network info cache for port 3ee2f2be-ab08-486b-9003-3c2f0b450b03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:57:05 np0005603435 nova_compute[239938]: 2026-01-31 04:57:05.261 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 350 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 25 KiB/s wr, 310 op/s
Jan 30 23:57:06 np0005603435 nova_compute[239938]: 2026-01-31 04:57:06.381 239942 DEBUG nova.network.neutron [req-0028cc39-e900-4dd3-88dd-c632691fa6d3 req-e8494f08-869a-4935-aec6-a453a9c3a5b1 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Updated VIF entry in instance network info cache for port 3ee2f2be-ab08-486b-9003-3c2f0b450b03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:57:06 np0005603435 nova_compute[239938]: 2026-01-31 04:57:06.382 239942 DEBUG nova.network.neutron [req-0028cc39-e900-4dd3-88dd-c632691fa6d3 req-e8494f08-869a-4935-aec6-a453a9c3a5b1 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Updating instance_info_cache with network_info: [{"id": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "address": "fa:16:3e:38:18:ff", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee2f2be-ab", "ovs_interfaceid": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:57:06 np0005603435 nova_compute[239938]: 2026-01-31 04:57:06.411 239942 DEBUG oslo_concurrency.lockutils [req-0028cc39-e900-4dd3-88dd-c632691fa6d3 req-e8494f08-869a-4935-aec6-a453a9c3a5b1 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-5c1cf313-39cd-420b-98f1-026da341b273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:57:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:57:06
Jan 30 23:57:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:57:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:57:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', '.rgw.root', '.mgr', 'backups', 'images', 'default.rgw.control', 'default.rgw.log']
Jan 30 23:57:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:57:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:57:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:57:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:57:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:57:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:57:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:57:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 350 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 23 KiB/s wr, 301 op/s
Jan 30 23:57:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Jan 30 23:57:07 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:07Z|00046|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.11
Jan 30 23:57:07 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:07Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c9:02:58 10.100.0.11
Jan 30 23:57:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Jan 30 23:57:07 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Jan 30 23:57:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:57:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:57:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:57:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:57:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:57:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:57:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:57:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:57:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:57:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:57:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:57:08 np0005603435 nova_compute[239938]: 2026-01-31 04:57:08.593 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 350 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 25 KiB/s wr, 324 op/s
Jan 30 23:57:10 np0005603435 nova_compute[239938]: 2026-01-31 04:57:10.262 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 350 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 111 op/s
Jan 30 23:57:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Jan 30 23:57:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Jan 30 23:57:11 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Jan 30 23:57:11 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:11Z|00048|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.11
Jan 30 23:57:11 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:11Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c9:02:58 10.100.0.11
Jan 30 23:57:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:57:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/573297782' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:57:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:57:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/573297782' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:57:12 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:12Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:02:58 10.100.0.11
Jan 30 23:57:12 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:12Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:02:58 10.100.0.11
Jan 30 23:57:12 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:12Z|00052|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.8
Jan 30 23:57:12 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:12Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:38:18:ff 10.100.0.8
Jan 30 23:57:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:57:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 350 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 35 KiB/s wr, 242 op/s
Jan 30 23:57:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Jan 30 23:57:13 np0005603435 nova_compute[239938]: 2026-01-31 04:57:13.657 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Jan 30 23:57:13 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Jan 30 23:57:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:57:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1584168000' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:57:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:57:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1584168000' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:57:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:57:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1725404239' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:57:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:57:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1725404239' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:57:15 np0005603435 nova_compute[239938]: 2026-01-31 04:57:15.263 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 350 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 39 KiB/s wr, 241 op/s
Jan 30 23:57:16 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:16Z|00054|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.8
Jan 30 23:57:16 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:16Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:38:18:ff 10.100.0.8
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.1482795809227686e-05 of space, bias 1.0, pg target 0.0034448387427683056 quantized to 32 (current 32)
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003659968715537811 of space, bias 1.0, pg target 1.0979906146613432 quantized to 32 (current 32)
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 7.540373750331308e-07 of space, bias 1.0, pg target 0.0002254571751349061 quantized to 32 (current 32)
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006671955694621015 of space, bias 1.0, pg target 0.19949147526916836 quantized to 32 (current 32)
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.72729565515535e-07 of space, bias 4.0, pg target 0.0009241845603565798 quantized to 16 (current 16)
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Jan 30 23:57:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 352 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 75 KiB/s wr, 304 op/s
Jan 30 23:57:17 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:17Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:38:18:ff 10.100.0.8
Jan 30 23:57:17 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:17Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:18:ff 10.100.0.8
Jan 30 23:57:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:57:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Jan 30 23:57:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Jan 30 23:57:18 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Jan 30 23:57:18 np0005603435 nova_compute[239938]: 2026-01-31 04:57:18.693 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 352 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 494 KiB/s rd, 46 KiB/s wr, 182 op/s
Jan 30 23:57:19 np0005603435 nova_compute[239938]: 2026-01-31 04:57:19.913 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:57:20 np0005603435 nova_compute[239938]: 2026-01-31 04:57:20.266 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:20 np0005603435 nova_compute[239938]: 2026-01-31 04:57:20.889 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:57:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 352 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 45 KiB/s wr, 119 op/s
Jan 30 23:57:22 np0005603435 nova_compute[239938]: 2026-01-31 04:57:22.883 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:57:22 np0005603435 nova_compute[239938]: 2026-01-31 04:57:22.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:57:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:57:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 352 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 44 KiB/s wr, 99 op/s
Jan 30 23:57:23 np0005603435 nova_compute[239938]: 2026-01-31 04:57:23.697 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Jan 30 23:57:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Jan 30 23:57:23 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Jan 30 23:57:23 np0005603435 nova_compute[239938]: 2026-01-31 04:57:23.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:57:23 np0005603435 nova_compute[239938]: 2026-01-31 04:57:23.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:57:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Jan 30 23:57:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Jan 30 23:57:24 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Jan 30 23:57:24 np0005603435 nova_compute[239938]: 2026-01-31 04:57:24.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:57:24 np0005603435 nova_compute[239938]: 2026-01-31 04:57:24.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:57:24 np0005603435 nova_compute[239938]: 2026-01-31 04:57:24.917 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:57:24 np0005603435 nova_compute[239938]: 2026-01-31 04:57:24.917 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:57:24 np0005603435 nova_compute[239938]: 2026-01-31 04:57:24.918 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:24 np0005603435 nova_compute[239938]: 2026-01-31 04:57:24.918 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:57:24 np0005603435 nova_compute[239938]: 2026-01-31 04:57:24.918 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:57:25 np0005603435 nova_compute[239938]: 2026-01-31 04:57:25.269 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:57:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1942124510' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:57:25 np0005603435 nova_compute[239938]: 2026-01-31 04:57:25.464 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:57:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 352 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 19 KiB/s wr, 30 op/s
Jan 30 23:57:25 np0005603435 nova_compute[239938]: 2026-01-31 04:57:25.546 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:57:25 np0005603435 nova_compute[239938]: 2026-01-31 04:57:25.547 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:57:25 np0005603435 nova_compute[239938]: 2026-01-31 04:57:25.668 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:57:25 np0005603435 nova_compute[239938]: 2026-01-31 04:57:25.668 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:57:25 np0005603435 nova_compute[239938]: 2026-01-31 04:57:25.840 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:57:25 np0005603435 nova_compute[239938]: 2026-01-31 04:57:25.841 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4036MB free_disk=59.9875906649977GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:57:25 np0005603435 nova_compute[239938]: 2026-01-31 04:57:25.841 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:57:25 np0005603435 nova_compute[239938]: 2026-01-31 04:57:25.841 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:57:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:57:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2403913347' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:57:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:57:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2403913347' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:57:26 np0005603435 nova_compute[239938]: 2026-01-31 04:57:26.142 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance 46143cbc-0ca2-4cea-bc49-98861e82728b actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:57:26 np0005603435 nova_compute[239938]: 2026-01-31 04:57:26.143 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance 5c1cf313-39cd-420b-98f1-026da341b273 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:57:26 np0005603435 nova_compute[239938]: 2026-01-31 04:57:26.143 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:57:26 np0005603435 nova_compute[239938]: 2026-01-31 04:57:26.144 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:57:26 np0005603435 nova_compute[239938]: 2026-01-31 04:57:26.275 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:57:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:57:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/510630216' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:57:26 np0005603435 nova_compute[239938]: 2026-01-31 04:57:26.830 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:57:26 np0005603435 nova_compute[239938]: 2026-01-31 04:57:26.837 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:57:26 np0005603435 nova_compute[239938]: 2026-01-31 04:57:26.856 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:57:26 np0005603435 nova_compute[239938]: 2026-01-31 04:57:26.884 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:57:26 np0005603435 nova_compute[239938]: 2026-01-31 04:57:26.885 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:57:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/75489020' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:57:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:57:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/75489020' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:57:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 352 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 29 KiB/s wr, 34 op/s
Jan 30 23:57:27 np0005603435 nova_compute[239938]: 2026-01-31 04:57:27.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:57:27 np0005603435 nova_compute[239938]: 2026-01-31 04:57:27.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:57:27 np0005603435 nova_compute[239938]: 2026-01-31 04:57:27.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:57:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:57:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Jan 30 23:57:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Jan 30 23:57:28 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Jan 30 23:57:28 np0005603435 nova_compute[239938]: 2026-01-31 04:57:28.380 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "refresh_cache-46143cbc-0ca2-4cea-bc49-98861e82728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:57:28 np0005603435 nova_compute[239938]: 2026-01-31 04:57:28.380 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquired lock "refresh_cache-46143cbc-0ca2-4cea-bc49-98861e82728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:57:28 np0005603435 nova_compute[239938]: 2026-01-31 04:57:28.380 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 30 23:57:28 np0005603435 nova_compute[239938]: 2026-01-31 04:57:28.380 239942 DEBUG nova.objects.instance [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 46143cbc-0ca2-4cea-bc49-98861e82728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:57:28 np0005603435 nova_compute[239938]: 2026-01-31 04:57:28.749 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:28.984 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:57:28 np0005603435 nova_compute[239938]: 2026-01-31 04:57:28.985 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:28.986 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:57:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 352 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 26 KiB/s wr, 41 op/s
Jan 30 23:57:29 np0005603435 nova_compute[239938]: 2026-01-31 04:57:29.548 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Updating instance_info_cache with network_info: [{"id": "f4095bc2-be91-4b88-adee-fb762fd4a421", "address": "fa:16:3e:c9:02:58", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4095bc2-be", "ovs_interfaceid": "f4095bc2-be91-4b88-adee-fb762fd4a421", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:57:29 np0005603435 nova_compute[239938]: 2026-01-31 04:57:29.564 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Releasing lock "refresh_cache-46143cbc-0ca2-4cea-bc49-98861e82728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:57:29 np0005603435 nova_compute[239938]: 2026-01-31 04:57:29.564 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 30 23:57:29 np0005603435 nova_compute[239938]: 2026-01-31 04:57:29.565 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:57:30 np0005603435 nova_compute[239938]: 2026-01-31 04:57:30.271 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 352 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 20 KiB/s wr, 65 op/s
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.088 239942 DEBUG oslo_concurrency.lockutils [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "46143cbc-0ca2-4cea-bc49-98861e82728b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.089 239942 DEBUG oslo_concurrency.lockutils [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "46143cbc-0ca2-4cea-bc49-98861e82728b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.089 239942 DEBUG oslo_concurrency.lockutils [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.089 239942 DEBUG oslo_concurrency.lockutils [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.090 239942 DEBUG oslo_concurrency.lockutils [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.091 239942 INFO nova.compute.manager [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Terminating instance#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.092 239942 DEBUG nova.compute.manager [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:57:32 np0005603435 kernel: tapf4095bc2-be (unregistering): left promiscuous mode
Jan 30 23:57:32 np0005603435 NetworkManager[49097]: <info>  [1769835452.1410] device (tapf4095bc2-be): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:57:32 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:32Z|00224|binding|INFO|Releasing lport f4095bc2-be91-4b88-adee-fb762fd4a421 from this chassis (sb_readonly=0)
Jan 30 23:57:32 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:32Z|00225|binding|INFO|Setting lport f4095bc2-be91-4b88-adee-fb762fd4a421 down in Southbound
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.150 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:32 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:32Z|00226|binding|INFO|Removing iface tapf4095bc2-be ovn-installed in OVS
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.153 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.159 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:32 np0005603435 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Deactivated successfully.
Jan 30 23:57:32 np0005603435 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Consumed 14.633s CPU time.
Jan 30 23:57:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:32.177 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:02:58 10.100.0.11'], port_security=['fa:16:3e:c9:02:58 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '46143cbc-0ca2-4cea-bc49-98861e82728b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a10d9666-b672-4619-83b7-22dc781b5b5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b39f0e168b54a4b8f976894d21361e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31116304-b672-4fa0-88a2-3aca5935fb40', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21f14c68-4084-427c-b05e-592b1db029c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=f4095bc2-be91-4b88-adee-fb762fd4a421) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:57:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:32.178 156017 INFO neutron.agent.ovn.metadata.agent [-] Port f4095bc2-be91-4b88-adee-fb762fd4a421 in datapath a10d9666-b672-4619-83b7-22dc781b5b5b unbound from our chassis#033[00m
Jan 30 23:57:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:32.180 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a10d9666-b672-4619-83b7-22dc781b5b5b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:57:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:32.181 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b57d612c-6b96-4f03-a069-c5a9d0dcbc87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:32.182 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b namespace which is not needed anymore#033[00m
Jan 30 23:57:32 np0005603435 systemd-machined[208030]: Machine qemu-22-instance-00000016 terminated.
Jan 30 23:57:32 np0005603435 podman[267777]: 2026-01-31 04:57:32.249916521 +0000 UTC m=+0.078968549 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 30 23:57:32 np0005603435 podman[267780]: 2026-01-31 04:57:32.294280605 +0000 UTC m=+0.122893142 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.353 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:32 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[267195]: [NOTICE]   (267199) : haproxy version is 2.8.14-c23fe91
Jan 30 23:57:32 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[267195]: [NOTICE]   (267199) : path to executable is /usr/sbin/haproxy
Jan 30 23:57:32 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[267195]: [WARNING]  (267199) : Exiting Master process...
Jan 30 23:57:32 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[267195]: [ALERT]    (267199) : Current worker (267201) exited with code 143 (Terminated)
Jan 30 23:57:32 np0005603435 neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b[267195]: [WARNING]  (267199) : All workers exited. Exiting... (0)
Jan 30 23:57:32 np0005603435 systemd[1]: libpod-abb12726f856798f169b653029a287a844c9de9b03baa0678c8b025f9b2f3333.scope: Deactivated successfully.
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.365 239942 INFO nova.virt.libvirt.driver [-] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Instance destroyed successfully.#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.366 239942 DEBUG nova.objects.instance [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lazy-loading 'resources' on Instance uuid 46143cbc-0ca2-4cea-bc49-98861e82728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:57:32 np0005603435 podman[267842]: 2026-01-31 04:57:32.374887902 +0000 UTC m=+0.089198128 container died abb12726f856798f169b653029a287a844c9de9b03baa0678c8b025f9b2f3333 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.389 239942 DEBUG nova.virt.libvirt.vif [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:56:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-140451816',display_name='tempest-TransferEncryptedVolumeTest-server-140451816',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-140451816',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEV56Jk6IDRxyXFlb7xWBOMScnav9Xc5tHSoNY1YUEwOZFWGs8M7XZsrLboufTVEeGeJR0pbnMty3oYNRNpoAOeyFHYNqJJ2N05DBEMeFPzOD6DLoY1LRALz+j5Rp4/1jQ==',key_name='tempest-TransferEncryptedVolumeTest-773774193',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:56:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9b39f0e168b54a4b8f976894d21361e6',ramdisk_id='',reservation_id='r-gi0t0ny2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-483286292',owner_user_name='tempest-TransferEncryptedVolumeTest-483286292-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:56:56Z,user_data=None,user_id='27f1a6fb472c4c5fa2286d0fa48dca34',uuid=46143cbc-0ca2-4cea-bc49-98861e82728b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f4095bc2-be91-4b88-adee-fb762fd4a421", "address": "fa:16:3e:c9:02:58", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4095bc2-be", "ovs_interfaceid": "f4095bc2-be91-4b88-adee-fb762fd4a421", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.389 239942 DEBUG nova.network.os_vif_util [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converting VIF {"id": "f4095bc2-be91-4b88-adee-fb762fd4a421", "address": "fa:16:3e:c9:02:58", "network": {"id": "a10d9666-b672-4619-83b7-22dc781b5b5b", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-248373717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b39f0e168b54a4b8f976894d21361e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4095bc2-be", "ovs_interfaceid": "f4095bc2-be91-4b88-adee-fb762fd4a421", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.390 239942 DEBUG nova.network.os_vif_util [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c9:02:58,bridge_name='br-int',has_traffic_filtering=True,id=f4095bc2-be91-4b88-adee-fb762fd4a421,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4095bc2-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.390 239942 DEBUG os_vif [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:02:58,bridge_name='br-int',has_traffic_filtering=True,id=f4095bc2-be91-4b88-adee-fb762fd4a421,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4095bc2-be') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.391 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.392 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4095bc2-be, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.394 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.396 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:57:32 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-abb12726f856798f169b653029a287a844c9de9b03baa0678c8b025f9b2f3333-userdata-shm.mount: Deactivated successfully.
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.402 239942 INFO os_vif [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:02:58,bridge_name='br-int',has_traffic_filtering=True,id=f4095bc2-be91-4b88-adee-fb762fd4a421,network=Network(a10d9666-b672-4619-83b7-22dc781b5b5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4095bc2-be')#033[00m
Jan 30 23:57:32 np0005603435 systemd[1]: var-lib-containers-storage-overlay-a291955539d100f6270098cadc89cfb53b03bed2746b8d4fb1330492598e0c07-merged.mount: Deactivated successfully.
Jan 30 23:57:32 np0005603435 podman[267842]: 2026-01-31 04:57:32.415102704 +0000 UTC m=+0.129412920 container cleanup abb12726f856798f169b653029a287a844c9de9b03baa0678c8b025f9b2f3333 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 30 23:57:32 np0005603435 systemd[1]: libpod-conmon-abb12726f856798f169b653029a287a844c9de9b03baa0678c8b025f9b2f3333.scope: Deactivated successfully.
Jan 30 23:57:32 np0005603435 podman[267894]: 2026-01-31 04:57:32.483696359 +0000 UTC m=+0.043435971 container remove abb12726f856798f169b653029a287a844c9de9b03baa0678c8b025f9b2f3333 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 30 23:57:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:32.489 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[7089013a-a358-4952-81a1-c618fa530a09]: (4, ('Sat Jan 31 04:57:32 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b (abb12726f856798f169b653029a287a844c9de9b03baa0678c8b025f9b2f3333)\nabb12726f856798f169b653029a287a844c9de9b03baa0678c8b025f9b2f3333\nSat Jan 31 04:57:32 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b (abb12726f856798f169b653029a287a844c9de9b03baa0678c8b025f9b2f3333)\nabb12726f856798f169b653029a287a844c9de9b03baa0678c8b025f9b2f3333\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:32.491 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c77c1e27-4687-4fd9-a527-e9c02bec914a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:32.493 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa10d9666-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.496 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:32 np0005603435 kernel: tapa10d9666-b0: left promiscuous mode
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.503 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:32.507 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9ab0cd-55a3-46e0-be31-62daf1054bb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:32.526 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b6350982-852c-48bd-9ebc-040cd279e4db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:32.528 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[70c21858-d492-48f5-b04d-bfc40e09a5cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:32.543 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[23ee87b9-57e5-49d9-86df-4fa48f9c4dd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446829, 'reachable_time': 20812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267913, 'error': None, 'target': 'ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.545 239942 DEBUG nova.compute.manager [req-0b2367ec-50a2-4944-8662-809d4253a677 req-daf8333b-e53f-41d7-8a15-8cfb489bd8ec c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Received event network-vif-unplugged-f4095bc2-be91-4b88-adee-fb762fd4a421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.545 239942 DEBUG oslo_concurrency.lockutils [req-0b2367ec-50a2-4944-8662-809d4253a677 req-daf8333b-e53f-41d7-8a15-8cfb489bd8ec c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.546 239942 DEBUG oslo_concurrency.lockutils [req-0b2367ec-50a2-4944-8662-809d4253a677 req-daf8333b-e53f-41d7-8a15-8cfb489bd8ec c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:57:32 np0005603435 systemd[1]: run-netns-ovnmeta\x2da10d9666\x2db672\x2d4619\x2d83b7\x2d22dc781b5b5b.mount: Deactivated successfully.
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.546 239942 DEBUG oslo_concurrency.lockutils [req-0b2367ec-50a2-4944-8662-809d4253a677 req-daf8333b-e53f-41d7-8a15-8cfb489bd8ec c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.547 239942 DEBUG nova.compute.manager [req-0b2367ec-50a2-4944-8662-809d4253a677 req-daf8333b-e53f-41d7-8a15-8cfb489bd8ec c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] No waiting events found dispatching network-vif-unplugged-f4095bc2-be91-4b88-adee-fb762fd4a421 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.547 239942 DEBUG nova.compute.manager [req-0b2367ec-50a2-4944-8662-809d4253a677 req-daf8333b-e53f-41d7-8a15-8cfb489bd8ec c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Received event network-vif-unplugged-f4095bc2-be91-4b88-adee-fb762fd4a421 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:57:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:32.547 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a10d9666-b672-4619-83b7-22dc781b5b5b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:57:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:32.547 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0eb339-6cc5-464b-90b5-3c472c25b10e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.573 239942 INFO nova.virt.libvirt.driver [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Deleting instance files /var/lib/nova/instances/46143cbc-0ca2-4cea-bc49-98861e82728b_del#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.575 239942 INFO nova.virt.libvirt.driver [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Deletion of /var/lib/nova/instances/46143cbc-0ca2-4cea-bc49-98861e82728b_del complete#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.629 239942 INFO nova.compute.manager [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Took 0.54 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.630 239942 DEBUG oslo.service.loopingcall [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.630 239942 DEBUG nova.compute.manager [-] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:57:32 np0005603435 nova_compute[239938]: 2026-01-31 04:57:32.630 239942 DEBUG nova.network.neutron [-] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:57:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:57:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3013570856' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:57:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:57:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3013570856' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:57:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:57:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:57:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3263867805' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:57:33 np0005603435 nova_compute[239938]: 2026-01-31 04:57:33.399 239942 DEBUG nova.network.neutron [-] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:57:33 np0005603435 nova_compute[239938]: 2026-01-31 04:57:33.428 239942 INFO nova.compute.manager [-] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Took 0.80 seconds to deallocate network for instance.#033[00m
Jan 30 23:57:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 352 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 20 KiB/s wr, 63 op/s
Jan 30 23:57:33 np0005603435 nova_compute[239938]: 2026-01-31 04:57:33.490 239942 DEBUG nova.compute.manager [req-c6583e46-a72e-4d0e-8ded-dde55b0a27eb req-349282c3-8b6b-4115-8214-637b0cd3b1d7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Received event network-vif-deleted-f4095bc2-be91-4b88-adee-fb762fd4a421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:57:33 np0005603435 nova_compute[239938]: 2026-01-31 04:57:33.606 239942 INFO nova.compute.manager [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Took 0.18 seconds to detach 1 volumes for instance.#033[00m
Jan 30 23:57:33 np0005603435 nova_compute[239938]: 2026-01-31 04:57:33.666 239942 DEBUG oslo_concurrency.lockutils [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:57:33 np0005603435 nova_compute[239938]: 2026-01-31 04:57:33.666 239942 DEBUG oslo_concurrency.lockutils [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:57:33 np0005603435 nova_compute[239938]: 2026-01-31 04:57:33.772 239942 DEBUG oslo_concurrency.processutils [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:57:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Jan 30 23:57:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Jan 30 23:57:33 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Jan 30 23:57:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:57:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2418785289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:57:34 np0005603435 nova_compute[239938]: 2026-01-31 04:57:34.333 239942 DEBUG oslo_concurrency.processutils [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:57:34 np0005603435 nova_compute[239938]: 2026-01-31 04:57:34.338 239942 DEBUG nova.compute.provider_tree [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:57:34 np0005603435 nova_compute[239938]: 2026-01-31 04:57:34.362 239942 DEBUG nova.scheduler.client.report [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:57:34 np0005603435 nova_compute[239938]: 2026-01-31 04:57:34.391 239942 DEBUG oslo_concurrency.lockutils [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:34 np0005603435 nova_compute[239938]: 2026-01-31 04:57:34.425 239942 INFO nova.scheduler.client.report [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Deleted allocations for instance 46143cbc-0ca2-4cea-bc49-98861e82728b#033[00m
Jan 30 23:57:34 np0005603435 nova_compute[239938]: 2026-01-31 04:57:34.548 239942 DEBUG oslo_concurrency.lockutils [None req-50194f32-00a6-4ec2-a33c-2a80a55abe68 27f1a6fb472c4c5fa2286d0fa48dca34 9b39f0e168b54a4b8f976894d21361e6 - - default default] Lock "46143cbc-0ca2-4cea-bc49-98861e82728b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:34 np0005603435 nova_compute[239938]: 2026-01-31 04:57:34.615 239942 DEBUG nova.compute.manager [req-5797611a-71cb-403c-a13c-adb514fca010 req-ef5154ed-5960-4cce-9147-418bb6272ac0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Received event network-vif-plugged-f4095bc2-be91-4b88-adee-fb762fd4a421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:57:34 np0005603435 nova_compute[239938]: 2026-01-31 04:57:34.616 239942 DEBUG oslo_concurrency.lockutils [req-5797611a-71cb-403c-a13c-adb514fca010 req-ef5154ed-5960-4cce-9147-418bb6272ac0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:57:34 np0005603435 nova_compute[239938]: 2026-01-31 04:57:34.617 239942 DEBUG oslo_concurrency.lockutils [req-5797611a-71cb-403c-a13c-adb514fca010 req-ef5154ed-5960-4cce-9147-418bb6272ac0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:57:34 np0005603435 nova_compute[239938]: 2026-01-31 04:57:34.617 239942 DEBUG oslo_concurrency.lockutils [req-5797611a-71cb-403c-a13c-adb514fca010 req-ef5154ed-5960-4cce-9147-418bb6272ac0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "46143cbc-0ca2-4cea-bc49-98861e82728b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:34 np0005603435 nova_compute[239938]: 2026-01-31 04:57:34.618 239942 DEBUG nova.compute.manager [req-5797611a-71cb-403c-a13c-adb514fca010 req-ef5154ed-5960-4cce-9147-418bb6272ac0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] No waiting events found dispatching network-vif-plugged-f4095bc2-be91-4b88-adee-fb762fd4a421 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:57:34 np0005603435 nova_compute[239938]: 2026-01-31 04:57:34.618 239942 WARNING nova.compute.manager [req-5797611a-71cb-403c-a13c-adb514fca010 req-ef5154ed-5960-4cce-9147-418bb6272ac0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Received unexpected event network-vif-plugged-f4095bc2-be91-4b88-adee-fb762fd4a421 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:57:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Jan 30 23:57:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Jan 30 23:57:34 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Jan 30 23:57:35 np0005603435 nova_compute[239938]: 2026-01-31 04:57:35.273 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 352 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 169 KiB/s rd, 3.0 KiB/s wr, 48 op/s
Jan 30 23:57:35 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:35.989 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:57:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:57:36 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4047246212' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:57:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:57:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:57:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:57:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:57:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:57:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:57:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:57:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4045308487' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:57:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:57:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4045308487' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:57:37 np0005603435 nova_compute[239938]: 2026-01-31 04:57:37.394 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 312 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 418 KiB/s rd, 4.9 KiB/s wr, 91 op/s
Jan 30 23:57:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Jan 30 23:57:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Jan 30 23:57:37 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Jan 30 23:57:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:57:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Jan 30 23:57:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Jan 30 23:57:38 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Jan 30 23:57:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 312 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 546 KiB/s rd, 3.5 KiB/s wr, 76 op/s
Jan 30 23:57:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Jan 30 23:57:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Jan 30 23:57:39 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Jan 30 23:57:40 np0005603435 nova_compute[239938]: 2026-01-31 04:57:40.276 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:40 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:40Z|00227|binding|INFO|Releasing lport 07e657c3-16d2-4095-9f39-32a275cb472e from this chassis (sb_readonly=0)
Jan 30 23:57:41 np0005603435 nova_compute[239938]: 2026-01-31 04:57:41.048 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 257 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 5.7 KiB/s wr, 119 op/s
Jan 30 23:57:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:57:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3901001715' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:57:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Jan 30 23:57:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Jan 30 23:57:41 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Jan 30 23:57:42 np0005603435 nova_compute[239938]: 2026-01-31 04:57:42.397 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:42 np0005603435 nova_compute[239938]: 2026-01-31 04:57:42.507 239942 DEBUG oslo_concurrency.lockutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "a3d46698-1b04-4df5-a957-0ba432667ada" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:57:42 np0005603435 nova_compute[239938]: 2026-01-31 04:57:42.508 239942 DEBUG oslo_concurrency.lockutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "a3d46698-1b04-4df5-a957-0ba432667ada" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:57:42 np0005603435 nova_compute[239938]: 2026-01-31 04:57:42.535 239942 DEBUG nova.compute.manager [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:57:42 np0005603435 nova_compute[239938]: 2026-01-31 04:57:42.628 239942 DEBUG oslo_concurrency.lockutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:57:42 np0005603435 nova_compute[239938]: 2026-01-31 04:57:42.628 239942 DEBUG oslo_concurrency.lockutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:57:42 np0005603435 nova_compute[239938]: 2026-01-31 04:57:42.638 239942 DEBUG nova.virt.hardware [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:57:42 np0005603435 nova_compute[239938]: 2026-01-31 04:57:42.638 239942 INFO nova.compute.claims [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:57:42 np0005603435 nova_compute[239938]: 2026-01-31 04:57:42.774 239942 DEBUG oslo_concurrency.processutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:57:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Jan 30 23:57:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Jan 30 23:57:42 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Jan 30 23:57:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:57:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:57:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3931283716' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.342 239942 DEBUG oslo_concurrency.processutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.349 239942 DEBUG nova.compute.provider_tree [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.382 239942 DEBUG nova.scheduler.client.report [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.418 239942 DEBUG oslo_concurrency.lockutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.419 239942 DEBUG nova.compute.manager [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.474 239942 DEBUG nova.compute.manager [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.475 239942 DEBUG nova.network.neutron [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:57:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 169 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 616 KiB/s rd, 21 KiB/s wr, 213 op/s
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.500 239942 INFO nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.525 239942 DEBUG nova.compute.manager [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.587 239942 INFO nova.virt.block_device [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Booting with volume b655bafa-a97d-41fb-8340-5edc19428628 at /dev/vda#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.744 239942 DEBUG os_brick.utils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.746 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.757 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.758 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[8c6f9435-f6d5-4713-a795-1b82db63e5a9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.759 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.767 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.768 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[00444d43-3659-44af-8b26-8bd05abaa39b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.770 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.778 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.778 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[add2aea7-2b43-4509-863a-34caedeee7b2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.780 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[8a063b25-120a-4735-a88b-916f355ed33a]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.781 239942 DEBUG oslo_concurrency.processutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.804 239942 DEBUG oslo_concurrency.processutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.807 239942 DEBUG os_brick.initiator.connectors.lightos [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.807 239942 DEBUG os_brick.initiator.connectors.lightos [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.808 239942 DEBUG os_brick.initiator.connectors.lightos [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.809 239942 DEBUG os_brick.utils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] <== get_connector_properties: return (63ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:57:43 np0005603435 nova_compute[239938]: 2026-01-31 04:57:43.809 239942 DEBUG nova.virt.block_device [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Updating existing volume attachment record: 585a5209-332c-4144-85e4-5c902df6c49c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:57:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Jan 30 23:57:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Jan 30 23:57:43 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Jan 30 23:57:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:57:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2958699035' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:57:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:57:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2958699035' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:57:44 np0005603435 nova_compute[239938]: 2026-01-31 04:57:44.413 239942 DEBUG nova.policy [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e10f13b98624406985dec6a5dcc391c7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:57:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:57:44 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3249763905' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:57:45 np0005603435 nova_compute[239938]: 2026-01-31 04:57:45.015 239942 DEBUG nova.compute.manager [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:57:45 np0005603435 nova_compute[239938]: 2026-01-31 04:57:45.017 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:57:45 np0005603435 nova_compute[239938]: 2026-01-31 04:57:45.018 239942 INFO nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Creating image(s)#033[00m
Jan 30 23:57:45 np0005603435 nova_compute[239938]: 2026-01-31 04:57:45.018 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 30 23:57:45 np0005603435 nova_compute[239938]: 2026-01-31 04:57:45.019 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Ensure instance console log exists: /var/lib/nova/instances/a3d46698-1b04-4df5-a957-0ba432667ada/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:57:45 np0005603435 nova_compute[239938]: 2026-01-31 04:57:45.019 239942 DEBUG oslo_concurrency.lockutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:57:45 np0005603435 nova_compute[239938]: 2026-01-31 04:57:45.020 239942 DEBUG oslo_concurrency.lockutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:57:45 np0005603435 nova_compute[239938]: 2026-01-31 04:57:45.020 239942 DEBUG oslo_concurrency.lockutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:45 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:45Z|00228|binding|INFO|Releasing lport 07e657c3-16d2-4095-9f39-32a275cb472e from this chassis (sb_readonly=0)
Jan 30 23:57:45 np0005603435 nova_compute[239938]: 2026-01-31 04:57:45.140 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:45 np0005603435 nova_compute[239938]: 2026-01-31 04:57:45.279 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 169 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 551 KiB/s rd, 20 KiB/s wr, 232 op/s
Jan 30 23:57:45 np0005603435 nova_compute[239938]: 2026-01-31 04:57:45.504 239942 DEBUG nova.network.neutron [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Successfully created port: d8f56e56-02d6-43e2-afae-1f5610a67fb9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:57:46 np0005603435 nova_compute[239938]: 2026-01-31 04:57:46.722 239942 DEBUG nova.network.neutron [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Successfully updated port: d8f56e56-02d6-43e2-afae-1f5610a67fb9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:57:46 np0005603435 nova_compute[239938]: 2026-01-31 04:57:46.740 239942 DEBUG oslo_concurrency.lockutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "refresh_cache-a3d46698-1b04-4df5-a957-0ba432667ada" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:57:46 np0005603435 nova_compute[239938]: 2026-01-31 04:57:46.740 239942 DEBUG oslo_concurrency.lockutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquired lock "refresh_cache-a3d46698-1b04-4df5-a957-0ba432667ada" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:57:46 np0005603435 nova_compute[239938]: 2026-01-31 04:57:46.740 239942 DEBUG nova.network.neutron [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:57:46 np0005603435 nova_compute[239938]: 2026-01-31 04:57:46.826 239942 DEBUG nova.compute.manager [req-c2e1c2c5-209e-49bb-88dd-20a18bed8ff7 req-2e4d8fc5-3017-4f74-88e0-281e21c2c59f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Received event network-changed-d8f56e56-02d6-43e2-afae-1f5610a67fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:57:46 np0005603435 nova_compute[239938]: 2026-01-31 04:57:46.827 239942 DEBUG nova.compute.manager [req-c2e1c2c5-209e-49bb-88dd-20a18bed8ff7 req-2e4d8fc5-3017-4f74-88e0-281e21c2c59f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Refreshing instance network info cache due to event network-changed-d8f56e56-02d6-43e2-afae-1f5610a67fb9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:57:46 np0005603435 nova_compute[239938]: 2026-01-31 04:57:46.828 239942 DEBUG oslo_concurrency.lockutils [req-c2e1c2c5-209e-49bb-88dd-20a18bed8ff7 req-2e4d8fc5-3017-4f74-88e0-281e21c2c59f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-a3d46698-1b04-4df5-a957-0ba432667ada" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:57:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:57:47 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3556486697' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:57:47 np0005603435 nova_compute[239938]: 2026-01-31 04:57:47.306 239942 DEBUG nova.network.neutron [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:57:47 np0005603435 nova_compute[239938]: 2026-01-31 04:57:47.363 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835452.3623552, 46143cbc-0ca2-4cea-bc49-98861e82728b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:57:47 np0005603435 nova_compute[239938]: 2026-01-31 04:57:47.364 239942 INFO nova.compute.manager [-] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:57:47 np0005603435 nova_compute[239938]: 2026-01-31 04:57:47.386 239942 DEBUG nova.compute.manager [None req-6ecbda94-6372-453c-8305-1c54f7e71742 - - - - - -] [instance: 46143cbc-0ca2-4cea-bc49-98861e82728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:57:47 np0005603435 nova_compute[239938]: 2026-01-31 04:57:47.406 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 169 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 503 KiB/s rd, 19 KiB/s wr, 209 op/s
Jan 30 23:57:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Jan 30 23:57:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Jan 30 23:57:47 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Jan 30 23:57:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:57:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Jan 30 23:57:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Jan 30 23:57:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Jan 30 23:57:48 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:57:48 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 7097 writes, 32K keys, 7097 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
Cumulative WAL: 7097 writes, 7097 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2071 writes, 9721 keys, 2071 commit groups, 1.0 writes per commit group, ingest: 12.25 MB, 0.02 MB/s
Interval WAL: 2071 writes, 2071 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     58.0      0.64              0.11        17    0.038       0      0       0.0       0.0
  L6      1/0    9.99 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.5    104.0     85.9      1.50              0.39        16    0.094     80K   9491       0.0       0.0
 Sum      1/0    9.99 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.5     72.9     77.6      2.14              0.50        33    0.065     80K   9491       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.8     82.2     86.1      0.71              0.18        10    0.071     31K   3677       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    104.0     85.9      1.50              0.39        16    0.094     80K   9491       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     60.4      0.61              0.11        16    0.038       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.1      0.03              0.00         1    0.027       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 2400.1 total, 600.0 interval
Flush(GB): cumulative 0.036, interval 0.012
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.16 GB write, 0.07 MB/s write, 0.15 GB read, 0.06 MB/s read, 2.1 seconds
Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.7 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5573585118d0#2 capacity: 304.00 MB usage: 18.49 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000216 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1196,17.81 MB,5.85799%) FilterBlock(34,236.86 KB,0.0760882%) IndexBlock(34,460.73 KB,0.148005%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.665 239942 DEBUG nova.network.neutron [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Updating instance_info_cache with network_info: [{"id": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "address": "fa:16:3e:44:64:15", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f56e56-02", "ovs_interfaceid": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.683 239942 DEBUG oslo_concurrency.lockutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Releasing lock "refresh_cache-a3d46698-1b04-4df5-a957-0ba432667ada" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.683 239942 DEBUG nova.compute.manager [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Instance network_info: |[{"id": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "address": "fa:16:3e:44:64:15", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f56e56-02", "ovs_interfaceid": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.684 239942 DEBUG oslo_concurrency.lockutils [req-c2e1c2c5-209e-49bb-88dd-20a18bed8ff7 req-2e4d8fc5-3017-4f74-88e0-281e21c2c59f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-a3d46698-1b04-4df5-a957-0ba432667ada" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.684 239942 DEBUG nova.network.neutron [req-c2e1c2c5-209e-49bb-88dd-20a18bed8ff7 req-2e4d8fc5-3017-4f74-88e0-281e21c2c59f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Refreshing network info cache for port d8f56e56-02d6-43e2-afae-1f5610a67fb9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.690 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Start _get_guest_xml network_info=[{"id": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "address": "fa:16:3e:44:64:15", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f56e56-02", "ovs_interfaceid": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'attachment_id': '585a5209-332c-4144-85e4-5c902df6c49c', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b655bafa-a97d-41fb-8340-5edc19428628', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b655bafa-a97d-41fb-8340-5edc19428628', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a3d46698-1b04-4df5-a957-0ba432667ada', 'attached_at': '', 'detached_at': '', 'volume_id': 'b655bafa-a97d-41fb-8340-5edc19428628', 'serial': 'b655bafa-a97d-41fb-8340-5edc19428628'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.696 239942 WARNING nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.708 239942 DEBUG nova.virt.libvirt.host [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.709 239942 DEBUG nova.virt.libvirt.host [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.713 239942 DEBUG nova.virt.libvirt.host [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.714 239942 DEBUG nova.virt.libvirt.host [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.715 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.716 239942 DEBUG nova.virt.hardware [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.716 239942 DEBUG nova.virt.hardware [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.717 239942 DEBUG nova.virt.hardware [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.717 239942 DEBUG nova.virt.hardware [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.718 239942 DEBUG nova.virt.hardware [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.718 239942 DEBUG nova.virt.hardware [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.719 239942 DEBUG nova.virt.hardware [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.719 239942 DEBUG nova.virt.hardware [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.720 239942 DEBUG nova.virt.hardware [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.720 239942 DEBUG nova.virt.hardware [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.720 239942 DEBUG nova.virt.hardware [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.755 239942 DEBUG nova.storage.rbd_utils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image a3d46698-1b04-4df5-a957-0ba432667ada_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:57:48 np0005603435 nova_compute[239938]: 2026-01-31 04:57:48.761 239942 DEBUG oslo_concurrency.processutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:57:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:57:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/565060658' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.332 239942 DEBUG oslo_concurrency.processutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.370 239942 DEBUG nova.virt.libvirt.vif [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:57:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-92792047',display_name='tempest-TestVolumeBootPattern-server-92792047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-92792047',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMCGQWsMIpUReejiJa4LLn2uTMRcPNVUKy3r7lp0BAh1r0nLhjEfcHskPuueezEtVAWbrIlq/WV3PYQ0vKGreYOPxpY3Xnz3OjrpOhX/Q6AIWXZTJpS2jBEA3mt0kVgrg==',key_name='tempest-TestVolumeBootPattern-1354425942',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-i88d4whl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:57:43Z,user_data=None,user_id='e10f13b98624406985dec6a5dcc391c7',uuid=a3d46698-1b04-4df5-a957-0ba432667ada,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "address": "fa:16:3e:44:64:15", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f56e56-02", "ovs_interfaceid": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.371 239942 DEBUG nova.network.os_vif_util [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "address": "fa:16:3e:44:64:15", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f56e56-02", "ovs_interfaceid": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.372 239942 DEBUG nova.network.os_vif_util [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:64:15,bridge_name='br-int',has_traffic_filtering=True,id=d8f56e56-02d6-43e2-afae-1f5610a67fb9,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f56e56-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.374 239942 DEBUG nova.objects.instance [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lazy-loading 'pci_devices' on Instance uuid a3d46698-1b04-4df5-a957-0ba432667ada obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.396 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  <uuid>a3d46698-1b04-4df5-a957-0ba432667ada</uuid>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  <name>instance-00000018</name>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <nova:name>tempest-TestVolumeBootPattern-server-92792047</nova:name>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:57:48</nova:creationTime>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:        <nova:user uuid="e10f13b98624406985dec6a5dcc391c7">tempest-TestVolumeBootPattern-1782423025-project-member</nova:user>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:        <nova:project uuid="e332802dd6cf49c59f8ed38e70addb0e">tempest-TestVolumeBootPattern-1782423025</nova:project>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:        <nova:port uuid="d8f56e56-02d6-43e2-afae-1f5610a67fb9">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <entry name="serial">a3d46698-1b04-4df5-a957-0ba432667ada</entry>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <entry name="uuid">a3d46698-1b04-4df5-a957-0ba432667ada</entry>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/a3d46698-1b04-4df5-a957-0ba432667ada_disk.config">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="volumes/volume-b655bafa-a97d-41fb-8340-5edc19428628">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <serial>b655bafa-a97d-41fb-8340-5edc19428628</serial>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:44:64:15"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <target dev="tapd8f56e56-02"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/a3d46698-1b04-4df5-a957-0ba432667ada/console.log" append="off"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:57:49 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:57:49 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:57:49 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:57:49 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
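The domain XML emitted by `_get_guest_xml` above can be inspected with standard tooling; as a sketch (not Nova code), the RBD-backed disks can be pulled out of an abridged copy with `xml.etree.ElementTree`:

```python
import xml.etree.ElementTree as ET

# Abridged from the libvirt domain XML logged above: the SATA config-drive
# cdrom and the virtio boot disk backed by the Cinder volume.
domain_xml = """
<domain type="kvm">
  <name>instance-00000018</name>
  <devices>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/a3d46698-1b04-4df5-a957-0ba432667ada_disk.config"/>
      <target dev="sda" bus="sata"/>
    </disk>
    <disk type="network" device="disk">
      <source protocol="rbd" name="volumes/volume-b655bafa-a97d-41fb-8340-5edc19428628"/>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
"""

root = ET.fromstring(domain_xml)
# Map each guest device name to its RBD image (pool/image).
disks = {
    d.find("target").get("dev"): d.find("source").get("name")
    for d in root.iter("disk")
}
print(disks["vda"])
```

Here `vda` is the Cinder boot volume (consistent with the volume-backed TestVolumeBootPattern run) and `sda` is the config drive imported into the `vms` pool later in the log.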
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.397 239942 DEBUG nova.compute.manager [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Preparing to wait for external event network-vif-plugged-d8f56e56-02d6-43e2-afae-1f5610a67fb9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.398 239942 DEBUG oslo_concurrency.lockutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.398 239942 DEBUG oslo_concurrency.lockutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.399 239942 DEBUG oslo_concurrency.lockutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.400 239942 DEBUG nova.virt.libvirt.vif [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:57:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-92792047',display_name='tempest-TestVolumeBootPattern-server-92792047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-92792047',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMCGQWsMIpUReejiJa4LLn2uTMRcPNVUKy3r7lp0BAh1r0nLhjEfcHskPuueezEtVAWbrIlq/WV3PYQ0vKGreYOPxpY3Xnz3OjrpOhX/Q6AIWXZTJpS2jBEA3mt0kVgrg==',key_name='tempest-TestVolumeBootPattern-1354425942',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-i88d4whl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:57:43Z,user_data=None,user_id='e10f13b98624406985dec6a5dcc391c7',uuid=a3d46698-1b04-4df5-a957-0ba432667ada,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "address": "fa:16:3e:44:64:15", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f56e56-02", "ovs_interfaceid": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.400 239942 DEBUG nova.network.os_vif_util [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "address": "fa:16:3e:44:64:15", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f56e56-02", "ovs_interfaceid": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.401 239942 DEBUG nova.network.os_vif_util [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:64:15,bridge_name='br-int',has_traffic_filtering=True,id=d8f56e56-02d6-43e2-afae-1f5610a67fb9,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f56e56-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.402 239942 DEBUG os_vif [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:64:15,bridge_name='br-int',has_traffic_filtering=True,id=d8f56e56-02d6-43e2-afae-1f5610a67fb9,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f56e56-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.403 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.404 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.404 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.407 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.408 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8f56e56-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.409 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8f56e56-02, col_values=(('external_ids', {'iface-id': 'd8f56e56-02d6-43e2-afae-1f5610a67fb9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:64:15', 'vm-uuid': 'a3d46698-1b04-4df5-a957-0ba432667ada'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:57:49 np0005603435 NetworkManager[49097]: <info>  [1769835469.4115] manager: (tapd8f56e56-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.410 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.413 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.417 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.418 239942 INFO os_vif [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:64:15,bridge_name='br-int',has_traffic_filtering=True,id=d8f56e56-02d6-43e2-afae-1f5610a67fb9,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f56e56-02')#033[00m
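The `AddPortCommand`/`DbSetCommand` transaction that os-vif ran above is roughly equivalent to a single `ovs-vsctl` invocation. This sketch only builds the argument list (it does not talk to OVS, and the exact flags os-vif passes may differ):

```python
def vsctl_plug_args(bridge, port, iface_id, mac, vm_uuid):
    """Build an ovs-vsctl command equivalent to the logged transaction:
    add the tap port to br-int, then set the Interface external_ids that
    OVN uses to match the logical port."""
    ext_ids = {
        "iface-id": iface_id,
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": vm_uuid,
    }
    args = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
            "--", "set", "Interface", port]
    args += [f"external_ids:{k}={v}" for k, v in ext_ids.items()]
    return args

# Values taken from the DbSetCommand entry above.
cmd = vsctl_plug_args(
    "br-int", "tapd8f56e56-02",
    "d8f56e56-02d6-43e2-afae-1f5610a67fb9",
    "fa:16:3e:44:64:15",
    "a3d46698-1b04-4df5-a957-0ba432667ada",
)
print(" ".join(cmd[:5]))
```

It is the `iface-id` external_id set here that lets ovn-controller claim the lport for this chassis a few entries later.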
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.473 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.474 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.474 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] No VIF found with MAC fa:16:3e:44:64:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.474 239942 INFO nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Using config drive#033[00m
Jan 30 23:57:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 169 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 5.8 KiB/s wr, 102 op/s
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.496 239942 DEBUG nova.storage.rbd_utils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image a3d46698-1b04-4df5-a957-0ba432667ada_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.837 239942 INFO nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Creating config drive at /var/lib/nova/instances/a3d46698-1b04-4df5-a957-0ba432667ada/disk.config#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.843 239942 DEBUG oslo_concurrency.processutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a3d46698-1b04-4df5-a957-0ba432667ada/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_3q3nhhj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:57:49 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:49Z|00229|binding|INFO|Releasing lport 07e657c3-16d2-4095-9f39-32a275cb472e from this chassis (sb_readonly=0)
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.920 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:49 np0005603435 nova_compute[239938]: 2026-01-31 04:57:49.974 239942 DEBUG oslo_concurrency.processutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a3d46698-1b04-4df5-a957-0ba432667ada/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_3q3nhhj" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.007 239942 DEBUG nova.storage.rbd_utils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] rbd image a3d46698-1b04-4df5-a957-0ba432667ada_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.011 239942 DEBUG oslo_concurrency.processutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a3d46698-1b04-4df5-a957-0ba432667ada/disk.config a3d46698-1b04-4df5-a957-0ba432667ada_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.162 239942 DEBUG oslo_concurrency.processutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a3d46698-1b04-4df5-a957-0ba432667ada/disk.config a3d46698-1b04-4df5-a957-0ba432667ada_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.164 239942 INFO nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Deleting local config drive /var/lib/nova/instances/a3d46698-1b04-4df5-a957-0ba432667ada/disk.config because it was imported into RBD.#033[00m
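The config-drive flow just logged (build the ISO locally with mkisofs, `rbd import` it into the `vms` pool, delete the local copy) can be sketched by reconstructing the two command lines from the log. Nothing is executed here, and the tmpdir path is the one-off temp directory from this run:

```python
# Command lists mirroring the mkisofs and rbd import invocations logged above.
instance = "a3d46698-1b04-4df5-a957-0ba432667ada"
iso = f"/var/lib/nova/instances/{instance}/disk.config"

mkisofs_cmd = [
    "/usr/bin/mkisofs", "-o", iso,
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9",
    "-quiet", "-J", "-r",
    "-V", "config-2",          # volume label cloud-init looks for
    "/tmp/tmp_3q3nhhj",        # temp dir holding the metadata files
]

rbd_import_cmd = [
    "rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
    "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
]

print(rbd_import_cmd[3])  # vms
```

After the import succeeds the local ISO is redundant, which is why the driver deletes it, and the guest reads the config drive as the RBD-backed `sda` cdrom from the domain XML.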
Jan 30 23:57:50 np0005603435 kernel: tapd8f56e56-02: entered promiscuous mode
Jan 30 23:57:50 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:50Z|00230|binding|INFO|Claiming lport d8f56e56-02d6-43e2-afae-1f5610a67fb9 for this chassis.
Jan 30 23:57:50 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:50Z|00231|binding|INFO|d8f56e56-02d6-43e2-afae-1f5610a67fb9: Claiming fa:16:3e:44:64:15 10.100.0.13
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.216 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:50 np0005603435 NetworkManager[49097]: <info>  [1769835470.2180] manager: (tapd8f56e56-02): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Jan 30 23:57:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:50.225 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:64:15 10.100.0.13'], port_security=['fa:16:3e:44:64:15 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a3d46698-1b04-4df5-a957-0ba432667ada', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5925722f-3c3e-42bd-9802-ef7105d62a1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02525b3d-5afa-441f-ab06-d5abe31dc4af, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=d8f56e56-02d6-43e2-afae-1f5610a67fb9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:57:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:50.227 156017 INFO neutron.agent.ovn.metadata.agent [-] Port d8f56e56-02d6-43e2-afae-1f5610a67fb9 in datapath 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 bound to our chassis#033[00m
Jan 30 23:57:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:50.230 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3#033[00m
Jan 30 23:57:50 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:50Z|00232|binding|INFO|Setting lport d8f56e56-02d6-43e2-afae-1f5610a67fb9 ovn-installed in OVS
Jan 30 23:57:50 np0005603435 ovn_controller[145670]: 2026-01-31T04:57:50Z|00233|binding|INFO|Setting lport d8f56e56-02d6-43e2-afae-1f5610a67fb9 up in Southbound
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.233 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:50.248 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b43f1535-ce80-4288-9328-7e14972f9d19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:50 np0005603435 systemd-machined[208030]: New machine qemu-24-instance-00000018.
Jan 30 23:57:50 np0005603435 systemd[1]: Started Virtual Machine qemu-24-instance-00000018.
Jan 30 23:57:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:50.279 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[d337fe8b-758e-4f6b-a761-765070369187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:50 np0005603435 systemd-udevd[268082]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.281 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:50.284 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[263fc34a-e926-4a28-8a2f-aef4378e85d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:50 np0005603435 NetworkManager[49097]: <info>  [1769835470.2941] device (tapd8f56e56-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:57:50 np0005603435 NetworkManager[49097]: <info>  [1769835470.2949] device (tapd8f56e56-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.316 239942 DEBUG nova.network.neutron [req-c2e1c2c5-209e-49bb-88dd-20a18bed8ff7 req-2e4d8fc5-3017-4f74-88e0-281e21c2c59f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Updated VIF entry in instance network info cache for port d8f56e56-02d6-43e2-afae-1f5610a67fb9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.317 239942 DEBUG nova.network.neutron [req-c2e1c2c5-209e-49bb-88dd-20a18bed8ff7 req-2e4d8fc5-3017-4f74-88e0-281e21c2c59f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Updating instance_info_cache with network_info: [{"id": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "address": "fa:16:3e:44:64:15", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f56e56-02", "ovs_interfaceid": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:57:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:50.322 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[fca52850-0b0c-486f-befa-ec18d6d0f6cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.339 239942 DEBUG oslo_concurrency.lockutils [req-c2e1c2c5-209e-49bb-88dd-20a18bed8ff7 req-2e4d8fc5-3017-4f74-88e0-281e21c2c59f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-a3d46698-1b04-4df5-a957-0ba432667ada" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:57:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:50.342 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b67e53-f37b-420f-86e0-90334b30c260]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b0cf2db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:f7:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447445, 'reachable_time': 41879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268092, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:50.358 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[e8332eb1-b2a4-4b34-841f-086d80a2e2ae]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5b0cf2db-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447458, 'tstamp': 447458}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268093, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5b0cf2db-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447460, 'tstamp': 447460}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268093, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:57:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:50.360 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b0cf2db-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.362 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.363 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:50.364 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b0cf2db-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:57:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:50.364 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:57:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:50.365 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b0cf2db-20, col_values=(('external_ids', {'iface-id': '07e657c3-16d2-4095-9f39-32a275cb472e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:57:50 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:50.365 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.596 239942 DEBUG nova.compute.manager [req-f96632a4-1eee-4433-b2f4-ea706907aec6 req-e4fc58d8-f802-4bc1-95a2-814835f3c666 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Received event network-vif-plugged-d8f56e56-02d6-43e2-afae-1f5610a67fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.597 239942 DEBUG oslo_concurrency.lockutils [req-f96632a4-1eee-4433-b2f4-ea706907aec6 req-e4fc58d8-f802-4bc1-95a2-814835f3c666 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.597 239942 DEBUG oslo_concurrency.lockutils [req-f96632a4-1eee-4433-b2f4-ea706907aec6 req-e4fc58d8-f802-4bc1-95a2-814835f3c666 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.598 239942 DEBUG oslo_concurrency.lockutils [req-f96632a4-1eee-4433-b2f4-ea706907aec6 req-e4fc58d8-f802-4bc1-95a2-814835f3c666 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.598 239942 DEBUG nova.compute.manager [req-f96632a4-1eee-4433-b2f4-ea706907aec6 req-e4fc58d8-f802-4bc1-95a2-814835f3c666 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Processing event network-vif-plugged-d8f56e56-02d6-43e2-afae-1f5610a67fb9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.663 239942 DEBUG nova.compute.manager [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.664 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835470.663791, a3d46698-1b04-4df5-a957-0ba432667ada => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.664 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] VM Started (Lifecycle Event)#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.670 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.674 239942 INFO nova.virt.libvirt.driver [-] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Instance spawned successfully.#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.674 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.686 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.694 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.699 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.699 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.700 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.700 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.700 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.701 239942 DEBUG nova.virt.libvirt.driver [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.732 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.732 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835470.6647706, a3d46698-1b04-4df5-a957-0ba432667ada => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.733 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.762 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.766 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835470.6690001, a3d46698-1b04-4df5-a957-0ba432667ada => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.766 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.776 239942 INFO nova.compute.manager [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Took 5.76 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.777 239942 DEBUG nova.compute.manager [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.787 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.793 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.821 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.861 239942 INFO nova.compute.manager [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Took 8.27 seconds to build instance.#033[00m
Jan 30 23:57:50 np0005603435 nova_compute[239938]: 2026-01-31 04:57:50.882 239942 DEBUG oslo_concurrency.lockutils [None req-2198155f-5525-45a9-b0ce-bb2faee54ac6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "a3d46698-1b04-4df5-a957-0ba432667ada" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Jan 30 23:57:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Jan 30 23:57:51 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Jan 30 23:57:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:57:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2829660219' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:57:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:57:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2829660219' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:57:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 169 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.8 KiB/s wr, 68 op/s
Jan 30 23:57:52 np0005603435 nova_compute[239938]: 2026-01-31 04:57:52.686 239942 DEBUG nova.compute.manager [req-d3b94558-6b8f-413e-8778-de309676054b req-2120d539-d5fa-44b5-8691-d5a36185d5dc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Received event network-vif-plugged-d8f56e56-02d6-43e2-afae-1f5610a67fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:57:52 np0005603435 nova_compute[239938]: 2026-01-31 04:57:52.687 239942 DEBUG oslo_concurrency.lockutils [req-d3b94558-6b8f-413e-8778-de309676054b req-2120d539-d5fa-44b5-8691-d5a36185d5dc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:57:52 np0005603435 nova_compute[239938]: 2026-01-31 04:57:52.688 239942 DEBUG oslo_concurrency.lockutils [req-d3b94558-6b8f-413e-8778-de309676054b req-2120d539-d5fa-44b5-8691-d5a36185d5dc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:57:52 np0005603435 nova_compute[239938]: 2026-01-31 04:57:52.688 239942 DEBUG oslo_concurrency.lockutils [req-d3b94558-6b8f-413e-8778-de309676054b req-2120d539-d5fa-44b5-8691-d5a36185d5dc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:52 np0005603435 nova_compute[239938]: 2026-01-31 04:57:52.689 239942 DEBUG nova.compute.manager [req-d3b94558-6b8f-413e-8778-de309676054b req-2120d539-d5fa-44b5-8691-d5a36185d5dc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] No waiting events found dispatching network-vif-plugged-d8f56e56-02d6-43e2-afae-1f5610a67fb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:57:52 np0005603435 nova_compute[239938]: 2026-01-31 04:57:52.689 239942 WARNING nova.compute.manager [req-d3b94558-6b8f-413e-8778-de309676054b req-2120d539-d5fa-44b5-8691-d5a36185d5dc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Received unexpected event network-vif-plugged-d8f56e56-02d6-43e2-afae-1f5610a67fb9 for instance with vm_state active and task_state None.#033[00m
Jan 30 23:57:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:57:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 169 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 28 KiB/s wr, 127 op/s
Jan 30 23:57:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:57:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3902479681' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:57:54 np0005603435 nova_compute[239938]: 2026-01-31 04:57:54.411 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Jan 30 23:57:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Jan 30 23:57:54 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Jan 30 23:57:55 np0005603435 nova_compute[239938]: 2026-01-31 04:57:55.285 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 169 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 24 KiB/s wr, 155 op/s
Jan 30 23:57:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Jan 30 23:57:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Jan 30 23:57:55 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Jan 30 23:57:55 np0005603435 nova_compute[239938]: 2026-01-31 04:57:55.698 239942 DEBUG nova.compute.manager [req-1a5eb46a-2b82-4da6-9f56-9d3809548a0c req-47984b0d-f40a-49be-80d3-c45674bc2808 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Received event network-changed-d8f56e56-02d6-43e2-afae-1f5610a67fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:57:55 np0005603435 nova_compute[239938]: 2026-01-31 04:57:55.699 239942 DEBUG nova.compute.manager [req-1a5eb46a-2b82-4da6-9f56-9d3809548a0c req-47984b0d-f40a-49be-80d3-c45674bc2808 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Refreshing instance network info cache due to event network-changed-d8f56e56-02d6-43e2-afae-1f5610a67fb9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:57:55 np0005603435 nova_compute[239938]: 2026-01-31 04:57:55.699 239942 DEBUG oslo_concurrency.lockutils [req-1a5eb46a-2b82-4da6-9f56-9d3809548a0c req-47984b0d-f40a-49be-80d3-c45674bc2808 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-a3d46698-1b04-4df5-a957-0ba432667ada" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:57:55 np0005603435 nova_compute[239938]: 2026-01-31 04:57:55.699 239942 DEBUG oslo_concurrency.lockutils [req-1a5eb46a-2b82-4da6-9f56-9d3809548a0c req-47984b0d-f40a-49be-80d3-c45674bc2808 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-a3d46698-1b04-4df5-a957-0ba432667ada" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:57:55 np0005603435 nova_compute[239938]: 2026-01-31 04:57:55.699 239942 DEBUG nova.network.neutron [req-1a5eb46a-2b82-4da6-9f56-9d3809548a0c req-47984b0d-f40a-49be-80d3-c45674bc2808 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Refreshing network info cache for port d8f56e56-02d6-43e2-afae-1f5610a67fb9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:57:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:55.922 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:57:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:55.923 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:57:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:57:55.923 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:57:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Jan 30 23:57:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Jan 30 23:57:56 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Jan 30 23:57:56 np0005603435 nova_compute[239938]: 2026-01-31 04:57:56.826 239942 DEBUG nova.network.neutron [req-1a5eb46a-2b82-4da6-9f56-9d3809548a0c req-47984b0d-f40a-49be-80d3-c45674bc2808 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Updated VIF entry in instance network info cache for port d8f56e56-02d6-43e2-afae-1f5610a67fb9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:57:56 np0005603435 nova_compute[239938]: 2026-01-31 04:57:56.829 239942 DEBUG nova.network.neutron [req-1a5eb46a-2b82-4da6-9f56-9d3809548a0c req-47984b0d-f40a-49be-80d3-c45674bc2808 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Updating instance_info_cache with network_info: [{"id": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "address": "fa:16:3e:44:64:15", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f56e56-02", "ovs_interfaceid": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:57:56 np0005603435 nova_compute[239938]: 2026-01-31 04:57:56.856 239942 DEBUG oslo_concurrency.lockutils [req-1a5eb46a-2b82-4da6-9f56-9d3809548a0c req-47984b0d-f40a-49be-80d3-c45674bc2808 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-a3d46698-1b04-4df5-a957-0ba432667ada" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:57:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:57:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3501644792' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:57:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:57:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3501644792' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:57:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:57:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:57:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:57:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:57:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:57:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:57:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:57:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:57:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:57:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:57:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:57:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:57:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 169 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 31 KiB/s wr, 266 op/s
Jan 30 23:57:57 np0005603435 podman[268282]: 2026-01-31 04:57:57.577219953 +0000 UTC m=+0.060468107 container create 14b58d05c0777bbcfa8d83958222f234586aff3bdc00e851eb839f7d00105852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_bohr, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:57:57 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:57:57 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:57:57 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:57:57 np0005603435 systemd[1]: Started libpod-conmon-14b58d05c0777bbcfa8d83958222f234586aff3bdc00e851eb839f7d00105852.scope.
Jan 30 23:57:57 np0005603435 podman[268282]: 2026-01-31 04:57:57.547776314 +0000 UTC m=+0.031024498 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:57:57 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:57:57 np0005603435 podman[268282]: 2026-01-31 04:57:57.666878932 +0000 UTC m=+0.150127106 container init 14b58d05c0777bbcfa8d83958222f234586aff3bdc00e851eb839f7d00105852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 30 23:57:57 np0005603435 podman[268282]: 2026-01-31 04:57:57.674716753 +0000 UTC m=+0.157964907 container start 14b58d05c0777bbcfa8d83958222f234586aff3bdc00e851eb839f7d00105852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:57:57 np0005603435 podman[268282]: 2026-01-31 04:57:57.679355077 +0000 UTC m=+0.162603241 container attach 14b58d05c0777bbcfa8d83958222f234586aff3bdc00e851eb839f7d00105852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_bohr, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 30 23:57:57 np0005603435 kind_bohr[268299]: 167 167
Jan 30 23:57:57 np0005603435 systemd[1]: libpod-14b58d05c0777bbcfa8d83958222f234586aff3bdc00e851eb839f7d00105852.scope: Deactivated successfully.
Jan 30 23:57:57 np0005603435 conmon[268299]: conmon 14b58d05c0777bbcfa8d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14b58d05c0777bbcfa8d83958222f234586aff3bdc00e851eb839f7d00105852.scope/container/memory.events
Jan 30 23:57:57 np0005603435 podman[268282]: 2026-01-31 04:57:57.686116452 +0000 UTC m=+0.169364606 container died 14b58d05c0777bbcfa8d83958222f234586aff3bdc00e851eb839f7d00105852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_bohr, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 30 23:57:57 np0005603435 systemd[1]: var-lib-containers-storage-overlay-31a0b8753f7fcdd38fb37778f2c0adf7db3112bb828180fd6b0d0dcc55064e98-merged.mount: Deactivated successfully.
Jan 30 23:57:57 np0005603435 podman[268282]: 2026-01-31 04:57:57.722153392 +0000 UTC m=+0.205401536 container remove 14b58d05c0777bbcfa8d83958222f234586aff3bdc00e851eb839f7d00105852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_bohr, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 30 23:57:57 np0005603435 systemd[1]: libpod-conmon-14b58d05c0777bbcfa8d83958222f234586aff3bdc00e851eb839f7d00105852.scope: Deactivated successfully.
Jan 30 23:57:57 np0005603435 podman[268321]: 2026-01-31 04:57:57.926190613 +0000 UTC m=+0.064750162 container create c64cd58b74cb04ee22caaaf5d52d2284ff088809e33ee706b52a50d0e806cce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_fermat, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:57:57 np0005603435 systemd[1]: Started libpod-conmon-c64cd58b74cb04ee22caaaf5d52d2284ff088809e33ee706b52a50d0e806cce5.scope.
Jan 30 23:57:57 np0005603435 podman[268321]: 2026-01-31 04:57:57.898676081 +0000 UTC m=+0.037235670 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:57:57 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:57:58 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ddbd18024efb7ae19d9128a966a58ac1984dd90e3a14f9386d8bbb863964add/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:57:58 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ddbd18024efb7ae19d9128a966a58ac1984dd90e3a14f9386d8bbb863964add/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:57:58 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ddbd18024efb7ae19d9128a966a58ac1984dd90e3a14f9386d8bbb863964add/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:57:58 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ddbd18024efb7ae19d9128a966a58ac1984dd90e3a14f9386d8bbb863964add/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:57:58 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ddbd18024efb7ae19d9128a966a58ac1984dd90e3a14f9386d8bbb863964add/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:57:58 np0005603435 podman[268321]: 2026-01-31 04:57:58.019424759 +0000 UTC m=+0.157984368 container init c64cd58b74cb04ee22caaaf5d52d2284ff088809e33ee706b52a50d0e806cce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_fermat, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 30 23:57:58 np0005603435 podman[268321]: 2026-01-31 04:57:58.026283307 +0000 UTC m=+0.164842856 container start c64cd58b74cb04ee22caaaf5d52d2284ff088809e33ee706b52a50d0e806cce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_fermat, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:57:58 np0005603435 podman[268321]: 2026-01-31 04:57:58.031479564 +0000 UTC m=+0.170039173 container attach c64cd58b74cb04ee22caaaf5d52d2284ff088809e33ee706b52a50d0e806cce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_fermat, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 30 23:57:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:57:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Jan 30 23:57:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Jan 30 23:57:58 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Jan 30 23:57:58 np0005603435 heuristic_fermat[268338]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:57:58 np0005603435 heuristic_fermat[268338]: --> All data devices are unavailable
Jan 30 23:57:58 np0005603435 systemd[1]: libpod-c64cd58b74cb04ee22caaaf5d52d2284ff088809e33ee706b52a50d0e806cce5.scope: Deactivated successfully.
Jan 30 23:57:58 np0005603435 podman[268321]: 2026-01-31 04:57:58.598797155 +0000 UTC m=+0.737356664 container died c64cd58b74cb04ee22caaaf5d52d2284ff088809e33ee706b52a50d0e806cce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_fermat, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:57:58 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0ddbd18024efb7ae19d9128a966a58ac1984dd90e3a14f9386d8bbb863964add-merged.mount: Deactivated successfully.
Jan 30 23:57:58 np0005603435 podman[268321]: 2026-01-31 04:57:58.646592582 +0000 UTC m=+0.785152091 container remove c64cd58b74cb04ee22caaaf5d52d2284ff088809e33ee706b52a50d0e806cce5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_fermat, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 30 23:57:58 np0005603435 systemd[1]: libpod-conmon-c64cd58b74cb04ee22caaaf5d52d2284ff088809e33ee706b52a50d0e806cce5.scope: Deactivated successfully.
Jan 30 23:57:58 np0005603435 nova_compute[239938]: 2026-01-31 04:57:58.932 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:59 np0005603435 podman[268434]: 2026-01-31 04:57:59.117022657 +0000 UTC m=+0.055560777 container create 95dca1f8cab9ef89d6cd08f61cfcea01b4e18f777e6ecbfa561bb08e2fcb97ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:57:59 np0005603435 systemd[1]: Started libpod-conmon-95dca1f8cab9ef89d6cd08f61cfcea01b4e18f777e6ecbfa561bb08e2fcb97ce.scope.
Jan 30 23:57:59 np0005603435 podman[268434]: 2026-01-31 04:57:59.092402556 +0000 UTC m=+0.030940726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:57:59 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:57:59 np0005603435 podman[268434]: 2026-01-31 04:57:59.217857729 +0000 UTC m=+0.156395819 container init 95dca1f8cab9ef89d6cd08f61cfcea01b4e18f777e6ecbfa561bb08e2fcb97ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_sanderson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:57:59 np0005603435 podman[268434]: 2026-01-31 04:57:59.227212917 +0000 UTC m=+0.165751027 container start 95dca1f8cab9ef89d6cd08f61cfcea01b4e18f777e6ecbfa561bb08e2fcb97ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 30 23:57:59 np0005603435 podman[268434]: 2026-01-31 04:57:59.233535142 +0000 UTC m=+0.172073222 container attach 95dca1f8cab9ef89d6cd08f61cfcea01b4e18f777e6ecbfa561bb08e2fcb97ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_sanderson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:57:59 np0005603435 jolly_sanderson[268450]: 167 167
Jan 30 23:57:59 np0005603435 systemd[1]: libpod-95dca1f8cab9ef89d6cd08f61cfcea01b4e18f777e6ecbfa561bb08e2fcb97ce.scope: Deactivated successfully.
Jan 30 23:57:59 np0005603435 conmon[268450]: conmon 95dca1f8cab9ef89d6cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-95dca1f8cab9ef89d6cd08f61cfcea01b4e18f777e6ecbfa561bb08e2fcb97ce.scope/container/memory.events
Jan 30 23:57:59 np0005603435 podman[268434]: 2026-01-31 04:57:59.237125739 +0000 UTC m=+0.175663859 container died 95dca1f8cab9ef89d6cd08f61cfcea01b4e18f777e6ecbfa561bb08e2fcb97ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_sanderson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 30 23:57:59 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d1afc40ce3b8343fc623c03276ced0349cb11f97163f7fae71a1ef10eedca0fa-merged.mount: Deactivated successfully.
Jan 30 23:57:59 np0005603435 podman[268434]: 2026-01-31 04:57:59.278370467 +0000 UTC m=+0.216908577 container remove 95dca1f8cab9ef89d6cd08f61cfcea01b4e18f777e6ecbfa561bb08e2fcb97ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 30 23:57:59 np0005603435 systemd[1]: libpod-conmon-95dca1f8cab9ef89d6cd08f61cfcea01b4e18f777e6ecbfa561bb08e2fcb97ce.scope: Deactivated successfully.
Jan 30 23:57:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:57:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2398706503' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:57:59 np0005603435 nova_compute[239938]: 2026-01-31 04:57:59.413 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:57:59 np0005603435 podman[268475]: 2026-01-31 04:57:59.437628045 +0000 UTC m=+0.057853604 container create d74893f32f6ba9253635dc626b35746bba401adf288cda50e01c02fbb33dc718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_benz, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:57:59 np0005603435 systemd[1]: Started libpod-conmon-d74893f32f6ba9253635dc626b35746bba401adf288cda50e01c02fbb33dc718.scope.
Jan 30 23:57:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 169 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 KiB/s wr, 192 op/s
Jan 30 23:57:59 np0005603435 podman[268475]: 2026-01-31 04:57:59.411867756 +0000 UTC m=+0.032093415 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:57:59 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:57:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1f7cd75e264a10f57b8e15a20da068d5559cdb8722046675a3f2d1ccdc5b3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:57:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1f7cd75e264a10f57b8e15a20da068d5559cdb8722046675a3f2d1ccdc5b3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:57:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1f7cd75e264a10f57b8e15a20da068d5559cdb8722046675a3f2d1ccdc5b3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:57:59 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1f7cd75e264a10f57b8e15a20da068d5559cdb8722046675a3f2d1ccdc5b3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:57:59 np0005603435 podman[268475]: 2026-01-31 04:57:59.551418003 +0000 UTC m=+0.171643652 container init d74893f32f6ba9253635dc626b35746bba401adf288cda50e01c02fbb33dc718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_benz, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 30 23:57:59 np0005603435 podman[268475]: 2026-01-31 04:57:59.562918234 +0000 UTC m=+0.183143793 container start d74893f32f6ba9253635dc626b35746bba401adf288cda50e01c02fbb33dc718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 30 23:57:59 np0005603435 podman[268475]: 2026-01-31 04:57:59.566617904 +0000 UTC m=+0.186843543 container attach d74893f32f6ba9253635dc626b35746bba401adf288cda50e01c02fbb33dc718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]: {
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:    "0": [
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:        {
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "devices": [
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "/dev/loop3"
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            ],
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "lv_name": "ceph_lv0",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "lv_size": "21470642176",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "name": "ceph_lv0",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "tags": {
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.cluster_name": "ceph",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.crush_device_class": "",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.encrypted": "0",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.objectstore": "bluestore",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.osd_id": "0",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.type": "block",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.vdo": "0",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.with_tpm": "0"
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            },
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "type": "block",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "vg_name": "ceph_vg0"
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:        }
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:    ],
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:    "1": [
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:        {
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "devices": [
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "/dev/loop4"
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            ],
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "lv_name": "ceph_lv1",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "lv_size": "21470642176",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "name": "ceph_lv1",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "tags": {
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.cluster_name": "ceph",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.crush_device_class": "",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.encrypted": "0",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.objectstore": "bluestore",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.osd_id": "1",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.type": "block",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.vdo": "0",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.with_tpm": "0"
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            },
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "type": "block",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "vg_name": "ceph_vg1"
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:        }
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:    ],
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:    "2": [
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:        {
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "devices": [
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "/dev/loop5"
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            ],
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "lv_name": "ceph_lv2",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "lv_size": "21470642176",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "name": "ceph_lv2",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "tags": {
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.cluster_name": "ceph",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.crush_device_class": "",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.encrypted": "0",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.objectstore": "bluestore",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.osd_id": "2",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.type": "block",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.vdo": "0",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:                "ceph.with_tpm": "0"
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            },
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "type": "block",
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:            "vg_name": "ceph_vg2"
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:        }
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]:    ]
Jan 30 23:57:59 np0005603435 compassionate_benz[268491]: }
Jan 30 23:57:59 np0005603435 systemd[1]: libpod-d74893f32f6ba9253635dc626b35746bba401adf288cda50e01c02fbb33dc718.scope: Deactivated successfully.
Jan 30 23:57:59 np0005603435 podman[268475]: 2026-01-31 04:57:59.884448074 +0000 UTC m=+0.504673643 container died d74893f32f6ba9253635dc626b35746bba401adf288cda50e01c02fbb33dc718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_benz, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:57:59 np0005603435 systemd[1]: var-lib-containers-storage-overlay-ff1f7cd75e264a10f57b8e15a20da068d5559cdb8722046675a3f2d1ccdc5b3c-merged.mount: Deactivated successfully.
Jan 30 23:57:59 np0005603435 podman[268475]: 2026-01-31 04:57:59.924694726 +0000 UTC m=+0.544920285 container remove d74893f32f6ba9253635dc626b35746bba401adf288cda50e01c02fbb33dc718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_benz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:57:59 np0005603435 systemd[1]: libpod-conmon-d74893f32f6ba9253635dc626b35746bba401adf288cda50e01c02fbb33dc718.scope: Deactivated successfully.
Jan 30 23:58:00 np0005603435 nova_compute[239938]: 2026-01-31 04:58:00.286 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:00 np0005603435 podman[268577]: 2026-01-31 04:58:00.417448106 +0000 UTC m=+0.064960847 container create b309bb0f081dad18b445c0f2a307b0d7a889be46ed5f2aa604206ec9c8759ae8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_maxwell, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:58:00 np0005603435 systemd[1]: Started libpod-conmon-b309bb0f081dad18b445c0f2a307b0d7a889be46ed5f2aa604206ec9c8759ae8.scope.
Jan 30 23:58:00 np0005603435 podman[268577]: 2026-01-31 04:58:00.387134286 +0000 UTC m=+0.034647087 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:58:00 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:58:00 np0005603435 podman[268577]: 2026-01-31 04:58:00.500897403 +0000 UTC m=+0.148410194 container init b309bb0f081dad18b445c0f2a307b0d7a889be46ed5f2aa604206ec9c8759ae8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_maxwell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:58:00 np0005603435 podman[268577]: 2026-01-31 04:58:00.508927869 +0000 UTC m=+0.156440620 container start b309bb0f081dad18b445c0f2a307b0d7a889be46ed5f2aa604206ec9c8759ae8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Jan 30 23:58:00 np0005603435 beautiful_maxwell[268594]: 167 167
Jan 30 23:58:00 np0005603435 systemd[1]: libpod-b309bb0f081dad18b445c0f2a307b0d7a889be46ed5f2aa604206ec9c8759ae8.scope: Deactivated successfully.
Jan 30 23:58:00 np0005603435 podman[268577]: 2026-01-31 04:58:00.512728312 +0000 UTC m=+0.160241063 container attach b309bb0f081dad18b445c0f2a307b0d7a889be46ed5f2aa604206ec9c8759ae8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:58:00 np0005603435 conmon[268594]: conmon b309bb0f081dad18b445 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b309bb0f081dad18b445c0f2a307b0d7a889be46ed5f2aa604206ec9c8759ae8.scope/container/memory.events
Jan 30 23:58:00 np0005603435 podman[268577]: 2026-01-31 04:58:00.51552077 +0000 UTC m=+0.163033481 container died b309bb0f081dad18b445c0f2a307b0d7a889be46ed5f2aa604206ec9c8759ae8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_maxwell, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:58:00 np0005603435 systemd[1]: var-lib-containers-storage-overlay-9a114b884a023f8686fd757270116159f3e0332dc1e08388b3b527f701d769ba-merged.mount: Deactivated successfully.
Jan 30 23:58:00 np0005603435 podman[268577]: 2026-01-31 04:58:00.551214262 +0000 UTC m=+0.198726973 container remove b309bb0f081dad18b445c0f2a307b0d7a889be46ed5f2aa604206ec9c8759ae8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Jan 30 23:58:00 np0005603435 systemd[1]: libpod-conmon-b309bb0f081dad18b445c0f2a307b0d7a889be46ed5f2aa604206ec9c8759ae8.scope: Deactivated successfully.
Jan 30 23:58:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Jan 30 23:58:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Jan 30 23:58:00 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Jan 30 23:58:00 np0005603435 podman[268616]: 2026-01-31 04:58:00.701997563 +0000 UTC m=+0.047533501 container create 505d8c755c76362e154233323abec9bf62b329b0e897378beaaa87ec20214935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:58:00 np0005603435 systemd[1]: Started libpod-conmon-505d8c755c76362e154233323abec9bf62b329b0e897378beaaa87ec20214935.scope.
Jan 30 23:58:00 np0005603435 podman[268616]: 2026-01-31 04:58:00.686792952 +0000 UTC m=+0.032328870 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:58:00 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:58:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b9bdcebd02b0c7b47325d75bb4c80120aa924021335240828a6ed9e69a45237/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:58:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b9bdcebd02b0c7b47325d75bb4c80120aa924021335240828a6ed9e69a45237/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:58:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b9bdcebd02b0c7b47325d75bb4c80120aa924021335240828a6ed9e69a45237/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:58:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b9bdcebd02b0c7b47325d75bb4c80120aa924021335240828a6ed9e69a45237/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:58:00 np0005603435 podman[268616]: 2026-01-31 04:58:00.809280332 +0000 UTC m=+0.154816310 container init 505d8c755c76362e154233323abec9bf62b329b0e897378beaaa87ec20214935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hypatia, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:58:00 np0005603435 podman[268616]: 2026-01-31 04:58:00.822402183 +0000 UTC m=+0.167938111 container start 505d8c755c76362e154233323abec9bf62b329b0e897378beaaa87ec20214935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 30 23:58:00 np0005603435 podman[268616]: 2026-01-31 04:58:00.826460932 +0000 UTC m=+0.171996870 container attach 505d8c755c76362e154233323abec9bf62b329b0e897378beaaa87ec20214935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hypatia, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:58:01 np0005603435 lvm[268711]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:58:01 np0005603435 lvm[268710]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:58:01 np0005603435 lvm[268711]: VG ceph_vg1 finished
Jan 30 23:58:01 np0005603435 lvm[268710]: VG ceph_vg0 finished
Jan 30 23:58:01 np0005603435 lvm[268713]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:58:01 np0005603435 lvm[268713]: VG ceph_vg2 finished
Jan 30 23:58:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 169 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 KiB/s wr, 109 op/s
Jan 30 23:58:01 np0005603435 stoic_hypatia[268632]: {}
Jan 30 23:58:01 np0005603435 systemd[1]: libpod-505d8c755c76362e154233323abec9bf62b329b0e897378beaaa87ec20214935.scope: Deactivated successfully.
Jan 30 23:58:01 np0005603435 podman[268616]: 2026-01-31 04:58:01.61727622 +0000 UTC m=+0.962812148 container died 505d8c755c76362e154233323abec9bf62b329b0e897378beaaa87ec20214935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hypatia, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:58:01 np0005603435 systemd[1]: libpod-505d8c755c76362e154233323abec9bf62b329b0e897378beaaa87ec20214935.scope: Consumed 1.135s CPU time.
Jan 30 23:58:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Jan 30 23:58:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Jan 30 23:58:01 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Jan 30 23:58:01 np0005603435 systemd[1]: var-lib-containers-storage-overlay-8b9bdcebd02b0c7b47325d75bb4c80120aa924021335240828a6ed9e69a45237-merged.mount: Deactivated successfully.
Jan 30 23:58:01 np0005603435 podman[268616]: 2026-01-31 04:58:01.683600409 +0000 UTC m=+1.029136317 container remove 505d8c755c76362e154233323abec9bf62b329b0e897378beaaa87ec20214935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hypatia, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:58:01 np0005603435 systemd[1]: libpod-conmon-505d8c755c76362e154233323abec9bf62b329b0e897378beaaa87ec20214935.scope: Deactivated successfully.
Jan 30 23:58:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:58:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:58:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:58:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:58:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:58:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2579637553' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:58:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:58:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2579637553' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:58:02 np0005603435 nova_compute[239938]: 2026-01-31 04:58:02.563 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:02 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:58:02 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:58:02 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:02Z|00058|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.13
Jan 30 23:58:02 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:02Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:44:64:15 10.100.0.13
Jan 30 23:58:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Jan 30 23:58:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Jan 30 23:58:02 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Jan 30 23:58:03 np0005603435 podman[268754]: 2026-01-31 04:58:03.106855527 +0000 UTC m=+0.065527201 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 30 23:58:03 np0005603435 podman[268755]: 2026-01-31 04:58:03.122921409 +0000 UTC m=+0.082451284 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 30 23:58:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:58:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 175 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 207 KiB/s wr, 141 op/s
Jan 30 23:58:04 np0005603435 nova_compute[239938]: 2026-01-31 04:58:04.415 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Jan 30 23:58:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Jan 30 23:58:04 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Jan 30 23:58:05 np0005603435 nova_compute[239938]: 2026-01-31 04:58:05.288 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 183 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 201 op/s
Jan 30 23:58:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:58:06
Jan 30 23:58:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:58:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:58:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['vms', '.rgw.root', '.mgr', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 30 23:58:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:58:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Jan 30 23:58:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Jan 30 23:58:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Jan 30 23:58:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:58:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:58:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:58:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:58:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:58:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:58:07 np0005603435 nova_compute[239938]: 2026-01-31 04:58:07.203 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 187 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.1 MiB/s wr, 204 op/s
Jan 30 23:58:07 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:07Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:44:64:15 10.100.0.13
Jan 30 23:58:07 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:07Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:44:64:15 10.100.0.13
Jan 30 23:58:07 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:07Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:44:64:15 10.100.0.13
Jan 30 23:58:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Jan 30 23:58:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Jan 30 23:58:07 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Jan 30 23:58:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:58:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:58:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:58:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:58:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:58:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:58:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Jan 30 23:58:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Jan 30 23:58:08 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Jan 30 23:58:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:58:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:58:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:58:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:58:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:58:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:58:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3159425239' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:58:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:58:08 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3159425239' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:58:09 np0005603435 nova_compute[239938]: 2026-01-31 04:58:09.417 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 187 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.2 MiB/s wr, 196 op/s
Jan 30 23:58:10 np0005603435 nova_compute[239938]: 2026-01-31 04:58:10.291 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 187 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 431 KiB/s rd, 163 KiB/s wr, 107 op/s
Jan 30 23:58:12 np0005603435 nova_compute[239938]: 2026-01-31 04:58:12.013 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:58:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 187 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 169 KiB/s rd, 145 KiB/s wr, 72 op/s
Jan 30 23:58:14 np0005603435 nova_compute[239938]: 2026-01-31 04:58:14.419 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:15 np0005603435 nova_compute[239938]: 2026-01-31 04:58:15.294 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 187 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 5.1 KiB/s wr, 58 op/s
Jan 30 23:58:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:58:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3490495889' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.1868888903170303e-05 of space, bias 1.0, pg target 0.003560666670951091 quantized to 32 (current 32)
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00122775506436099 of space, bias 1.0, pg target 0.36832651930829696 quantized to 32 (current 32)
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.010387415934674e-06 of space, bias 1.0, pg target 0.0003031162247804022 quantized to 32 (current 32)
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006673170221250116 of space, bias 1.0, pg target 0.20019510663750348 quantized to 32 (current 32)
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.234655111106321e-07 of space, bias 4.0, pg target 0.0009881586133327585 quantized to 16 (current 16)
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 30 23:58:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 187 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 16 KiB/s wr, 51 op/s
Jan 30 23:58:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Jan 30 23:58:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Jan 30 23:58:17 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Jan 30 23:58:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:58:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Jan 30 23:58:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Jan 30 23:58:18 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Jan 30 23:58:19 np0005603435 nova_compute[239938]: 2026-01-31 04:58:19.420 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 187 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 19 KiB/s wr, 58 op/s
Jan 30 23:58:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Jan 30 23:58:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Jan 30 23:58:19 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Jan 30 23:58:20 np0005603435 nova_compute[239938]: 2026-01-31 04:58:20.297 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:20 np0005603435 nova_compute[239938]: 2026-01-31 04:58:20.434 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:20 np0005603435 nova_compute[239938]: 2026-01-31 04:58:20.435 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:20 np0005603435 nova_compute[239938]: 2026-01-31 04:58:20.456 239942 DEBUG nova.compute.manager [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 30 23:58:20 np0005603435 nova_compute[239938]: 2026-01-31 04:58:20.703 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:20 np0005603435 nova_compute[239938]: 2026-01-31 04:58:20.704 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:20 np0005603435 nova_compute[239938]: 2026-01-31 04:58:20.714 239942 DEBUG nova.virt.hardware [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 30 23:58:20 np0005603435 nova_compute[239938]: 2026-01-31 04:58:20.714 239942 INFO nova.compute.claims [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 30 23:58:20 np0005603435 nova_compute[239938]: 2026-01-31 04:58:20.880 239942 DEBUG oslo_concurrency.processutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:58:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Jan 30 23:58:20 np0005603435 nova_compute[239938]: 2026-01-31 04:58:20.899 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:58:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Jan 30 23:58:20 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Jan 30 23:58:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:58:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269232844' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.411 239942 DEBUG oslo_concurrency.processutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.416 239942 DEBUG nova.compute.provider_tree [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:58:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:58:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/59741549' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:58:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:58:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/59741549' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.462 239942 DEBUG nova.scheduler.client.report [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.490 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.490 239942 DEBUG nova.compute.manager [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 30 23:58:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 187 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.0 KiB/s wr, 55 op/s
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.545 239942 DEBUG nova.compute.manager [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.545 239942 DEBUG nova.network.neutron [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.574 239942 INFO nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.594 239942 DEBUG nova.compute.manager [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.711 239942 DEBUG nova.compute.manager [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.714 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.715 239942 INFO nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Creating image(s)#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.750 239942 DEBUG nova.storage.rbd_utils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] rbd image e718387a-7f1c-476e-a53d-69bf63413c12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.783 239942 DEBUG nova.storage.rbd_utils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] rbd image e718387a-7f1c-476e-a53d-69bf63413c12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.806 239942 DEBUG nova.storage.rbd_utils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] rbd image e718387a-7f1c-476e-a53d-69bf63413c12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.809 239942 DEBUG oslo_concurrency.processutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.826 239942 DEBUG nova.policy [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd1424589a4cc422c930f4c65f8538d1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f5ae37c02aa74bf084cd851f4b233192', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.870 239942 DEBUG oslo_concurrency.processutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.871 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.871 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.872 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.894 239942 DEBUG nova.storage.rbd_utils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] rbd image e718387a-7f1c-476e-a53d-69bf63413c12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.897 239942 DEBUG oslo_concurrency.processutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 e718387a-7f1c-476e-a53d-69bf63413c12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:58:21 np0005603435 nova_compute[239938]: 2026-01-31 04:58:21.913 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:58:22 np0005603435 nova_compute[239938]: 2026-01-31 04:58:22.123 239942 DEBUG oslo_concurrency.processutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 e718387a-7f1c-476e-a53d-69bf63413c12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:58:22 np0005603435 nova_compute[239938]: 2026-01-31 04:58:22.191 239942 DEBUG nova.storage.rbd_utils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] resizing rbd image e718387a-7f1c-476e-a53d-69bf63413c12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 30 23:58:22 np0005603435 nova_compute[239938]: 2026-01-31 04:58:22.270 239942 DEBUG nova.objects.instance [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lazy-loading 'migration_context' on Instance uuid e718387a-7f1c-476e-a53d-69bf63413c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:58:22 np0005603435 nova_compute[239938]: 2026-01-31 04:58:22.286 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 30 23:58:22 np0005603435 nova_compute[239938]: 2026-01-31 04:58:22.286 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Ensure instance console log exists: /var/lib/nova/instances/e718387a-7f1c-476e-a53d-69bf63413c12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 30 23:58:22 np0005603435 nova_compute[239938]: 2026-01-31 04:58:22.287 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:22 np0005603435 nova_compute[239938]: 2026-01-31 04:58:22.287 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:22 np0005603435 nova_compute[239938]: 2026-01-31 04:58:22.288 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:22 np0005603435 nova_compute[239938]: 2026-01-31 04:58:22.732 239942 DEBUG nova.network.neutron [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Successfully created port: 39e41855-7c54-477d-957b-aa769bd16f60 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 30 23:58:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:58:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 200 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 160 KiB/s rd, 950 KiB/s wr, 114 op/s
Jan 30 23:58:23 np0005603435 nova_compute[239938]: 2026-01-31 04:58:23.883 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:58:24 np0005603435 nova_compute[239938]: 2026-01-31 04:58:24.109 239942 DEBUG nova.network.neutron [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Successfully updated port: 39e41855-7c54-477d-957b-aa769bd16f60 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 30 23:58:24 np0005603435 nova_compute[239938]: 2026-01-31 04:58:24.139 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "refresh_cache-e718387a-7f1c-476e-a53d-69bf63413c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:58:24 np0005603435 nova_compute[239938]: 2026-01-31 04:58:24.139 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquired lock "refresh_cache-e718387a-7f1c-476e-a53d-69bf63413c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:58:24 np0005603435 nova_compute[239938]: 2026-01-31 04:58:24.140 239942 DEBUG nova.network.neutron [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 30 23:58:24 np0005603435 nova_compute[239938]: 2026-01-31 04:58:24.261 239942 DEBUG nova.compute.manager [req-8b47d082-bf6b-49ab-9e89-dc18d42b9266 req-9ec53c60-c07a-453c-969c-f1c7a6ddd4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Received event network-changed-39e41855-7c54-477d-957b-aa769bd16f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:58:24 np0005603435 nova_compute[239938]: 2026-01-31 04:58:24.261 239942 DEBUG nova.compute.manager [req-8b47d082-bf6b-49ab-9e89-dc18d42b9266 req-9ec53c60-c07a-453c-969c-f1c7a6ddd4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Refreshing instance network info cache due to event network-changed-39e41855-7c54-477d-957b-aa769bd16f60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:58:24 np0005603435 nova_compute[239938]: 2026-01-31 04:58:24.262 239942 DEBUG oslo_concurrency.lockutils [req-8b47d082-bf6b-49ab-9e89-dc18d42b9266 req-9ec53c60-c07a-453c-969c-f1c7a6ddd4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-e718387a-7f1c-476e-a53d-69bf63413c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:58:24 np0005603435 nova_compute[239938]: 2026-01-31 04:58:24.301 239942 DEBUG nova.network.neutron [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:58:24 np0005603435 nova_compute[239938]: 2026-01-31 04:58:24.422 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:24 np0005603435 nova_compute[239938]: 2026-01-31 04:58:24.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.052 239942 DEBUG nova.network.neutron [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Updating instance_info_cache with network_info: [{"id": "39e41855-7c54-477d-957b-aa769bd16f60", "address": "fa:16:3e:7d:c2:ab", "network": {"id": "dfbf01be-0e13-4ab0-b168-f61a3eca460e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-390049787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ae37c02aa74bf084cd851f4b233192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39e41855-7c", "ovs_interfaceid": "39e41855-7c54-477d-957b-aa769bd16f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.073 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Releasing lock "refresh_cache-e718387a-7f1c-476e-a53d-69bf63413c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.074 239942 DEBUG nova.compute.manager [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Instance network_info: |[{"id": "39e41855-7c54-477d-957b-aa769bd16f60", "address": "fa:16:3e:7d:c2:ab", "network": {"id": "dfbf01be-0e13-4ab0-b168-f61a3eca460e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-390049787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ae37c02aa74bf084cd851f4b233192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39e41855-7c", "ovs_interfaceid": "39e41855-7c54-477d-957b-aa769bd16f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.074 239942 DEBUG oslo_concurrency.lockutils [req-8b47d082-bf6b-49ab-9e89-dc18d42b9266 req-9ec53c60-c07a-453c-969c-f1c7a6ddd4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-e718387a-7f1c-476e-a53d-69bf63413c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.075 239942 DEBUG nova.network.neutron [req-8b47d082-bf6b-49ab-9e89-dc18d42b9266 req-9ec53c60-c07a-453c-969c-f1c7a6ddd4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Refreshing network info cache for port 39e41855-7c54-477d-957b-aa769bd16f60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.079 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Start _get_guest_xml network_info=[{"id": "39e41855-7c54-477d-957b-aa769bd16f60", "address": "fa:16:3e:7d:c2:ab", "network": {"id": "dfbf01be-0e13-4ab0-b168-f61a3eca460e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-390049787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ae37c02aa74bf084cd851f4b233192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39e41855-7c", "ovs_interfaceid": "39e41855-7c54-477d-957b-aa769bd16f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'guest_format': None, 'image_id': 'bf004ad8-fb70-4caa-9170-9f02e22d687d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.085 239942 WARNING nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.090 239942 DEBUG nova.virt.libvirt.host [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.091 239942 DEBUG nova.virt.libvirt.host [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.100 239942 DEBUG nova.virt.libvirt.host [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.101 239942 DEBUG nova.virt.libvirt.host [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.101 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.102 239942 DEBUG nova.virt.hardware [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.103 239942 DEBUG nova.virt.hardware [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.103 239942 DEBUG nova.virt.hardware [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.103 239942 DEBUG nova.virt.hardware [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.104 239942 DEBUG nova.virt.hardware [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.104 239942 DEBUG nova.virt.hardware [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.105 239942 DEBUG nova.virt.hardware [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.105 239942 DEBUG nova.virt.hardware [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.106 239942 DEBUG nova.virt.hardware [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.106 239942 DEBUG nova.virt.hardware [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.107 239942 DEBUG nova.virt.hardware [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.111 239942 DEBUG oslo_concurrency.processutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.298 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 217 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 1.2 MiB/s wr, 112 op/s
Jan 30 23:58:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:58:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3945147244' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.630 239942 DEBUG oslo_concurrency.processutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.655 239942 DEBUG nova.storage.rbd_utils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] rbd image e718387a-7f1c-476e-a53d-69bf63413c12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.659 239942 DEBUG oslo_concurrency.processutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.889 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.909 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.909 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.910 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.910 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:58:25 np0005603435 nova_compute[239938]: 2026-01-31 04:58:25.911 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:58:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:58:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2100818969' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.222 239942 DEBUG oslo_concurrency.processutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.225 239942 DEBUG nova.virt.libvirt.vif [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:58:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-2059263885',display_name='tempest-SnapshotDataIntegrityTests-server-2059263885',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-2059263885',id=25,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOE2J6xl6hptAfBhj9MwPwWCmhY45b3CdZ5A/KSqFDwnfy73lo20B4Qjtjt+VnhVw51fanwz/3MNA+u3YW8BvStB65Bdfgg8zT2n0/Q1yWanzHWJwhqoA4bflv4fCMn1fQ==',key_name='tempest-keypair-154687870',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5ae37c02aa74bf084cd851f4b233192',ramdisk_id='',reservation_id='r-j4m9vxl2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-800856993',owner_user_name='tempest-SnapshotDataIntegrityTests-800856993-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:58:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d1424589a4cc422c930f4c65f8538d1a',uuid=e718387a-7f1c-476e-a53d-69bf63413c12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39e41855-7c54-477d-957b-aa769bd16f60", "address": "fa:16:3e:7d:c2:ab", "network": {"id": "dfbf01be-0e13-4ab0-b168-f61a3eca460e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-390049787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ae37c02aa74bf084cd851f4b233192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39e41855-7c", "ovs_interfaceid": "39e41855-7c54-477d-957b-aa769bd16f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.226 239942 DEBUG nova.network.os_vif_util [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Converting VIF {"id": "39e41855-7c54-477d-957b-aa769bd16f60", "address": "fa:16:3e:7d:c2:ab", "network": {"id": "dfbf01be-0e13-4ab0-b168-f61a3eca460e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-390049787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ae37c02aa74bf084cd851f4b233192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39e41855-7c", "ovs_interfaceid": "39e41855-7c54-477d-957b-aa769bd16f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.228 239942 DEBUG nova.network.os_vif_util [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:c2:ab,bridge_name='br-int',has_traffic_filtering=True,id=39e41855-7c54-477d-957b-aa769bd16f60,network=Network(dfbf01be-0e13-4ab0-b168-f61a3eca460e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39e41855-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.230 239942 DEBUG nova.objects.instance [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lazy-loading 'pci_devices' on Instance uuid e718387a-7f1c-476e-a53d-69bf63413c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.249 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] End _get_guest_xml xml=<domain type="kvm">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  <uuid>e718387a-7f1c-476e-a53d-69bf63413c12</uuid>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  <name>instance-00000019</name>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  <metadata>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <nova:name>tempest-SnapshotDataIntegrityTests-server-2059263885</nova:name>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 04:58:25</nova:creationTime>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:        <nova:user uuid="d1424589a4cc422c930f4c65f8538d1a">tempest-SnapshotDataIntegrityTests-800856993-project-member</nova:user>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:        <nova:project uuid="f5ae37c02aa74bf084cd851f4b233192">tempest-SnapshotDataIntegrityTests-800856993</nova:project>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <nova:root type="image" uuid="bf004ad8-fb70-4caa-9170-9f02e22d687d"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:        <nova:port uuid="39e41855-7c54-477d-957b-aa769bd16f60">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:        </nova:port>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  </metadata>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <system>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <entry name="serial">e718387a-7f1c-476e-a53d-69bf63413c12</entry>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <entry name="uuid">e718387a-7f1c-476e-a53d-69bf63413c12</entry>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    </system>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  <os>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  </os>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  <features>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <acpi/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <apic/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  </features>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  </clock>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  </cpu>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  <devices>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/e718387a-7f1c-476e-a53d-69bf63413c12_disk">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/e718387a-7f1c-476e-a53d-69bf63413c12_disk.config">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      </source>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      </auth>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    </disk>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:7d:c2:ab"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <target dev="tap39e41855-7c"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    </interface>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/e718387a-7f1c-476e-a53d-69bf63413c12/console.log" append="off"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    </serial>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <video>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    </video>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    </rng>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 30 23:58:26 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:    </memballoon>
Jan 30 23:58:26 np0005603435 nova_compute[239938]:  </devices>
Jan 30 23:58:26 np0005603435 nova_compute[239938]: </domain>
Jan 30 23:58:26 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.259 239942 DEBUG nova.compute.manager [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Preparing to wait for external event network-vif-plugged-39e41855-7c54-477d-957b-aa769bd16f60 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.259 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.260 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.261 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.263 239942 DEBUG nova.virt.libvirt.vif [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T04:58:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-2059263885',display_name='tempest-SnapshotDataIntegrityTests-server-2059263885',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-2059263885',id=25,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOE2J6xl6hptAfBhj9MwPwWCmhY45b3CdZ5A/KSqFDwnfy73lo20B4Qjtjt+VnhVw51fanwz/3MNA+u3YW8BvStB65Bdfgg8zT2n0/Q1yWanzHWJwhqoA4bflv4fCMn1fQ==',key_name='tempest-keypair-154687870',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5ae37c02aa74bf084cd851f4b233192',ramdisk_id='',reservation_id='r-j4m9vxl2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-800856993',owner_user_name='tempest-SnapshotDataIntegrityTests-800856993-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T04:58:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d1424589a4cc422c930f4c65f8538d1a',uuid=e718387a-7f1c-476e-a53d-69bf63413c12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39e41855-7c54-477d-957b-aa769bd16f60", "address": "fa:16:3e:7d:c2:ab", "network": {"id": "dfbf01be-0e13-4ab0-b168-f61a3eca460e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-390049787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ae37c02aa74bf084cd851f4b233192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39e41855-7c", "ovs_interfaceid": "39e41855-7c54-477d-957b-aa769bd16f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.264 239942 DEBUG nova.network.os_vif_util [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Converting VIF {"id": "39e41855-7c54-477d-957b-aa769bd16f60", "address": "fa:16:3e:7d:c2:ab", "network": {"id": "dfbf01be-0e13-4ab0-b168-f61a3eca460e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-390049787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ae37c02aa74bf084cd851f4b233192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39e41855-7c", "ovs_interfaceid": "39e41855-7c54-477d-957b-aa769bd16f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.265 239942 DEBUG nova.network.os_vif_util [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:c2:ab,bridge_name='br-int',has_traffic_filtering=True,id=39e41855-7c54-477d-957b-aa769bd16f60,network=Network(dfbf01be-0e13-4ab0-b168-f61a3eca460e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39e41855-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.266 239942 DEBUG os_vif [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:c2:ab,bridge_name='br-int',has_traffic_filtering=True,id=39e41855-7c54-477d-957b-aa769bd16f60,network=Network(dfbf01be-0e13-4ab0-b168-f61a3eca460e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39e41855-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.268 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.269 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.270 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.275 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.275 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39e41855-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.276 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap39e41855-7c, col_values=(('external_ids', {'iface-id': '39e41855-7c54-477d-957b-aa769bd16f60', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:c2:ab', 'vm-uuid': 'e718387a-7f1c-476e-a53d-69bf63413c12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:58:26 np0005603435 NetworkManager[49097]: <info>  [1769835506.2801] manager: (tap39e41855-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.279 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.285 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.286 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.287 239942 INFO os_vif [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:c2:ab,bridge_name='br-int',has_traffic_filtering=True,id=39e41855-7c54-477d-957b-aa769bd16f60,network=Network(dfbf01be-0e13-4ab0-b168-f61a3eca460e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39e41855-7c')#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.291 239942 DEBUG nova.network.neutron [req-8b47d082-bf6b-49ab-9e89-dc18d42b9266 req-9ec53c60-c07a-453c-969c-f1c7a6ddd4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Updated VIF entry in instance network info cache for port 39e41855-7c54-477d-957b-aa769bd16f60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.292 239942 DEBUG nova.network.neutron [req-8b47d082-bf6b-49ab-9e89-dc18d42b9266 req-9ec53c60-c07a-453c-969c-f1c7a6ddd4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Updating instance_info_cache with network_info: [{"id": "39e41855-7c54-477d-957b-aa769bd16f60", "address": "fa:16:3e:7d:c2:ab", "network": {"id": "dfbf01be-0e13-4ab0-b168-f61a3eca460e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-390049787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ae37c02aa74bf084cd851f4b233192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39e41855-7c", "ovs_interfaceid": "39e41855-7c54-477d-957b-aa769bd16f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.311 239942 DEBUG oslo_concurrency.lockutils [req-8b47d082-bf6b-49ab-9e89-dc18d42b9266 req-9ec53c60-c07a-453c-969c-f1c7a6ddd4da c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-e718387a-7f1c-476e-a53d-69bf63413c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.344 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.345 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.345 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No VIF found with MAC fa:16:3e:7d:c2:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.346 239942 INFO nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Using config drive#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.380 239942 DEBUG nova.storage.rbd_utils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] rbd image e718387a-7f1c-476e-a53d-69bf63413c12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:58:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:58:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814620752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.478 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.582 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.583 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.590 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.590 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.596 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.596 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.796 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.797 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3957MB free_disk=59.979434478096664GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.798 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.798 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.887 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance 5c1cf313-39cd-420b-98f1-026da341b273 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.887 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance a3d46698-1b04-4df5-a957-0ba432667ada actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.887 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance e718387a-7f1c-476e-a53d-69bf63413c12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.887 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.888 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:58:26 np0005603435 nova_compute[239938]: 2026-01-31 04:58:26.968 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:58:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 237 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 2.7 MiB/s wr, 123 op/s
Jan 30 23:58:27 np0005603435 nova_compute[239938]: 2026-01-31 04:58:27.505 239942 INFO nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Creating config drive at /var/lib/nova/instances/e718387a-7f1c-476e-a53d-69bf63413c12/disk.config#033[00m
Jan 30 23:58:27 np0005603435 nova_compute[239938]: 2026-01-31 04:58:27.509 239942 DEBUG oslo_concurrency.processutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e718387a-7f1c-476e-a53d-69bf63413c12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpatxkbe2e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:58:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:58:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1294592479' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:58:27 np0005603435 nova_compute[239938]: 2026-01-31 04:58:27.543 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:58:27 np0005603435 nova_compute[239938]: 2026-01-31 04:58:27.549 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:58:27 np0005603435 nova_compute[239938]: 2026-01-31 04:58:27.567 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:58:27 np0005603435 nova_compute[239938]: 2026-01-31 04:58:27.592 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:58:27 np0005603435 nova_compute[239938]: 2026-01-31 04:58:27.592 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:27 np0005603435 nova_compute[239938]: 2026-01-31 04:58:27.635 239942 DEBUG oslo_concurrency.processutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e718387a-7f1c-476e-a53d-69bf63413c12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpatxkbe2e" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:58:27 np0005603435 nova_compute[239938]: 2026-01-31 04:58:27.655 239942 DEBUG nova.storage.rbd_utils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] rbd image e718387a-7f1c-476e-a53d-69bf63413c12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 30 23:58:27 np0005603435 nova_compute[239938]: 2026-01-31 04:58:27.659 239942 DEBUG oslo_concurrency.processutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e718387a-7f1c-476e-a53d-69bf63413c12/disk.config e718387a-7f1c-476e-a53d-69bf63413c12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:58:27 np0005603435 nova_compute[239938]: 2026-01-31 04:58:27.785 239942 DEBUG oslo_concurrency.processutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e718387a-7f1c-476e-a53d-69bf63413c12/disk.config e718387a-7f1c-476e-a53d-69bf63413c12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:58:27 np0005603435 nova_compute[239938]: 2026-01-31 04:58:27.786 239942 INFO nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Deleting local config drive /var/lib/nova/instances/e718387a-7f1c-476e-a53d-69bf63413c12/disk.config because it was imported into RBD.#033[00m
Jan 30 23:58:27 np0005603435 kernel: tap39e41855-7c: entered promiscuous mode
Jan 30 23:58:27 np0005603435 NetworkManager[49097]: <info>  [1769835507.8348] manager: (tap39e41855-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Jan 30 23:58:27 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:27Z|00234|binding|INFO|Claiming lport 39e41855-7c54-477d-957b-aa769bd16f60 for this chassis.
Jan 30 23:58:27 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:27Z|00235|binding|INFO|39e41855-7c54-477d-957b-aa769bd16f60: Claiming fa:16:3e:7d:c2:ab 10.100.0.13
Jan 30 23:58:27 np0005603435 nova_compute[239938]: 2026-01-31 04:58:27.836 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:27 np0005603435 nova_compute[239938]: 2026-01-31 04:58:27.840 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:27.846 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:c2:ab 10.100.0.13'], port_security=['fa:16:3e:7d:c2:ab 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e718387a-7f1c-476e-a53d-69bf63413c12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfbf01be-0e13-4ab0-b168-f61a3eca460e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5ae37c02aa74bf084cd851f4b233192', 'neutron:revision_number': '2', 'neutron:security_group_ids': '14d701b1-eb59-4eaa-8423-1a8f9ada9f00 7504c8ae-803d-4af9-8341-c0a2007c947a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9447addc-5d26-4056-a129-d4a7951ac825, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=39e41855-7c54-477d-957b-aa769bd16f60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:58:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:27.848 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 39e41855-7c54-477d-957b-aa769bd16f60 in datapath dfbf01be-0e13-4ab0-b168-f61a3eca460e bound to our chassis#033[00m
Jan 30 23:58:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:27.850 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dfbf01be-0e13-4ab0-b168-f61a3eca460e#033[00m
Jan 30 23:58:27 np0005603435 nova_compute[239938]: 2026-01-31 04:58:27.850 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:27 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:27Z|00236|binding|INFO|Setting lport 39e41855-7c54-477d-957b-aa769bd16f60 ovn-installed in OVS
Jan 30 23:58:27 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:27Z|00237|binding|INFO|Setting lport 39e41855-7c54-477d-957b-aa769bd16f60 up in Southbound
Jan 30 23:58:27 np0005603435 nova_compute[239938]: 2026-01-31 04:58:27.855 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:27.859 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ffe26b-6fad-47a6-9d6f-3f731246cd39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:27.860 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdfbf01be-01 in ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 30 23:58:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:27.863 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdfbf01be-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 30 23:58:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:27.864 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c91446c6-2eec-40ce-b501-05402e66e0c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:27.865 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[d137961e-9e59-472c-9c41-0ee68057bff5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:27 np0005603435 systemd-machined[208030]: New machine qemu-25-instance-00000019.
Jan 30 23:58:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:27.874 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[e142cc89-e638-45d1-be47-1ed5be45da3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:27.882 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b2826096-0069-4741-9615-ab5f795e93ef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:27 np0005603435 systemd[1]: Started Virtual Machine qemu-25-instance-00000019.
Jan 30 23:58:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:27.901 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1ed11a-e25d-4569-a26e-960fdc1c3fca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:27 np0005603435 systemd-udevd[269169]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:58:27 np0005603435 systemd-udevd[269171]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 23:58:27 np0005603435 NetworkManager[49097]: <info>  [1769835507.9106] manager: (tapdfbf01be-00): new Veth device (/org/freedesktop/NetworkManager/Devices/123)
Jan 30 23:58:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:27.911 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ff7554-f77c-4659-af3a-9e0acdc06115]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:27 np0005603435 NetworkManager[49097]: <info>  [1769835507.9154] device (tap39e41855-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 30 23:58:27 np0005603435 NetworkManager[49097]: <info>  [1769835507.9160] device (tap39e41855-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 30 23:58:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:27.937 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[0866bdfb-6a5b-4e47-9625-940279d0def5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:27.940 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[734f6ec9-4651-478b-8d3d-eb84600d2b13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:27 np0005603435 NetworkManager[49097]: <info>  [1769835507.9638] device (tapdfbf01be-00): carrier: link connected
Jan 30 23:58:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:27.972 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae9bd46-e642-4625-83d3-4e7ea283d375]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:27 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:27.991 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[7b8e8f21-46f8-4e0e-b74e-a4c378fb8e25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfbf01be-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:fe:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456236, 'reachable_time': 26511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269197, 'error': None, 'target': 'ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:28.007 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[721f058c-e57c-401f-9aa2-5528e84e6bc9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe90:fe2a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456236, 'tstamp': 456236}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269198, 'error': None, 'target': 'ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:28.022 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4be647-49dc-4a07-b88c-69cdd0782cca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfbf01be-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:fe:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456236, 'reachable_time': 26511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269199, 'error': None, 'target': 'ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:28.048 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9924f7e7-32fe-48c1-84f9-79c9eb68f38d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:28.096 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[85ed0e1d-1ba7-4c25-9074-3a6a8d2ce335]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:28.097 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfbf01be-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:28.098 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:28.098 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfbf01be-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.100 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:28 np0005603435 NetworkManager[49097]: <info>  [1769835508.1009] manager: (tapdfbf01be-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Jan 30 23:58:28 np0005603435 kernel: tapdfbf01be-00: entered promiscuous mode
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:28.103 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdfbf01be-00, col_values=(('external_ids', {'iface-id': '66be847d-0d9f-4fd7-af2c-41561dc2a66f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:58:28 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:28Z|00238|binding|INFO|Releasing lport 66be847d-0d9f-4fd7-af2c-41561dc2a66f from this chassis (sb_readonly=0)
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.114 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:28.115 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dfbf01be-0e13-4ab0-b168-f61a3eca460e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dfbf01be-0e13-4ab0-b168-f61a3eca460e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:28.116 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[123cb0eb-06b1-4e8e-84f7-44a4bd5ec01a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:28.117 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: global
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-dfbf01be-0e13-4ab0-b168-f61a3eca460e
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/dfbf01be-0e13-4ab0-b168-f61a3eca460e.pid.haproxy
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: 
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID dfbf01be-0e13-4ab0-b168-f61a3eca460e
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 30 23:58:28 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:28.118 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e', 'env', 'PROCESS_TAG=haproxy-dfbf01be-0e13-4ab0-b168-f61a3eca460e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dfbf01be-0e13-4ab0-b168-f61a3eca460e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 30 23:58:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:58:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Jan 30 23:58:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Jan 30 23:58:28 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Jan 30 23:58:28 np0005603435 podman[269231]: 2026-01-31 04:58:28.4907555 +0000 UTC m=+0.058909759 container create ddecc7a50c17987385a00aed56387eb577b9b6d0be8acdcd2e36083d29fd19d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 30 23:58:28 np0005603435 systemd[1]: Started libpod-conmon-ddecc7a50c17987385a00aed56387eb577b9b6d0be8acdcd2e36083d29fd19d4.scope.
Jan 30 23:58:28 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:58:28 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d91e75a76da878aeba9bea785ac466d82798201c16c57075dbdbfb8150793f19/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 30 23:58:28 np0005603435 podman[269231]: 2026-01-31 04:58:28.462429228 +0000 UTC m=+0.030583528 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 30 23:58:28 np0005603435 podman[269231]: 2026-01-31 04:58:28.566898719 +0000 UTC m=+0.135053068 container init ddecc7a50c17987385a00aed56387eb577b9b6d0be8acdcd2e36083d29fd19d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 30 23:58:28 np0005603435 podman[269231]: 2026-01-31 04:58:28.574281949 +0000 UTC m=+0.142436238 container start ddecc7a50c17987385a00aed56387eb577b9b6d0be8acdcd2e36083d29fd19d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.591 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.592 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:58:28 np0005603435 neutron-haproxy-ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e[269286]: [NOTICE]   (269291) : New worker (269294) forked
Jan 30 23:58:28 np0005603435 neutron-haproxy-ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e[269286]: [NOTICE]   (269291) : Loading success.
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.618 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835508.6181586, e718387a-7f1c-476e-a53d-69bf63413c12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.619 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] VM Started (Lifecycle Event)#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.639 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.643 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835508.6196754, e718387a-7f1c-476e-a53d-69bf63413c12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.643 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] VM Paused (Lifecycle Event)#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.661 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.664 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.683 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.717 239942 DEBUG nova.compute.manager [req-2215ce30-5aae-4c0a-bdc9-0d4599c119db req-767fb115-5008-4997-8278-d47550271184 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Received event network-vif-plugged-39e41855-7c54-477d-957b-aa769bd16f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.717 239942 DEBUG oslo_concurrency.lockutils [req-2215ce30-5aae-4c0a-bdc9-0d4599c119db req-767fb115-5008-4997-8278-d47550271184 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.718 239942 DEBUG oslo_concurrency.lockutils [req-2215ce30-5aae-4c0a-bdc9-0d4599c119db req-767fb115-5008-4997-8278-d47550271184 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.718 239942 DEBUG oslo_concurrency.lockutils [req-2215ce30-5aae-4c0a-bdc9-0d4599c119db req-767fb115-5008-4997-8278-d47550271184 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.718 239942 DEBUG nova.compute.manager [req-2215ce30-5aae-4c0a-bdc9-0d4599c119db req-767fb115-5008-4997-8278-d47550271184 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Processing event network-vif-plugged-39e41855-7c54-477d-957b-aa769bd16f60 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.719 239942 DEBUG nova.compute.manager [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.722 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835508.7223537, e718387a-7f1c-476e-a53d-69bf63413c12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.722 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] VM Resumed (Lifecycle Event)#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.724 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.726 239942 INFO nova.virt.libvirt.driver [-] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Instance spawned successfully.#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.727 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.746 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.749 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.759 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.759 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.759 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.760 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.760 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.761 239942 DEBUG nova.virt.libvirt.driver [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.798 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.818 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "refresh_cache-5c1cf313-39cd-420b-98f1-026da341b273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.818 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquired lock "refresh_cache-5c1cf313-39cd-420b-98f1-026da341b273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.819 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.833 239942 INFO nova.compute.manager [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Took 7.12 seconds to spawn the instance on the hypervisor.#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.833 239942 DEBUG nova.compute.manager [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.914 239942 INFO nova.compute.manager [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Took 8.39 seconds to build instance.#033[00m
Jan 30 23:58:28 np0005603435 nova_compute[239938]: 2026-01-31 04:58:28.934 239942 DEBUG oslo_concurrency.lockutils [None req-f0924209-eee4-45e1-adb1-a5aa1a848a25 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.308 239942 DEBUG oslo_concurrency.lockutils [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "a3d46698-1b04-4df5-a957-0ba432667ada" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.309 239942 DEBUG oslo_concurrency.lockutils [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "a3d46698-1b04-4df5-a957-0ba432667ada" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.309 239942 DEBUG oslo_concurrency.lockutils [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.309 239942 DEBUG oslo_concurrency.lockutils [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.310 239942 DEBUG oslo_concurrency.lockutils [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.312 239942 INFO nova.compute.manager [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Terminating instance#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.313 239942 DEBUG nova.compute.manager [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:58:29 np0005603435 kernel: tapd8f56e56-02 (unregistering): left promiscuous mode
Jan 30 23:58:29 np0005603435 NetworkManager[49097]: <info>  [1769835509.3585] device (tapd8f56e56-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.374 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:29 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:29Z|00239|binding|INFO|Releasing lport d8f56e56-02d6-43e2-afae-1f5610a67fb9 from this chassis (sb_readonly=0)
Jan 30 23:58:29 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:29Z|00240|binding|INFO|Setting lport d8f56e56-02d6-43e2-afae-1f5610a67fb9 down in Southbound
Jan 30 23:58:29 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:29Z|00241|binding|INFO|Removing iface tapd8f56e56-02 ovn-installed in OVS
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.380 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:29.383 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:64:15 10.100.0.13'], port_security=['fa:16:3e:44:64:15 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a3d46698-1b04-4df5-a957-0ba432667ada', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5925722f-3c3e-42bd-9802-ef7105d62a1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02525b3d-5afa-441f-ab06-d5abe31dc4af, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=d8f56e56-02d6-43e2-afae-1f5610a67fb9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:58:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:29.386 156017 INFO neutron.agent.ovn.metadata.agent [-] Port d8f56e56-02d6-43e2-afae-1f5610a67fb9 in datapath 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 unbound from our chassis#033[00m
Jan 30 23:58:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:29.389 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.393 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:29.405 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[84b5c7e4-8c53-4184-89df-f0abb1360ca9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:29 np0005603435 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Deactivated successfully.
Jan 30 23:58:29 np0005603435 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Consumed 12.962s CPU time.
Jan 30 23:58:29 np0005603435 systemd-machined[208030]: Machine qemu-24-instance-00000018 terminated.
Jan 30 23:58:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:29.440 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[4b1598bf-d26d-436d-ac4e-ae7602f0953d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:29.446 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[68c1c1d3-4430-4417-9bab-e0e7402825ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:29.483 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[45968acd-66fa-4817-bf1e-c07c511a2315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 237 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 2.6 MiB/s wr, 90 op/s
Jan 30 23:58:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:29.505 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb22ede-95bc-4db1-8bba-485e50269002]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b0cf2db-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:f7:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447445, 'reachable_time': 41879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269312, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:29.528 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f2dd36-8f98-4db7-ba24-e1ae06e976b6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5b0cf2db-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447458, 'tstamp': 447458}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269313, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5b0cf2db-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447460, 'tstamp': 447460}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269313, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:29.531 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b0cf2db-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.534 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:29.540 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b0cf2db-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:58:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:29.540 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:58:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:29.541 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b0cf2db-20, col_values=(('external_ids', {'iface-id': '07e657c3-16d2-4095-9f39-32a275cb472e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.541 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:29.541 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.549 239942 INFO nova.virt.libvirt.driver [-] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Instance destroyed successfully.#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.550 239942 DEBUG nova.objects.instance [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lazy-loading 'resources' on Instance uuid a3d46698-1b04-4df5-a957-0ba432667ada obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.571 239942 DEBUG nova.virt.libvirt.vif [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:57:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-92792047',display_name='tempest-TestVolumeBootPattern-server-92792047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-92792047',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMCGQWsMIpUReejiJa4LLn2uTMRcPNVUKy3r7lp0BAh1r0nLhjEfcHskPuueezEtVAWbrIlq/WV3PYQ0vKGreYOPxpY3Xnz3OjrpOhX/Q6AIWXZTJpS2jBEA3mt0kVgrg==',key_name='tempest-TestVolumeBootPattern-1354425942',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:57:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-i88d4whl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:57:50Z,user_data=None,user_id='e10f13b98624406985dec6a5dcc391c7',uuid=a3d46698-1b04-4df5-a957-0ba432667ada,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "address": "fa:16:3e:44:64:15", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f56e56-02", "ovs_interfaceid": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.571 239942 DEBUG nova.network.os_vif_util [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "address": "fa:16:3e:44:64:15", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f56e56-02", "ovs_interfaceid": "d8f56e56-02d6-43e2-afae-1f5610a67fb9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.572 239942 DEBUG nova.network.os_vif_util [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:44:64:15,bridge_name='br-int',has_traffic_filtering=True,id=d8f56e56-02d6-43e2-afae-1f5610a67fb9,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f56e56-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.573 239942 DEBUG os_vif [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:64:15,bridge_name='br-int',has_traffic_filtering=True,id=d8f56e56-02d6-43e2-afae-1f5610a67fb9,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f56e56-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.576 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.576 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8f56e56-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.634 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.636 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.639 239942 INFO os_vif [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:64:15,bridge_name='br-int',has_traffic_filtering=True,id=d8f56e56-02d6-43e2-afae-1f5610a67fb9,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f56e56-02')#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.674 239942 DEBUG nova.compute.manager [req-07262870-4928-4b93-80c4-04b015a74cf7 req-da59301e-c6a1-4a35-a3b8-b260e0e05ae9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Received event network-vif-unplugged-d8f56e56-02d6-43e2-afae-1f5610a67fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.674 239942 DEBUG oslo_concurrency.lockutils [req-07262870-4928-4b93-80c4-04b015a74cf7 req-da59301e-c6a1-4a35-a3b8-b260e0e05ae9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.675 239942 DEBUG oslo_concurrency.lockutils [req-07262870-4928-4b93-80c4-04b015a74cf7 req-da59301e-c6a1-4a35-a3b8-b260e0e05ae9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.676 239942 DEBUG oslo_concurrency.lockutils [req-07262870-4928-4b93-80c4-04b015a74cf7 req-da59301e-c6a1-4a35-a3b8-b260e0e05ae9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.677 239942 DEBUG nova.compute.manager [req-07262870-4928-4b93-80c4-04b015a74cf7 req-da59301e-c6a1-4a35-a3b8-b260e0e05ae9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] No waiting events found dispatching network-vif-unplugged-d8f56e56-02d6-43e2-afae-1f5610a67fb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.678 239942 DEBUG nova.compute.manager [req-07262870-4928-4b93-80c4-04b015a74cf7 req-da59301e-c6a1-4a35-a3b8-b260e0e05ae9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Received event network-vif-unplugged-d8f56e56-02d6-43e2-afae-1f5610a67fb9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.819 239942 INFO nova.virt.libvirt.driver [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Deleting instance files /var/lib/nova/instances/a3d46698-1b04-4df5-a957-0ba432667ada_del#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.820 239942 INFO nova.virt.libvirt.driver [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Deletion of /var/lib/nova/instances/a3d46698-1b04-4df5-a957-0ba432667ada_del complete#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.873 239942 INFO nova.compute.manager [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Took 0.56 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.874 239942 DEBUG oslo.service.loopingcall [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.875 239942 DEBUG nova.compute.manager [-] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.875 239942 DEBUG nova.network.neutron [-] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:58:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:29.975 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:58:29 np0005603435 nova_compute[239938]: 2026-01-31 04:58:29.975 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:29 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:29.977 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.300 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.306 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Updating instance_info_cache with network_info: [{"id": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "address": "fa:16:3e:38:18:ff", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee2f2be-ab", "ovs_interfaceid": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.329 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Releasing lock "refresh_cache-5c1cf313-39cd-420b-98f1-026da341b273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.330 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.330 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.556 239942 DEBUG nova.network.neutron [-] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.582 239942 INFO nova.compute.manager [-] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Took 0.71 seconds to deallocate network for instance.#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.620 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.815 239942 DEBUG nova.compute.manager [req-6283923f-526a-4fb3-9785-e0e9cb387533 req-b7d061a6-40df-4a60-a664-3e4a04d8d1a9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Received event network-vif-plugged-39e41855-7c54-477d-957b-aa769bd16f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.816 239942 DEBUG oslo_concurrency.lockutils [req-6283923f-526a-4fb3-9785-e0e9cb387533 req-b7d061a6-40df-4a60-a664-3e4a04d8d1a9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.816 239942 DEBUG oslo_concurrency.lockutils [req-6283923f-526a-4fb3-9785-e0e9cb387533 req-b7d061a6-40df-4a60-a664-3e4a04d8d1a9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.816 239942 DEBUG oslo_concurrency.lockutils [req-6283923f-526a-4fb3-9785-e0e9cb387533 req-b7d061a6-40df-4a60-a664-3e4a04d8d1a9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.817 239942 DEBUG nova.compute.manager [req-6283923f-526a-4fb3-9785-e0e9cb387533 req-b7d061a6-40df-4a60-a664-3e4a04d8d1a9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] No waiting events found dispatching network-vif-plugged-39e41855-7c54-477d-957b-aa769bd16f60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.817 239942 WARNING nova.compute.manager [req-6283923f-526a-4fb3-9785-e0e9cb387533 req-b7d061a6-40df-4a60-a664-3e4a04d8d1a9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Received unexpected event network-vif-plugged-39e41855-7c54-477d-957b-aa769bd16f60 for instance with vm_state active and task_state None.#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.818 239942 DEBUG nova.compute.manager [req-6283923f-526a-4fb3-9785-e0e9cb387533 req-b7d061a6-40df-4a60-a664-3e4a04d8d1a9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Received event network-changed-d8f56e56-02d6-43e2-afae-1f5610a67fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.818 239942 DEBUG nova.compute.manager [req-6283923f-526a-4fb3-9785-e0e9cb387533 req-b7d061a6-40df-4a60-a664-3e4a04d8d1a9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Refreshing instance network info cache due to event network-changed-d8f56e56-02d6-43e2-afae-1f5610a67fb9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.819 239942 DEBUG oslo_concurrency.lockutils [req-6283923f-526a-4fb3-9785-e0e9cb387533 req-b7d061a6-40df-4a60-a664-3e4a04d8d1a9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-a3d46698-1b04-4df5-a957-0ba432667ada" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.819 239942 DEBUG oslo_concurrency.lockutils [req-6283923f-526a-4fb3-9785-e0e9cb387533 req-b7d061a6-40df-4a60-a664-3e4a04d8d1a9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-a3d46698-1b04-4df5-a957-0ba432667ada" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.819 239942 DEBUG nova.network.neutron [req-6283923f-526a-4fb3-9785-e0e9cb387533 req-b7d061a6-40df-4a60-a664-3e4a04d8d1a9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Refreshing network info cache for port d8f56e56-02d6-43e2-afae-1f5610a67fb9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.822 239942 INFO nova.compute.manager [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Took 0.24 seconds to detach 1 volumes for instance.#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.870 239942 DEBUG oslo_concurrency.lockutils [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.871 239942 DEBUG oslo_concurrency.lockutils [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.957 239942 DEBUG oslo_concurrency.processutils [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:58:30 np0005603435 nova_compute[239938]: 2026-01-31 04:58:30.974 239942 DEBUG nova.network.neutron [req-6283923f-526a-4fb3-9785-e0e9cb387533 req-b7d061a6-40df-4a60-a664-3e4a04d8d1a9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 30 23:58:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:58:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4275269989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:58:31 np0005603435 nova_compute[239938]: 2026-01-31 04:58:31.470 239942 DEBUG oslo_concurrency.processutils [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:58:31 np0005603435 nova_compute[239938]: 2026-01-31 04:58:31.477 239942 DEBUG nova.compute.provider_tree [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:58:31 np0005603435 nova_compute[239938]: 2026-01-31 04:58:31.497 239942 DEBUG nova.scheduler.client.report [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:58:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 237 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 476 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Jan 30 23:58:31 np0005603435 nova_compute[239938]: 2026-01-31 04:58:31.516 239942 DEBUG oslo_concurrency.lockutils [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:31 np0005603435 nova_compute[239938]: 2026-01-31 04:58:31.539 239942 INFO nova.scheduler.client.report [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Deleted allocations for instance a3d46698-1b04-4df5-a957-0ba432667ada#033[00m
Jan 30 23:58:31 np0005603435 nova_compute[239938]: 2026-01-31 04:58:31.594 239942 DEBUG oslo_concurrency.lockutils [None req-1b546c40-53f5-491a-bc2b-9f38590319c0 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "a3d46698-1b04-4df5-a957-0ba432667ada" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.285s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:31 np0005603435 nova_compute[239938]: 2026-01-31 04:58:31.647 239942 DEBUG nova.network.neutron [req-6283923f-526a-4fb3-9785-e0e9cb387533 req-b7d061a6-40df-4a60-a664-3e4a04d8d1a9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:58:31 np0005603435 nova_compute[239938]: 2026-01-31 04:58:31.663 239942 DEBUG oslo_concurrency.lockutils [req-6283923f-526a-4fb3-9785-e0e9cb387533 req-b7d061a6-40df-4a60-a664-3e4a04d8d1a9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-a3d46698-1b04-4df5-a957-0ba432667ada" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:58:31 np0005603435 nova_compute[239938]: 2026-01-31 04:58:31.742 239942 DEBUG nova.compute.manager [req-941c1fec-0b4c-4f28-b6dd-1af498454ff3 req-0e2136fc-68e6-475b-8f93-de2e2181bea6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Received event network-vif-plugged-d8f56e56-02d6-43e2-afae-1f5610a67fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:58:31 np0005603435 nova_compute[239938]: 2026-01-31 04:58:31.742 239942 DEBUG oslo_concurrency.lockutils [req-941c1fec-0b4c-4f28-b6dd-1af498454ff3 req-0e2136fc-68e6-475b-8f93-de2e2181bea6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:31 np0005603435 nova_compute[239938]: 2026-01-31 04:58:31.743 239942 DEBUG oslo_concurrency.lockutils [req-941c1fec-0b4c-4f28-b6dd-1af498454ff3 req-0e2136fc-68e6-475b-8f93-de2e2181bea6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:31 np0005603435 nova_compute[239938]: 2026-01-31 04:58:31.743 239942 DEBUG oslo_concurrency.lockutils [req-941c1fec-0b4c-4f28-b6dd-1af498454ff3 req-0e2136fc-68e6-475b-8f93-de2e2181bea6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "a3d46698-1b04-4df5-a957-0ba432667ada-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:31 np0005603435 nova_compute[239938]: 2026-01-31 04:58:31.743 239942 DEBUG nova.compute.manager [req-941c1fec-0b4c-4f28-b6dd-1af498454ff3 req-0e2136fc-68e6-475b-8f93-de2e2181bea6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] No waiting events found dispatching network-vif-plugged-d8f56e56-02d6-43e2-afae-1f5610a67fb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:58:31 np0005603435 nova_compute[239938]: 2026-01-31 04:58:31.743 239942 WARNING nova.compute.manager [req-941c1fec-0b4c-4f28-b6dd-1af498454ff3 req-0e2136fc-68e6-475b-8f93-de2e2181bea6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Received unexpected event network-vif-plugged-d8f56e56-02d6-43e2-afae-1f5610a67fb9 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:58:31 np0005603435 nova_compute[239938]: 2026-01-31 04:58:31.744 239942 DEBUG nova.compute.manager [req-941c1fec-0b4c-4f28-b6dd-1af498454ff3 req-0e2136fc-68e6-475b-8f93-de2e2181bea6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Received event network-vif-deleted-d8f56e56-02d6-43e2-afae-1f5610a67fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:58:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:58:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/249938104' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:58:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:58:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/249938104' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:58:32 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:32.980 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:58:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:58:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/956410111' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:58:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:58:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/956410111' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:58:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:58:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 237 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.7 MiB/s wr, 145 op/s
Jan 30 23:58:33 np0005603435 nova_compute[239938]: 2026-01-31 04:58:33.874 239942 DEBUG nova.compute.manager [req-14aa5977-d9eb-4352-b1d1-2c149f136941 req-8bb5c527-aca1-4c91-b8c6-f6805ab7062e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Received event network-changed-39e41855-7c54-477d-957b-aa769bd16f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:58:33 np0005603435 nova_compute[239938]: 2026-01-31 04:58:33.874 239942 DEBUG nova.compute.manager [req-14aa5977-d9eb-4352-b1d1-2c149f136941 req-8bb5c527-aca1-4c91-b8c6-f6805ab7062e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Refreshing instance network info cache due to event network-changed-39e41855-7c54-477d-957b-aa769bd16f60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:58:33 np0005603435 nova_compute[239938]: 2026-01-31 04:58:33.874 239942 DEBUG oslo_concurrency.lockutils [req-14aa5977-d9eb-4352-b1d1-2c149f136941 req-8bb5c527-aca1-4c91-b8c6-f6805ab7062e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-e718387a-7f1c-476e-a53d-69bf63413c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:58:33 np0005603435 nova_compute[239938]: 2026-01-31 04:58:33.875 239942 DEBUG oslo_concurrency.lockutils [req-14aa5977-d9eb-4352-b1d1-2c149f136941 req-8bb5c527-aca1-4c91-b8c6-f6805ab7062e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-e718387a-7f1c-476e-a53d-69bf63413c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:58:33 np0005603435 nova_compute[239938]: 2026-01-31 04:58:33.875 239942 DEBUG nova.network.neutron [req-14aa5977-d9eb-4352-b1d1-2c149f136941 req-8bb5c527-aca1-4c91-b8c6-f6805ab7062e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Refreshing network info cache for port 39e41855-7c54-477d-957b-aa769bd16f60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:58:34 np0005603435 podman[269366]: 2026-01-31 04:58:34.11895511 +0000 UTC m=+0.073530536 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 30 23:58:34 np0005603435 podman[269367]: 2026-01-31 04:58:34.161893938 +0000 UTC m=+0.113895602 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:58:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Jan 30 23:58:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Jan 30 23:58:34 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Jan 30 23:58:34 np0005603435 nova_compute[239938]: 2026-01-31 04:58:34.629 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:35 np0005603435 nova_compute[239938]: 2026-01-31 04:58:35.355 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 226 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 23 KiB/s wr, 153 op/s
Jan 30 23:58:35 np0005603435 nova_compute[239938]: 2026-01-31 04:58:35.882 239942 DEBUG nova.compute.manager [req-648c29cd-9d59-412a-8728-9a93111e3db9 req-6bc1487d-17e3-4512-8a2c-5a4eb36a20d6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Received event network-changed-3ee2f2be-ab08-486b-9003-3c2f0b450b03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:58:35 np0005603435 nova_compute[239938]: 2026-01-31 04:58:35.882 239942 DEBUG nova.compute.manager [req-648c29cd-9d59-412a-8728-9a93111e3db9 req-6bc1487d-17e3-4512-8a2c-5a4eb36a20d6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Refreshing instance network info cache due to event network-changed-3ee2f2be-ab08-486b-9003-3c2f0b450b03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 30 23:58:35 np0005603435 nova_compute[239938]: 2026-01-31 04:58:35.883 239942 DEBUG oslo_concurrency.lockutils [req-648c29cd-9d59-412a-8728-9a93111e3db9 req-6bc1487d-17e3-4512-8a2c-5a4eb36a20d6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-5c1cf313-39cd-420b-98f1-026da341b273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:58:35 np0005603435 nova_compute[239938]: 2026-01-31 04:58:35.883 239942 DEBUG oslo_concurrency.lockutils [req-648c29cd-9d59-412a-8728-9a93111e3db9 req-6bc1487d-17e3-4512-8a2c-5a4eb36a20d6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-5c1cf313-39cd-420b-98f1-026da341b273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:58:35 np0005603435 nova_compute[239938]: 2026-01-31 04:58:35.884 239942 DEBUG nova.network.neutron [req-648c29cd-9d59-412a-8728-9a93111e3db9 req-6bc1487d-17e3-4512-8a2c-5a4eb36a20d6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Refreshing network info cache for port 3ee2f2be-ab08-486b-9003-3c2f0b450b03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 30 23:58:35 np0005603435 nova_compute[239938]: 2026-01-31 04:58:35.913 239942 DEBUG nova.network.neutron [req-14aa5977-d9eb-4352-b1d1-2c149f136941 req-8bb5c527-aca1-4c91-b8c6-f6805ab7062e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Updated VIF entry in instance network info cache for port 39e41855-7c54-477d-957b-aa769bd16f60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:58:35 np0005603435 nova_compute[239938]: 2026-01-31 04:58:35.914 239942 DEBUG nova.network.neutron [req-14aa5977-d9eb-4352-b1d1-2c149f136941 req-8bb5c527-aca1-4c91-b8c6-f6805ab7062e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Updating instance_info_cache with network_info: [{"id": "39e41855-7c54-477d-957b-aa769bd16f60", "address": "fa:16:3e:7d:c2:ab", "network": {"id": "dfbf01be-0e13-4ab0-b168-f61a3eca460e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-390049787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ae37c02aa74bf084cd851f4b233192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39e41855-7c", "ovs_interfaceid": "39e41855-7c54-477d-957b-aa769bd16f60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:58:35 np0005603435 nova_compute[239938]: 2026-01-31 04:58:35.942 239942 DEBUG oslo_concurrency.lockutils [req-14aa5977-d9eb-4352-b1d1-2c149f136941 req-8bb5c527-aca1-4c91-b8c6-f6805ab7062e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-e718387a-7f1c-476e-a53d-69bf63413c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.000 239942 DEBUG oslo_concurrency.lockutils [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "5c1cf313-39cd-420b-98f1-026da341b273" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.001 239942 DEBUG oslo_concurrency.lockutils [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "5c1cf313-39cd-420b-98f1-026da341b273" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.002 239942 DEBUG oslo_concurrency.lockutils [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "5c1cf313-39cd-420b-98f1-026da341b273-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.002 239942 DEBUG oslo_concurrency.lockutils [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "5c1cf313-39cd-420b-98f1-026da341b273-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.002 239942 DEBUG oslo_concurrency.lockutils [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "5c1cf313-39cd-420b-98f1-026da341b273-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.004 239942 INFO nova.compute.manager [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Terminating instance#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.006 239942 DEBUG nova.compute.manager [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:58:36 np0005603435 kernel: tap3ee2f2be-ab (unregistering): left promiscuous mode
Jan 30 23:58:36 np0005603435 NetworkManager[49097]: <info>  [1769835516.0609] device (tap3ee2f2be-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.070 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:36 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:36Z|00242|binding|INFO|Releasing lport 3ee2f2be-ab08-486b-9003-3c2f0b450b03 from this chassis (sb_readonly=0)
Jan 30 23:58:36 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:36Z|00243|binding|INFO|Setting lport 3ee2f2be-ab08-486b-9003-3c2f0b450b03 down in Southbound
Jan 30 23:58:36 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:36Z|00244|binding|INFO|Removing iface tap3ee2f2be-ab ovn-installed in OVS
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.074 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.083 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:36.087 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:18:ff 10.100.0.8'], port_security=['fa:16:3e:38:18:ff 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5c1cf313-39cd-420b-98f1-026da341b273', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e332802dd6cf49c59f8ed38e70addb0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5925722f-3c3e-42bd-9802-ef7105d62a1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02525b3d-5afa-441f-ab06-d5abe31dc4af, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=3ee2f2be-ab08-486b-9003-3c2f0b450b03) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:58:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:36.088 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 3ee2f2be-ab08-486b-9003-3c2f0b450b03 in datapath 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 unbound from our chassis#033[00m
Jan 30 23:58:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:36.092 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:58:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:36.097 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc4448a-aec6-4940-9f59-bc9daf8c7f24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:36.098 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 namespace which is not needed anymore#033[00m
Jan 30 23:58:36 np0005603435 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Deactivated successfully.
Jan 30 23:58:36 np0005603435 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Consumed 15.130s CPU time.
Jan 30 23:58:36 np0005603435 systemd-machined[208030]: Machine qemu-23-instance-00000017 terminated.
Jan 30 23:58:36 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[267673]: [NOTICE]   (267679) : haproxy version is 2.8.14-c23fe91
Jan 30 23:58:36 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[267673]: [NOTICE]   (267679) : path to executable is /usr/sbin/haproxy
Jan 30 23:58:36 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[267673]: [WARNING]  (267679) : Exiting Master process...
Jan 30 23:58:36 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[267673]: [ALERT]    (267679) : Current worker (267681) exited with code 143 (Terminated)
Jan 30 23:58:36 np0005603435 neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3[267673]: [WARNING]  (267679) : All workers exited. Exiting... (0)
Jan 30 23:58:36 np0005603435 systemd[1]: libpod-9d8f3ce002400f432c8424e0dbe121bada67ff9caaf03771a5893ee7a20e2bac.scope: Deactivated successfully.
Jan 30 23:58:36 np0005603435 podman[269435]: 2026-01-31 04:58:36.241582292 +0000 UTC m=+0.055313861 container died 9d8f3ce002400f432c8424e0dbe121bada67ff9caaf03771a5893ee7a20e2bac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.241 239942 INFO nova.virt.libvirt.driver [-] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Instance destroyed successfully.#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.242 239942 DEBUG nova.objects.instance [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lazy-loading 'resources' on Instance uuid 5c1cf313-39cd-420b-98f1-026da341b273 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.265 239942 DEBUG nova.virt.libvirt.vif [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:56:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-863134085',display_name='tempest-TestVolumeBootPattern-server-863134085',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-863134085',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMCGQWsMIpUReejiJa4LLn2uTMRcPNVUKy3r7lp0BAh1r0nLhjEfcHskPuueezEtVAWbrIlq/WV3PYQ0vKGreYOPxpY3Xnz3OjrpOhX/Q6AIWXZTJpS2jBEA3mt0kVgrg==',key_name='tempest-TestVolumeBootPattern-1354425942',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:57:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e332802dd6cf49c59f8ed38e70addb0e',ramdisk_id='',reservation_id='r-9txe0qqi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1782423025',owner_user_name='tempest-TestVolumeBootPattern-1782423025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:57:00Z,user_data=None,user_id='e10f13b98624406985dec6a5dcc391c7',uuid=5c1cf313-39cd-420b-98f1-026da341b273,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "address": "fa:16:3e:38:18:ff", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee2f2be-ab", "ovs_interfaceid": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.266 239942 DEBUG nova.network.os_vif_util [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converting VIF {"id": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "address": "fa:16:3e:38:18:ff", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee2f2be-ab", "ovs_interfaceid": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.267 239942 DEBUG nova.network.os_vif_util [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:38:18:ff,bridge_name='br-int',has_traffic_filtering=True,id=3ee2f2be-ab08-486b-9003-3c2f0b450b03,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ee2f2be-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.267 239942 DEBUG os_vif [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:18:ff,bridge_name='br-int',has_traffic_filtering=True,id=3ee2f2be-ab08-486b-9003-3c2f0b450b03,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ee2f2be-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.268 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.269 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ee2f2be-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.270 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.273 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:58:36 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d8f3ce002400f432c8424e0dbe121bada67ff9caaf03771a5893ee7a20e2bac-userdata-shm.mount: Deactivated successfully.
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.275 239942 INFO os_vif [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:18:ff,bridge_name='br-int',has_traffic_filtering=True,id=3ee2f2be-ab08-486b-9003-3c2f0b450b03,network=Network(5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ee2f2be-ab')#033[00m
Jan 30 23:58:36 np0005603435 systemd[1]: var-lib-containers-storage-overlay-e9e232f73622aae3106fc8f0213a84e0d9c927b0df867c8265b435ace9039bcb-merged.mount: Deactivated successfully.
Jan 30 23:58:36 np0005603435 podman[269435]: 2026-01-31 04:58:36.288489438 +0000 UTC m=+0.102220997 container cleanup 9d8f3ce002400f432c8424e0dbe121bada67ff9caaf03771a5893ee7a20e2bac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 30 23:58:36 np0005603435 systemd[1]: libpod-conmon-9d8f3ce002400f432c8424e0dbe121bada67ff9caaf03771a5893ee7a20e2bac.scope: Deactivated successfully.
Jan 30 23:58:36 np0005603435 podman[269488]: 2026-01-31 04:58:36.366551894 +0000 UTC m=+0.055277401 container remove 9d8f3ce002400f432c8424e0dbe121bada67ff9caaf03771a5893ee7a20e2bac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:58:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:36.371 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c8dd7c14-f2e1-4f51-bef3-f0f9393f6c60]: (4, ('Sat Jan 31 04:58:36 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 (9d8f3ce002400f432c8424e0dbe121bada67ff9caaf03771a5893ee7a20e2bac)\n9d8f3ce002400f432c8424e0dbe121bada67ff9caaf03771a5893ee7a20e2bac\nSat Jan 31 04:58:36 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 (9d8f3ce002400f432c8424e0dbe121bada67ff9caaf03771a5893ee7a20e2bac)\n9d8f3ce002400f432c8424e0dbe121bada67ff9caaf03771a5893ee7a20e2bac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:36.374 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b472c2-4e32-4539-adef-92f9237db0c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:36.376 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b0cf2db-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:58:36 np0005603435 kernel: tap5b0cf2db-20: left promiscuous mode
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.410 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:36.422 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6631f0b8-49b7-4c32-8e2e-77afb0db9b68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:36.436 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[0e649c78-67d5-42a8-ba6a-8633832f2bb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:36.442 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[bed8f8e0-49ce-46e5-af02-7b503cdac709]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:36.458 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[87bc3413-30dc-45e9-a6a4-19f2c6c56811]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447437, 'reachable_time': 41727, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269507, 'error': None, 'target': 'ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:36 np0005603435 systemd[1]: run-netns-ovnmeta\x2d5b0cf2db\x2d2e35\x2d41fa\x2d9783\x2d30f0fe6ea7a3.mount: Deactivated successfully.
Jan 30 23:58:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:36.461 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:58:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:36.461 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a11d21-d8b5-4819-876f-43aeaaff233b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.489 239942 INFO nova.virt.libvirt.driver [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Deleting instance files /var/lib/nova/instances/5c1cf313-39cd-420b-98f1-026da341b273_del#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.490 239942 INFO nova.virt.libvirt.driver [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Deletion of /var/lib/nova/instances/5c1cf313-39cd-420b-98f1-026da341b273_del complete#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.545 239942 INFO nova.compute.manager [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Took 0.54 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.545 239942 DEBUG oslo.service.loopingcall [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.546 239942 DEBUG nova.compute.manager [-] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:58:36 np0005603435 nova_compute[239938]: 2026-01-31 04:58:36.546 239942 DEBUG nova.network.neutron [-] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:58:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:58:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:58:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:58:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:58:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:58:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.359 239942 DEBUG nova.network.neutron [-] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.375 239942 INFO nova.compute.manager [-] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Took 0.83 seconds to deallocate network for instance.#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.459 239942 DEBUG nova.compute.manager [req-8c14b5e6-248d-475d-ab72-262a434d2f24 req-cc1edc4e-a1c5-46ad-8a1c-d25d044da423 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Received event network-vif-deleted-3ee2f2be-ab08-486b-9003-3c2f0b450b03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:58:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 215 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 22 KiB/s wr, 166 op/s
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.532 239942 DEBUG nova.network.neutron [req-648c29cd-9d59-412a-8728-9a93111e3db9 req-6bc1487d-17e3-4512-8a2c-5a4eb36a20d6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Updated VIF entry in instance network info cache for port 3ee2f2be-ab08-486b-9003-3c2f0b450b03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.533 239942 DEBUG nova.network.neutron [req-648c29cd-9d59-412a-8728-9a93111e3db9 req-6bc1487d-17e3-4512-8a2c-5a4eb36a20d6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Updating instance_info_cache with network_info: [{"id": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "address": "fa:16:3e:38:18:ff", "network": {"id": "5b0cf2db-2e35-41fa-9783-30f0fe6ea7a3", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1899598029-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e332802dd6cf49c59f8ed38e70addb0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ee2f2be-ab", "ovs_interfaceid": "3ee2f2be-ab08-486b-9003-3c2f0b450b03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.562 239942 DEBUG oslo_concurrency.lockutils [req-648c29cd-9d59-412a-8728-9a93111e3db9 req-6bc1487d-17e3-4512-8a2c-5a4eb36a20d6 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-5c1cf313-39cd-420b-98f1-026da341b273" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.641 239942 INFO nova.compute.manager [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Took 0.27 seconds to detach 1 volumes for instance.#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.689 239942 DEBUG oslo_concurrency.lockutils [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.690 239942 DEBUG oslo_concurrency.lockutils [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.777 239942 DEBUG oslo_concurrency.processutils [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.956 239942 DEBUG nova.compute.manager [req-8bf7572d-1f3c-4d17-bc12-8a9cf8edbe73 req-1f99e6c4-c1f1-403d-baf0-f1fc1bc24166 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Received event network-vif-unplugged-3ee2f2be-ab08-486b-9003-3c2f0b450b03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.956 239942 DEBUG oslo_concurrency.lockutils [req-8bf7572d-1f3c-4d17-bc12-8a9cf8edbe73 req-1f99e6c4-c1f1-403d-baf0-f1fc1bc24166 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "5c1cf313-39cd-420b-98f1-026da341b273-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.957 239942 DEBUG oslo_concurrency.lockutils [req-8bf7572d-1f3c-4d17-bc12-8a9cf8edbe73 req-1f99e6c4-c1f1-403d-baf0-f1fc1bc24166 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "5c1cf313-39cd-420b-98f1-026da341b273-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.957 239942 DEBUG oslo_concurrency.lockutils [req-8bf7572d-1f3c-4d17-bc12-8a9cf8edbe73 req-1f99e6c4-c1f1-403d-baf0-f1fc1bc24166 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "5c1cf313-39cd-420b-98f1-026da341b273-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.958 239942 DEBUG nova.compute.manager [req-8bf7572d-1f3c-4d17-bc12-8a9cf8edbe73 req-1f99e6c4-c1f1-403d-baf0-f1fc1bc24166 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] No waiting events found dispatching network-vif-unplugged-3ee2f2be-ab08-486b-9003-3c2f0b450b03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.958 239942 WARNING nova.compute.manager [req-8bf7572d-1f3c-4d17-bc12-8a9cf8edbe73 req-1f99e6c4-c1f1-403d-baf0-f1fc1bc24166 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Received unexpected event network-vif-unplugged-3ee2f2be-ab08-486b-9003-3c2f0b450b03 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.959 239942 DEBUG nova.compute.manager [req-8bf7572d-1f3c-4d17-bc12-8a9cf8edbe73 req-1f99e6c4-c1f1-403d-baf0-f1fc1bc24166 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Received event network-vif-plugged-3ee2f2be-ab08-486b-9003-3c2f0b450b03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.959 239942 DEBUG oslo_concurrency.lockutils [req-8bf7572d-1f3c-4d17-bc12-8a9cf8edbe73 req-1f99e6c4-c1f1-403d-baf0-f1fc1bc24166 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "5c1cf313-39cd-420b-98f1-026da341b273-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.959 239942 DEBUG oslo_concurrency.lockutils [req-8bf7572d-1f3c-4d17-bc12-8a9cf8edbe73 req-1f99e6c4-c1f1-403d-baf0-f1fc1bc24166 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "5c1cf313-39cd-420b-98f1-026da341b273-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.960 239942 DEBUG oslo_concurrency.lockutils [req-8bf7572d-1f3c-4d17-bc12-8a9cf8edbe73 req-1f99e6c4-c1f1-403d-baf0-f1fc1bc24166 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "5c1cf313-39cd-420b-98f1-026da341b273-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.960 239942 DEBUG nova.compute.manager [req-8bf7572d-1f3c-4d17-bc12-8a9cf8edbe73 req-1f99e6c4-c1f1-403d-baf0-f1fc1bc24166 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] No waiting events found dispatching network-vif-plugged-3ee2f2be-ab08-486b-9003-3c2f0b450b03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:58:37 np0005603435 nova_compute[239938]: 2026-01-31 04:58:37.961 239942 WARNING nova.compute.manager [req-8bf7572d-1f3c-4d17-bc12-8a9cf8edbe73 req-1f99e6c4-c1f1-403d-baf0-f1fc1bc24166 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Received unexpected event network-vif-plugged-3ee2f2be-ab08-486b-9003-3c2f0b450b03 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:58:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:58:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4178443641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:58:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:58:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Jan 30 23:58:38 np0005603435 nova_compute[239938]: 2026-01-31 04:58:38.277 239942 DEBUG oslo_concurrency.processutils [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:58:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Jan 30 23:58:38 np0005603435 nova_compute[239938]: 2026-01-31 04:58:38.284 239942 DEBUG nova.compute.provider_tree [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:58:38 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Jan 30 23:58:38 np0005603435 nova_compute[239938]: 2026-01-31 04:58:38.300 239942 DEBUG nova.scheduler.client.report [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:58:38 np0005603435 nova_compute[239938]: 2026-01-31 04:58:38.324 239942 DEBUG oslo_concurrency.lockutils [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:38 np0005603435 nova_compute[239938]: 2026-01-31 04:58:38.365 239942 INFO nova.scheduler.client.report [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Deleted allocations for instance 5c1cf313-39cd-420b-98f1-026da341b273#033[00m
Jan 30 23:58:38 np0005603435 nova_compute[239938]: 2026-01-31 04:58:38.439 239942 DEBUG oslo_concurrency.lockutils [None req-f23a210a-0d53-4c16-8d74-af274242a4e6 e10f13b98624406985dec6a5dcc391c7 e332802dd6cf49c59f8ed38e70addb0e - - default default] Lock "5c1cf313-39cd-420b-98f1-026da341b273" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 215 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.0 KiB/s wr, 168 op/s
Jan 30 23:58:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Jan 30 23:58:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Jan 30 23:58:39 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Jan 30 23:58:39 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:39Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:c2:ab 10.100.0.13
Jan 30 23:58:39 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:39Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:c2:ab 10.100.0.13
Jan 30 23:58:40 np0005603435 nova_compute[239938]: 2026-01-31 04:58:40.357 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:58:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1014787322' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:58:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:58:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1014787322' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:58:41 np0005603435 nova_compute[239938]: 2026-01-31 04:58:41.316 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 217 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 679 KiB/s wr, 101 op/s
Jan 30 23:58:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Jan 30 23:58:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Jan 30 23:58:41 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Jan 30 23:58:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:58:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1440968122' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:58:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:58:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1440968122' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:58:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:58:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 180 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 654 KiB/s rd, 4.2 MiB/s wr, 187 op/s
Jan 30 23:58:44 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:44Z|00245|binding|INFO|Releasing lport 66be847d-0d9f-4fd7-af2c-41561dc2a66f from this chassis (sb_readonly=0)
Jan 30 23:58:44 np0005603435 nova_compute[239938]: 2026-01-31 04:58:44.467 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:44 np0005603435 nova_compute[239938]: 2026-01-31 04:58:44.548 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835509.5472121, a3d46698-1b04-4df5-a957-0ba432667ada => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:58:44 np0005603435 nova_compute[239938]: 2026-01-31 04:58:44.549 239942 INFO nova.compute.manager [-] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:58:44 np0005603435 nova_compute[239938]: 2026-01-31 04:58:44.568 239942 DEBUG nova.compute.manager [None req-eeb28766-d069-457c-a893-47f42794c96a - - - - - -] [instance: a3d46698-1b04-4df5-a957-0ba432667ada] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:58:45 np0005603435 nova_compute[239938]: 2026-01-31 04:58:45.360 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 163 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 559 KiB/s rd, 3.5 MiB/s wr, 169 op/s
Jan 30 23:58:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Jan 30 23:58:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Jan 30 23:58:45 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Jan 30 23:58:46 np0005603435 nova_compute[239938]: 2026-01-31 04:58:46.318 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 574 KiB/s rd, 3.3 MiB/s wr, 205 op/s
Jan 30 23:58:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e434 do_prune osdmap full prune enabled
Jan 30 23:58:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e435 e435: 3 total, 3 up, 3 in
Jan 30 23:58:47 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e435: 3 total, 3 up, 3 in
Jan 30 23:58:48 np0005603435 ovn_controller[145670]: 2026-01-31T04:58:48Z|00246|binding|INFO|Releasing lport 66be847d-0d9f-4fd7-af2c-41561dc2a66f from this chassis (sb_readonly=0)
Jan 30 23:58:48 np0005603435 nova_compute[239938]: 2026-01-31 04:58:48.262 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:58:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e435 do_prune osdmap full prune enabled
Jan 30 23:58:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e436 e436: 3 total, 3 up, 3 in
Jan 30 23:58:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e436: 3 total, 3 up, 3 in
Jan 30 23:58:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:58:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1440009447' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:58:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:58:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1440009447' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:58:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 43 KiB/s wr, 82 op/s
Jan 30 23:58:50 np0005603435 nova_compute[239938]: 2026-01-31 04:58:50.361 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:50 np0005603435 nova_compute[239938]: 2026-01-31 04:58:50.534 239942 DEBUG oslo_concurrency.lockutils [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:50 np0005603435 nova_compute[239938]: 2026-01-31 04:58:50.534 239942 DEBUG oslo_concurrency.lockutils [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:50 np0005603435 nova_compute[239938]: 2026-01-31 04:58:50.550 239942 DEBUG nova.objects.instance [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lazy-loading 'flavor' on Instance uuid e718387a-7f1c-476e-a53d-69bf63413c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:58:50 np0005603435 nova_compute[239938]: 2026-01-31 04:58:50.584 239942 DEBUG oslo_concurrency.lockutils [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:50 np0005603435 nova_compute[239938]: 2026-01-31 04:58:50.988 239942 DEBUG oslo_concurrency.lockutils [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:50 np0005603435 nova_compute[239938]: 2026-01-31 04:58:50.989 239942 DEBUG oslo_concurrency.lockutils [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:50 np0005603435 nova_compute[239938]: 2026-01-31 04:58:50.989 239942 INFO nova.compute.manager [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Attaching volume a58099bc-74ed-42b7-b4fb-9410a7d65128 to /dev/vdb#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.208 239942 DEBUG os_brick.utils [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.209 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.223 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.223 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[aef58a7c-b56d-4861-ab7e-be4258264cb6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.226 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.235 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.236 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[6cae6eab-ddfe-44a2-a4de-a5e07a9b37a3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.237 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.239 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835516.237729, 5c1cf313-39cd-420b-98f1-026da341b273 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.239 239942 INFO nova.compute.manager [-] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.246 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.246 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[7bdf2c95-9f5f-4dfe-acd8-b141a2b23f3f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.247 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[cb0b3abe-050a-45e0-a00e-d9217c161d18]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.248 239942 DEBUG oslo_concurrency.processutils [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.273 239942 DEBUG nova.compute.manager [None req-ea2d9971-bc7d-4a2f-a478-edf414ac750b - - - - - -] [instance: 5c1cf313-39cd-420b-98f1-026da341b273] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.276 239942 DEBUG oslo_concurrency.processutils [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.279 239942 DEBUG os_brick.initiator.connectors.lightos [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.280 239942 DEBUG os_brick.initiator.connectors.lightos [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.280 239942 DEBUG os_brick.initiator.connectors.lightos [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.281 239942 DEBUG os_brick.utils [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.281 239942 DEBUG nova.virt.block_device [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Updating existing volume attachment record: 5e2df284-2ed2-4143-9ebc-8fc9d251e92d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:58:51 np0005603435 nova_compute[239938]: 2026-01-31 04:58:51.320 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 37 KiB/s wr, 73 op/s
Jan 30 23:58:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e436 do_prune osdmap full prune enabled
Jan 30 23:58:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e437 e437: 3 total, 3 up, 3 in
Jan 30 23:58:51 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e437: 3 total, 3 up, 3 in
Jan 30 23:58:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:58:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1249666726' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:58:52 np0005603435 nova_compute[239938]: 2026-01-31 04:58:52.148 239942 DEBUG nova.objects.instance [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lazy-loading 'flavor' on Instance uuid e718387a-7f1c-476e-a53d-69bf63413c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:58:52 np0005603435 nova_compute[239938]: 2026-01-31 04:58:52.170 239942 DEBUG nova.virt.libvirt.driver [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Attempting to attach volume a58099bc-74ed-42b7-b4fb-9410a7d65128 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 30 23:58:52 np0005603435 nova_compute[239938]: 2026-01-31 04:58:52.172 239942 DEBUG nova.virt.libvirt.guest [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] attach device xml: <disk type="network" device="disk">
Jan 30 23:58:52 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:58:52 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-a58099bc-74ed-42b7-b4fb-9410a7d65128">
Jan 30 23:58:52 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:58:52 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:58:52 np0005603435 nova_compute[239938]:  <auth username="openstack">
Jan 30 23:58:52 np0005603435 nova_compute[239938]:    <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:58:52 np0005603435 nova_compute[239938]:  </auth>
Jan 30 23:58:52 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:58:52 np0005603435 nova_compute[239938]:  <serial>a58099bc-74ed-42b7-b4fb-9410a7d65128</serial>
Jan 30 23:58:52 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:58:52 np0005603435 nova_compute[239938]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 30 23:58:52 np0005603435 nova_compute[239938]: 2026-01-31 04:58:52.344 239942 DEBUG nova.virt.libvirt.driver [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:58:52 np0005603435 nova_compute[239938]: 2026-01-31 04:58:52.344 239942 DEBUG nova.virt.libvirt.driver [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:58:52 np0005603435 nova_compute[239938]: 2026-01-31 04:58:52.345 239942 DEBUG nova.virt.libvirt.driver [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:58:52 np0005603435 nova_compute[239938]: 2026-01-31 04:58:52.345 239942 DEBUG nova.virt.libvirt.driver [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No VIF found with MAC fa:16:3e:7d:c2:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:58:52 np0005603435 nova_compute[239938]: 2026-01-31 04:58:52.713 239942 DEBUG oslo_concurrency.lockutils [None req-6fb3b21e-d3b2-4c3a-8dff-a58d1a4d1474 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e437 do_prune osdmap full prune enabled
Jan 30 23:58:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e438 e438: 3 total, 3 up, 3 in
Jan 30 23:58:52 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e438: 3 total, 3 up, 3 in
Jan 30 23:58:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:58:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 167 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.8 KiB/s wr, 50 op/s
Jan 30 23:58:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:58:53 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 25K writes, 88K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 25K writes, 9106 syncs, 2.76 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 42K keys, 12K commit groups, 1.0 writes per commit group, ingest: 33.88 MB, 0.06 MB/s#012Interval WAL: 12K writes, 5442 syncs, 2.37 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 30 23:58:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:58:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/19813957' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:58:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:58:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/19813957' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:58:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e438 do_prune osdmap full prune enabled
Jan 30 23:58:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e439 e439: 3 total, 3 up, 3 in
Jan 30 23:58:54 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e439: 3 total, 3 up, 3 in
Jan 30 23:58:55 np0005603435 nova_compute[239938]: 2026-01-31 04:58:55.406 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 168 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 21 KiB/s wr, 51 op/s
Jan 30 23:58:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e439 do_prune osdmap full prune enabled
Jan 30 23:58:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e440 e440: 3 total, 3 up, 3 in
Jan 30 23:58:55 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e440: 3 total, 3 up, 3 in
Jan 30 23:58:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:55.923 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:58:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:55.923 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:58:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:58:55.923 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:58:56 np0005603435 nova_compute[239938]: 2026-01-31 04:58:56.323 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:58:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:58:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2353349043' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:58:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:58:56 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2353349043' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:58:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e440 do_prune osdmap full prune enabled
Jan 30 23:58:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e441 e441: 3 total, 3 up, 3 in
Jan 30 23:58:56 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e441: 3 total, 3 up, 3 in
Jan 30 23:58:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 171 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 216 KiB/s wr, 183 op/s
Jan 30 23:58:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e441 do_prune osdmap full prune enabled
Jan 30 23:58:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e442 e442: 3 total, 3 up, 3 in
Jan 30 23:58:57 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e442: 3 total, 3 up, 3 in
Jan 30 23:58:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:58:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2239294800' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:58:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:58:58 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2239294800' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:58:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e442 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:58:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e442 do_prune osdmap full prune enabled
Jan 30 23:58:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e443 e443: 3 total, 3 up, 3 in
Jan 30 23:58:58 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e443: 3 total, 3 up, 3 in
Jan 30 23:58:59 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:58:59 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.7 total, 600.0 interval#012Cumulative writes: 26K writes, 94K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 26K writes, 9437 syncs, 2.83 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 38K keys, 12K commit groups, 1.0 writes per commit group, ingest: 28.12 MB, 0.05 MB/s#012Interval WAL: 12K writes, 5299 syncs, 2.32 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 30 23:58:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:58:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/992898762' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:58:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:58:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/992898762' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:58:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 171 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 228 KiB/s wr, 213 op/s
Jan 30 23:58:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:58:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2180645968' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:58:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:58:59 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2180645968' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:58:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e443 do_prune osdmap full prune enabled
Jan 30 23:58:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e444 e444: 3 total, 3 up, 3 in
Jan 30 23:58:59 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e444: 3 total, 3 up, 3 in
Jan 30 23:59:00 np0005603435 nova_compute[239938]: 2026-01-31 04:59:00.450 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:00 np0005603435 nova_compute[239938]: 2026-01-31 04:59:00.786 239942 DEBUG oslo_concurrency.lockutils [None req-dbc88dac-b394-4739-bac5-f8937c88e887 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:00 np0005603435 nova_compute[239938]: 2026-01-31 04:59:00.786 239942 DEBUG oslo_concurrency.lockutils [None req-dbc88dac-b394-4739-bac5-f8937c88e887 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:59:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/338390030' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:59:00 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:59:00 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/338390030' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:59:00 np0005603435 nova_compute[239938]: 2026-01-31 04:59:00.822 239942 INFO nova.compute.manager [None req-dbc88dac-b394-4739-bac5-f8937c88e887 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Detaching volume a58099bc-74ed-42b7-b4fb-9410a7d65128#033[00m
Jan 30 23:59:00 np0005603435 nova_compute[239938]: 2026-01-31 04:59:00.981 239942 INFO nova.virt.block_device [None req-dbc88dac-b394-4739-bac5-f8937c88e887 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Attempting to driver detach volume a58099bc-74ed-42b7-b4fb-9410a7d65128 from mountpoint /dev/vdb#033[00m
Jan 30 23:59:00 np0005603435 nova_compute[239938]: 2026-01-31 04:59:00.992 239942 DEBUG nova.virt.libvirt.driver [None req-dbc88dac-b394-4739-bac5-f8937c88e887 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Attempting to detach device vdb from instance e718387a-7f1c-476e-a53d-69bf63413c12 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 30 23:59:00 np0005603435 nova_compute[239938]: 2026-01-31 04:59:00.993 239942 DEBUG nova.virt.libvirt.guest [None req-dbc88dac-b394-4739-bac5-f8937c88e887 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:59:00 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:59:00 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-a58099bc-74ed-42b7-b4fb-9410a7d65128">
Jan 30 23:59:00 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:59:00 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:59:00 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:59:00 np0005603435 nova_compute[239938]:  <serial>a58099bc-74ed-42b7-b4fb-9410a7d65128</serial>
Jan 30 23:59:00 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:59:00 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:59:00 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:59:01 np0005603435 nova_compute[239938]: 2026-01-31 04:59:01.004 239942 INFO nova.virt.libvirt.driver [None req-dbc88dac-b394-4739-bac5-f8937c88e887 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Successfully detached device vdb from instance e718387a-7f1c-476e-a53d-69bf63413c12 from the persistent domain config.#033[00m
Jan 30 23:59:01 np0005603435 nova_compute[239938]: 2026-01-31 04:59:01.005 239942 DEBUG nova.virt.libvirt.driver [None req-dbc88dac-b394-4739-bac5-f8937c88e887 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e718387a-7f1c-476e-a53d-69bf63413c12 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 30 23:59:01 np0005603435 nova_compute[239938]: 2026-01-31 04:59:01.006 239942 DEBUG nova.virt.libvirt.guest [None req-dbc88dac-b394-4739-bac5-f8937c88e887 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:59:01 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:59:01 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-a58099bc-74ed-42b7-b4fb-9410a7d65128">
Jan 30 23:59:01 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:59:01 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:59:01 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:59:01 np0005603435 nova_compute[239938]:  <serial>a58099bc-74ed-42b7-b4fb-9410a7d65128</serial>
Jan 30 23:59:01 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:59:01 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:59:01 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:59:01 np0005603435 nova_compute[239938]: 2026-01-31 04:59:01.129 239942 DEBUG nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Received event <DeviceRemovedEvent: 1769835541.1286507, e718387a-7f1c-476e-a53d-69bf63413c12 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 30 23:59:01 np0005603435 nova_compute[239938]: 2026-01-31 04:59:01.130 239942 DEBUG nova.virt.libvirt.driver [None req-dbc88dac-b394-4739-bac5-f8937c88e887 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e718387a-7f1c-476e-a53d-69bf63413c12 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 30 23:59:01 np0005603435 nova_compute[239938]: 2026-01-31 04:59:01.133 239942 INFO nova.virt.libvirt.driver [None req-dbc88dac-b394-4739-bac5-f8937c88e887 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Successfully detached device vdb from instance e718387a-7f1c-476e-a53d-69bf63413c12 from the live domain config.#033[00m
Jan 30 23:59:01 np0005603435 nova_compute[239938]: 2026-01-31 04:59:01.315 239942 DEBUG nova.objects.instance [None req-dbc88dac-b394-4739-bac5-f8937c88e887 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lazy-loading 'flavor' on Instance uuid e718387a-7f1c-476e-a53d-69bf63413c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:59:01 np0005603435 nova_compute[239938]: 2026-01-31 04:59:01.326 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:01 np0005603435 nova_compute[239938]: 2026-01-31 04:59:01.354 239942 DEBUG oslo_concurrency.lockutils [None req-dbc88dac-b394-4739-bac5-f8937c88e887 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:59:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 171 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 58 KiB/s wr, 136 op/s
Jan 30 23:59:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:59:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/952772266' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:59:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:59:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/952772266' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e444 do_prune osdmap full prune enabled
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e445 e445: 3 total, 3 up, 3 in
Jan 30 23:59:02 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e445: 3 total, 3 up, 3 in
Jan 30 23:59:02 np0005603435 podman[269708]: 2026-01-31 04:59:02.98565589 +0000 UTC m=+0.058212202 container create f668f534b06b6bed9d66c00c9094ebfa88b6c4b70da770c87bdfc3dcf6c7147c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_elion, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:59:03 np0005603435 systemd[1]: Started libpod-conmon-f668f534b06b6bed9d66c00c9094ebfa88b6c4b70da770c87bdfc3dcf6c7147c.scope.
Jan 30 23:59:03 np0005603435 podman[269708]: 2026-01-31 04:59:02.961880829 +0000 UTC m=+0.034437211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:59:03 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:59:03 np0005603435 podman[269708]: 2026-01-31 04:59:03.075800711 +0000 UTC m=+0.148357003 container init f668f534b06b6bed9d66c00c9094ebfa88b6c4b70da770c87bdfc3dcf6c7147c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 30 23:59:03 np0005603435 podman[269708]: 2026-01-31 04:59:03.085444386 +0000 UTC m=+0.158000668 container start f668f534b06b6bed9d66c00c9094ebfa88b6c4b70da770c87bdfc3dcf6c7147c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:59:03 np0005603435 podman[269708]: 2026-01-31 04:59:03.0892992 +0000 UTC m=+0.161855472 container attach f668f534b06b6bed9d66c00c9094ebfa88b6c4b70da770c87bdfc3dcf6c7147c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_elion, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 30 23:59:03 np0005603435 confident_elion[269724]: 167 167
Jan 30 23:59:03 np0005603435 systemd[1]: libpod-f668f534b06b6bed9d66c00c9094ebfa88b6c4b70da770c87bdfc3dcf6c7147c.scope: Deactivated successfully.
Jan 30 23:59:03 np0005603435 podman[269708]: 2026-01-31 04:59:03.092806876 +0000 UTC m=+0.165363158 container died f668f534b06b6bed9d66c00c9094ebfa88b6c4b70da770c87bdfc3dcf6c7147c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_elion, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 30 23:59:03 np0005603435 systemd[1]: var-lib-containers-storage-overlay-f8cebb595e3505efca20f72435f17e94ecae340e3996d69836a7e68c4c2ab4fd-merged.mount: Deactivated successfully.
Jan 30 23:59:03 np0005603435 podman[269708]: 2026-01-31 04:59:03.129766528 +0000 UTC m=+0.202322810 container remove f668f534b06b6bed9d66c00c9094ebfa88b6c4b70da770c87bdfc3dcf6c7147c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_elion, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 30 23:59:03 np0005603435 systemd[1]: libpod-conmon-f668f534b06b6bed9d66c00c9094ebfa88b6c4b70da770c87bdfc3dcf6c7147c.scope: Deactivated successfully.
Jan 30 23:59:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e445 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:59:03 np0005603435 podman[269749]: 2026-01-31 04:59:03.330180291 +0000 UTC m=+0.060471137 container create 178d85eb369986f0817ab258879b38d15e07e12348d7a4630039cc60f3cec108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 30 23:59:03 np0005603435 systemd[1]: Started libpod-conmon-178d85eb369986f0817ab258879b38d15e07e12348d7a4630039cc60f3cec108.scope.
Jan 30 23:59:03 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:59:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcb67395a62fcde22949e13a16fd1e873dc890fa41aef6d59612804ade6f3789/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:59:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcb67395a62fcde22949e13a16fd1e873dc890fa41aef6d59612804ade6f3789/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:59:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcb67395a62fcde22949e13a16fd1e873dc890fa41aef6d59612804ade6f3789/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:59:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcb67395a62fcde22949e13a16fd1e873dc890fa41aef6d59612804ade6f3789/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:59:03 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcb67395a62fcde22949e13a16fd1e873dc890fa41aef6d59612804ade6f3789/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 30 23:59:03 np0005603435 podman[269749]: 2026-01-31 04:59:03.305272143 +0000 UTC m=+0.035563059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:59:03 np0005603435 podman[269749]: 2026-01-31 04:59:03.40385942 +0000 UTC m=+0.134150256 container init 178d85eb369986f0817ab258879b38d15e07e12348d7a4630039cc60f3cec108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 30 23:59:03 np0005603435 podman[269749]: 2026-01-31 04:59:03.414711335 +0000 UTC m=+0.145002141 container start 178d85eb369986f0817ab258879b38d15e07e12348d7a4630039cc60f3cec108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_yonath, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 30 23:59:03 np0005603435 podman[269749]: 2026-01-31 04:59:03.418334104 +0000 UTC m=+0.148624960 container attach 178d85eb369986f0817ab258879b38d15e07e12348d7a4630039cc60f3cec108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 30 23:59:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 172 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 175 KiB/s rd, 60 KiB/s wr, 232 op/s
Jan 30 23:59:03 np0005603435 eloquent_yonath[269765]: --> passed data devices: 0 physical, 3 LVM
Jan 30 23:59:03 np0005603435 eloquent_yonath[269765]: --> All data devices are unavailable
Jan 30 23:59:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e445 do_prune osdmap full prune enabled
Jan 30 23:59:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e446 e446: 3 total, 3 up, 3 in
Jan 30 23:59:03 np0005603435 systemd[1]: libpod-178d85eb369986f0817ab258879b38d15e07e12348d7a4630039cc60f3cec108.scope: Deactivated successfully.
Jan 30 23:59:03 np0005603435 podman[269749]: 2026-01-31 04:59:03.906282827 +0000 UTC m=+0.636573663 container died 178d85eb369986f0817ab258879b38d15e07e12348d7a4630039cc60f3cec108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_yonath, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 30 23:59:03 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e446: 3 total, 3 up, 3 in
Jan 30 23:59:03 np0005603435 systemd[1]: var-lib-containers-storage-overlay-bcb67395a62fcde22949e13a16fd1e873dc890fa41aef6d59612804ade6f3789-merged.mount: Deactivated successfully.
Jan 30 23:59:03 np0005603435 podman[269749]: 2026-01-31 04:59:03.958992594 +0000 UTC m=+0.689283410 container remove 178d85eb369986f0817ab258879b38d15e07e12348d7a4630039cc60f3cec108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_yonath, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:59:03 np0005603435 systemd[1]: libpod-conmon-178d85eb369986f0817ab258879b38d15e07e12348d7a4630039cc60f3cec108.scope: Deactivated successfully.
Jan 30 23:59:03 np0005603435 nova_compute[239938]: 2026-01-31 04:59:03.978 239942 DEBUG oslo_concurrency.lockutils [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:03 np0005603435 nova_compute[239938]: 2026-01-31 04:59:03.980 239942 DEBUG oslo_concurrency.lockutils [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:03 np0005603435 nova_compute[239938]: 2026-01-31 04:59:03.995 239942 DEBUG nova.objects.instance [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lazy-loading 'flavor' on Instance uuid e718387a-7f1c-476e-a53d-69bf63413c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.026 239942 DEBUG oslo_concurrency.lockutils [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:59:04 np0005603435 podman[269847]: 2026-01-31 04:59:04.225794048 +0000 UTC m=+0.065822818 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.256 239942 DEBUG oslo_concurrency.lockutils [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.257 239942 DEBUG oslo_concurrency.lockutils [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.257 239942 INFO nova.compute.manager [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Attaching volume a5737f0c-c356-4013-9822-ddd3c9ecef41 to /dev/vdb#033[00m
Jan 30 23:59:04 np0005603435 podman[269867]: 2026-01-31 04:59:04.364365711 +0000 UTC m=+0.111428042 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.418 239942 DEBUG os_brick.utils [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.419 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:59:04 np0005603435 podman[269906]: 2026-01-31 04:59:04.424092249 +0000 UTC m=+0.055606308 container create 368733825aa3aee51bdaca3347be9981d5a612c0fc4137dea9c0be495f94585b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.434 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.435 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5d33e0-0b56-48c1-b7ad-dbad137a1b77]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.436 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.445 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.445 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[2471fa83-668c-4d91-814a-d53909c83096]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.447 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.455 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.456 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[ec64b669-21fa-4cab-8805-ae6095d91b34]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.457 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[6f663fce-3837-454c-861a-3f5258a2530a]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.457 239942 DEBUG oslo_concurrency.processutils [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:59:04 np0005603435 systemd[1]: Started libpod-conmon-368733825aa3aee51bdaca3347be9981d5a612c0fc4137dea9c0be495f94585b.scope.
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.478 239942 DEBUG oslo_concurrency.processutils [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.483 239942 DEBUG os_brick.initiator.connectors.lightos [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.483 239942 DEBUG os_brick.initiator.connectors.lightos [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.484 239942 DEBUG os_brick.initiator.connectors.lightos [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.485 239942 DEBUG os_brick.utils [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 30 23:59:04 np0005603435 nova_compute[239938]: 2026-01-31 04:59:04.485 239942 DEBUG nova.virt.block_device [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Updating existing volume attachment record: f32a9eba-c8b0-490d-9805-eba2ca495216 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 30 23:59:04 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:59:04 np0005603435 podman[269906]: 2026-01-31 04:59:04.406067919 +0000 UTC m=+0.037582008 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:59:04 np0005603435 podman[269906]: 2026-01-31 04:59:04.512783284 +0000 UTC m=+0.144297373 container init 368733825aa3aee51bdaca3347be9981d5a612c0fc4137dea9c0be495f94585b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tharp, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:59:04 np0005603435 podman[269906]: 2026-01-31 04:59:04.521302462 +0000 UTC m=+0.152816551 container start 368733825aa3aee51bdaca3347be9981d5a612c0fc4137dea9c0be495f94585b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tharp, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 30 23:59:04 np0005603435 podman[269906]: 2026-01-31 04:59:04.524806297 +0000 UTC m=+0.156320366 container attach 368733825aa3aee51bdaca3347be9981d5a612c0fc4137dea9c0be495f94585b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 30 23:59:04 np0005603435 great_tharp[269929]: 167 167
Jan 30 23:59:04 np0005603435 systemd[1]: libpod-368733825aa3aee51bdaca3347be9981d5a612c0fc4137dea9c0be495f94585b.scope: Deactivated successfully.
Jan 30 23:59:04 np0005603435 conmon[269929]: conmon 368733825aa3aee51bda <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-368733825aa3aee51bdaca3347be9981d5a612c0fc4137dea9c0be495f94585b.scope/container/memory.events
Jan 30 23:59:04 np0005603435 podman[269906]: 2026-01-31 04:59:04.529248566 +0000 UTC m=+0.160762635 container died 368733825aa3aee51bdaca3347be9981d5a612c0fc4137dea9c0be495f94585b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tharp, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 30 23:59:04 np0005603435 systemd[1]: var-lib-containers-storage-overlay-edbe82264a7040d0f9241272347217f048edca55a6234cdb1196002d7764eb2d-merged.mount: Deactivated successfully.
Jan 30 23:59:04 np0005603435 podman[269906]: 2026-01-31 04:59:04.566344111 +0000 UTC m=+0.197858160 container remove 368733825aa3aee51bdaca3347be9981d5a612c0fc4137dea9c0be495f94585b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 30 23:59:04 np0005603435 systemd[1]: libpod-conmon-368733825aa3aee51bdaca3347be9981d5a612c0fc4137dea9c0be495f94585b.scope: Deactivated successfully.
Jan 30 23:59:04 np0005603435 podman[269951]: 2026-01-31 04:59:04.74447752 +0000 UTC m=+0.047072870 container create a43b8e1b97aea7d57c8c76ca6b9db210ae8106d17a2c8443610cc88e48ea72fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wing, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 30 23:59:04 np0005603435 systemd[1]: Started libpod-conmon-a43b8e1b97aea7d57c8c76ca6b9db210ae8106d17a2c8443610cc88e48ea72fd.scope.
Jan 30 23:59:04 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:59:04 np0005603435 podman[269951]: 2026-01-31 04:59:04.728405548 +0000 UTC m=+0.031000918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:59:04 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0101b01126a79789fd1d3f3be32e33f72f1b639d3b757b64f49d1428c49a135/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:59:04 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0101b01126a79789fd1d3f3be32e33f72f1b639d3b757b64f49d1428c49a135/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:59:04 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0101b01126a79789fd1d3f3be32e33f72f1b639d3b757b64f49d1428c49a135/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:59:04 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0101b01126a79789fd1d3f3be32e33f72f1b639d3b757b64f49d1428c49a135/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:59:04 np0005603435 podman[269951]: 2026-01-31 04:59:04.844541863 +0000 UTC m=+0.147137313 container init a43b8e1b97aea7d57c8c76ca6b9db210ae8106d17a2c8443610cc88e48ea72fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 30 23:59:04 np0005603435 podman[269951]: 2026-01-31 04:59:04.851850922 +0000 UTC m=+0.154446262 container start a43b8e1b97aea7d57c8c76ca6b9db210ae8106d17a2c8443610cc88e48ea72fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wing, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:59:04 np0005603435 podman[269951]: 2026-01-31 04:59:04.8566881 +0000 UTC m=+0.159283540 container attach a43b8e1b97aea7d57c8c76ca6b9db210ae8106d17a2c8443610cc88e48ea72fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:59:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e446 do_prune osdmap full prune enabled
Jan 30 23:59:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e447 e447: 3 total, 3 up, 3 in
Jan 30 23:59:04 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e447: 3 total, 3 up, 3 in
Jan 30 23:59:05 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 30 23:59:05 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 2401.1 total, 600.0 interval
                                              Cumulative writes: 19K writes, 75K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                              Cumulative WAL: 19K writes, 6579 syncs, 2.95 writes per sync, written: 0.06 GB, 0.03 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 9480 writes, 34K keys, 9480 commit groups, 1.0 writes per commit group, ingest: 32.94 MB, 0.05 MB/s
                                              Interval WAL: 9480 writes, 3925 syncs, 2.42 writes per sync, written: 0.03 GB, 0.05 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 30 23:59:05 np0005603435 fervent_wing[269967]: {
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:    "0": [
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:        {
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "devices": [
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "/dev/loop3"
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            ],
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "lv_name": "ceph_lv0",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "lv_size": "21470642176",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "name": "ceph_lv0",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "tags": {
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.cluster_name": "ceph",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.crush_device_class": "",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.encrypted": "0",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.objectstore": "bluestore",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.osd_id": "0",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.type": "block",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.vdo": "0",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.with_tpm": "0"
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            },
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "type": "block",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "vg_name": "ceph_vg0"
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:        }
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:    ],
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:    "1": [
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:        {
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "devices": [
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "/dev/loop4"
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            ],
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "lv_name": "ceph_lv1",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "lv_size": "21470642176",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "name": "ceph_lv1",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "tags": {
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.cluster_name": "ceph",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.crush_device_class": "",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.encrypted": "0",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.objectstore": "bluestore",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.osd_id": "1",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.type": "block",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.vdo": "0",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.with_tpm": "0"
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            },
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "type": "block",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "vg_name": "ceph_vg1"
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:        }
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:    ],
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:    "2": [
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:        {
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "devices": [
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "/dev/loop5"
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            ],
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "lv_name": "ceph_lv2",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "lv_size": "21470642176",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "name": "ceph_lv2",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "tags": {
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.cephx_lockbox_secret": "",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.cluster_name": "ceph",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.crush_device_class": "",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.encrypted": "0",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.objectstore": "bluestore",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.osd_id": "2",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.type": "block",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.vdo": "0",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:                "ceph.with_tpm": "0"
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            },
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "type": "block",
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:            "vg_name": "ceph_vg2"
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:        }
Jan 30 23:59:05 np0005603435 fervent_wing[269967]:    ]
Jan 30 23:59:05 np0005603435 fervent_wing[269967]: }
Jan 30 23:59:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:59:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3027884347' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:59:05 np0005603435 systemd[1]: libpod-a43b8e1b97aea7d57c8c76ca6b9db210ae8106d17a2c8443610cc88e48ea72fd.scope: Deactivated successfully.
Jan 30 23:59:05 np0005603435 podman[269951]: 2026-01-31 04:59:05.229941103 +0000 UTC m=+0.532536523 container died a43b8e1b97aea7d57c8c76ca6b9db210ae8106d17a2c8443610cc88e48ea72fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wing, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 30 23:59:05 np0005603435 systemd[1]: var-lib-containers-storage-overlay-b0101b01126a79789fd1d3f3be32e33f72f1b639d3b757b64f49d1428c49a135-merged.mount: Deactivated successfully.
Jan 30 23:59:05 np0005603435 podman[269951]: 2026-01-31 04:59:05.278327594 +0000 UTC m=+0.580922944 container remove a43b8e1b97aea7d57c8c76ca6b9db210ae8106d17a2c8443610cc88e48ea72fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 30 23:59:05 np0005603435 nova_compute[239938]: 2026-01-31 04:59:05.290 239942 DEBUG nova.objects.instance [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lazy-loading 'flavor' on Instance uuid e718387a-7f1c-476e-a53d-69bf63413c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 30 23:59:05 np0005603435 systemd[1]: libpod-conmon-a43b8e1b97aea7d57c8c76ca6b9db210ae8106d17a2c8443610cc88e48ea72fd.scope: Deactivated successfully.
Jan 30 23:59:05 np0005603435 nova_compute[239938]: 2026-01-31 04:59:05.320 239942 DEBUG nova.virt.libvirt.driver [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Attempting to attach volume a5737f0c-c356-4013-9822-ddd3c9ecef41 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 30 23:59:05 np0005603435 nova_compute[239938]: 2026-01-31 04:59:05.325 239942 DEBUG nova.virt.libvirt.guest [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] attach device xml: <disk type="network" device="disk">
Jan 30 23:59:05 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:59:05 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-a5737f0c-c356-4013-9822-ddd3c9ecef41">
Jan 30 23:59:05 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:59:05 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:59:05 np0005603435 nova_compute[239938]:  <auth username="openstack">
Jan 30 23:59:05 np0005603435 nova_compute[239938]:    <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:59:05 np0005603435 nova_compute[239938]:  </auth>
Jan 30 23:59:05 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:59:05 np0005603435 nova_compute[239938]:  <serial>a5737f0c-c356-4013-9822-ddd3c9ecef41</serial>
Jan 30 23:59:05 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:59:05 np0005603435 nova_compute[239938]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 30 23:59:05 np0005603435 nova_compute[239938]: 2026-01-31 04:59:05.442 239942 DEBUG nova.virt.libvirt.driver [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 30 23:59:05 np0005603435 nova_compute[239938]: 2026-01-31 04:59:05.443 239942 DEBUG nova.virt.libvirt.driver [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 30 23:59:05 np0005603435 nova_compute[239938]: 2026-01-31 04:59:05.443 239942 DEBUG nova.virt.libvirt.driver [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 30 23:59:05 np0005603435 nova_compute[239938]: 2026-01-31 04:59:05.443 239942 DEBUG nova.virt.libvirt.driver [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No VIF found with MAC fa:16:3e:7d:c2:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 30 23:59:05 np0005603435 nova_compute[239938]: 2026-01-31 04:59:05.451 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 30 23:59:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 172 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 60 KiB/s wr, 244 op/s
Jan 30 23:59:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:59:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2392716340' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:59:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:59:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2392716340' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:59:05 np0005603435 nova_compute[239938]: 2026-01-31 04:59:05.652 239942 DEBUG oslo_concurrency.lockutils [None req-e66b62f8-1de2-4e40-836d-bdfebc35dd24 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 30 23:59:05 np0005603435 podman[270071]: 2026-01-31 04:59:05.710817553 +0000 UTC m=+0.051951969 container create 94878e5e60ae99c38c2db935604ad5a223670c2453de018790eba9f097cbeda2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_williams, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 30 23:59:05 np0005603435 systemd[1]: Started libpod-conmon-94878e5e60ae99c38c2db935604ad5a223670c2453de018790eba9f097cbeda2.scope.
Jan 30 23:59:05 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:59:05 np0005603435 podman[270071]: 2026-01-31 04:59:05.686778926 +0000 UTC m=+0.027913392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:59:05 np0005603435 podman[270071]: 2026-01-31 04:59:05.789083544 +0000 UTC m=+0.130217940 container init 94878e5e60ae99c38c2db935604ad5a223670c2453de018790eba9f097cbeda2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 30 23:59:05 np0005603435 podman[270071]: 2026-01-31 04:59:05.794492846 +0000 UTC m=+0.135627222 container start 94878e5e60ae99c38c2db935604ad5a223670c2453de018790eba9f097cbeda2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_williams, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 30 23:59:05 np0005603435 podman[270071]: 2026-01-31 04:59:05.797239153 +0000 UTC m=+0.138373549 container attach 94878e5e60ae99c38c2db935604ad5a223670c2453de018790eba9f097cbeda2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 30 23:59:05 np0005603435 gallant_williams[270088]: 167 167
Jan 30 23:59:05 np0005603435 systemd[1]: libpod-94878e5e60ae99c38c2db935604ad5a223670c2453de018790eba9f097cbeda2.scope: Deactivated successfully.
Jan 30 23:59:05 np0005603435 podman[270071]: 2026-01-31 04:59:05.801450896 +0000 UTC m=+0.142585282 container died 94878e5e60ae99c38c2db935604ad5a223670c2453de018790eba9f097cbeda2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_williams, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:59:05 np0005603435 systemd[1]: var-lib-containers-storage-overlay-52b917b38f72896b4d7794ea91901b556499415221b3cfd21aaa3695a048a377-merged.mount: Deactivated successfully.
Jan 30 23:59:05 np0005603435 podman[270071]: 2026-01-31 04:59:05.837336222 +0000 UTC m=+0.178470598 container remove 94878e5e60ae99c38c2db935604ad5a223670c2453de018790eba9f097cbeda2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 30 23:59:05 np0005603435 systemd[1]: libpod-conmon-94878e5e60ae99c38c2db935604ad5a223670c2453de018790eba9f097cbeda2.scope: Deactivated successfully.
Jan 30 23:59:05 np0005603435 podman[270112]: 2026-01-31 04:59:05.973011355 +0000 UTC m=+0.044764174 container create 3c48eba33c713031ad4d65d52c1ff459ab173e84675c028741872e41083ebc93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 30 23:59:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:59:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/147800526' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:59:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:59:05 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/147800526' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:59:06 np0005603435 systemd[1]: Started libpod-conmon-3c48eba33c713031ad4d65d52c1ff459ab173e84675c028741872e41083ebc93.scope.
Jan 30 23:59:06 np0005603435 systemd[1]: Started libcrun container.
Jan 30 23:59:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e977aebae9ad73c47869d72c6ccddb514a2b543bba43d5d9e39d239a857e335/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 30 23:59:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e977aebae9ad73c47869d72c6ccddb514a2b543bba43d5d9e39d239a857e335/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 30 23:59:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e977aebae9ad73c47869d72c6ccddb514a2b543bba43d5d9e39d239a857e335/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 30 23:59:06 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e977aebae9ad73c47869d72c6ccddb514a2b543bba43d5d9e39d239a857e335/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 30 23:59:06 np0005603435 podman[270112]: 2026-01-31 04:59:05.953520269 +0000 UTC m=+0.025273108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 30 23:59:06 np0005603435 podman[270112]: 2026-01-31 04:59:06.048362725 +0000 UTC m=+0.120115564 container init 3c48eba33c713031ad4d65d52c1ff459ab173e84675c028741872e41083ebc93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_meninsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 30 23:59:06 np0005603435 podman[270112]: 2026-01-31 04:59:06.054991906 +0000 UTC m=+0.126744765 container start 3c48eba33c713031ad4d65d52c1ff459ab173e84675c028741872e41083ebc93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_meninsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 30 23:59:06 np0005603435 podman[270112]: 2026-01-31 04:59:06.059144218 +0000 UTC m=+0.130897057 container attach 3c48eba33c713031ad4d65d52c1ff459ab173e84675c028741872e41083ebc93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 30 23:59:06 np0005603435 nova_compute[239938]: 2026-01-31 04:59:06.328 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_04:59:06
Jan 30 23:59:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 30 23:59:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 30 23:59:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'images', 'backups', '.rgw.root', 'default.rgw.meta']
Jan 30 23:59:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 30 23:59:06 np0005603435 lvm[270205]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 30 23:59:06 np0005603435 lvm[270205]: VG ceph_vg0 finished
Jan 30 23:59:06 np0005603435 lvm[270207]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 30 23:59:06 np0005603435 lvm[270207]: VG ceph_vg1 finished
Jan 30 23:59:06 np0005603435 lvm[270209]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 30 23:59:06 np0005603435 lvm[270209]: VG ceph_vg2 finished
Jan 30 23:59:06 np0005603435 musing_meninsky[270128]: {}
Jan 30 23:59:06 np0005603435 systemd[1]: libpod-3c48eba33c713031ad4d65d52c1ff459ab173e84675c028741872e41083ebc93.scope: Deactivated successfully.
Jan 30 23:59:06 np0005603435 systemd[1]: libpod-3c48eba33c713031ad4d65d52c1ff459ab173e84675c028741872e41083ebc93.scope: Consumed 1.009s CPU time.
Jan 30 23:59:06 np0005603435 podman[270112]: 2026-01-31 04:59:06.809954219 +0000 UTC m=+0.881707068 container died 3c48eba33c713031ad4d65d52c1ff459ab173e84675c028741872e41083ebc93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 30 23:59:06 np0005603435 systemd[1]: var-lib-containers-storage-overlay-3e977aebae9ad73c47869d72c6ccddb514a2b543bba43d5d9e39d239a857e335-merged.mount: Deactivated successfully.
Jan 30 23:59:06 np0005603435 podman[270112]: 2026-01-31 04:59:06.864265935 +0000 UTC m=+0.936018784 container remove 3c48eba33c713031ad4d65d52c1ff459ab173e84675c028741872e41083ebc93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_meninsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 30 23:59:06 np0005603435 systemd[1]: libpod-conmon-3c48eba33c713031ad4d65d52c1ff459ab173e84675c028741872e41083ebc93.scope: Deactivated successfully.
Jan 30 23:59:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 30 23:59:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:59:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 30 23:59:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:59:06 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:59:06 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 30 23:59:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:59:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:59:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:59:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:59:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:59:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:59:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 173 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 252 KiB/s rd, 156 KiB/s wr, 328 op/s
Jan 30 23:59:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 30 23:59:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:59:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:59:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:59:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:59:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:59:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e447 do_prune osdmap full prune enabled
Jan 30 23:59:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e448 e448: 3 total, 3 up, 3 in
Jan 30 23:59:08 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e448: 3 total, 3 up, 3 in
Jan 30 23:59:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 30 23:59:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 30 23:59:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 30 23:59:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 30 23:59:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 30 23:59:08 np0005603435 nova_compute[239938]: 2026-01-31 04:59:08.731 239942 DEBUG oslo_concurrency.lockutils [None req-f2be1bd2-5e59-42bb-9200-f4819623cab6 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:08 np0005603435 nova_compute[239938]: 2026-01-31 04:59:08.732 239942 DEBUG oslo_concurrency.lockutils [None req-f2be1bd2-5e59-42bb-9200-f4819623cab6 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:08 np0005603435 nova_compute[239938]: 2026-01-31 04:59:08.747 239942 INFO nova.compute.manager [None req-f2be1bd2-5e59-42bb-9200-f4819623cab6 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Detaching volume a5737f0c-c356-4013-9822-ddd3c9ecef41#033[00m
Jan 30 23:59:08 np0005603435 nova_compute[239938]: 2026-01-31 04:59:08.998 239942 INFO nova.virt.block_device [None req-f2be1bd2-5e59-42bb-9200-f4819623cab6 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Attempting to driver detach volume a5737f0c-c356-4013-9822-ddd3c9ecef41 from mountpoint /dev/vdb#033[00m
Jan 30 23:59:09 np0005603435 nova_compute[239938]: 2026-01-31 04:59:09.011 239942 DEBUG nova.virt.libvirt.driver [None req-f2be1bd2-5e59-42bb-9200-f4819623cab6 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Attempting to detach device vdb from instance e718387a-7f1c-476e-a53d-69bf63413c12 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 30 23:59:09 np0005603435 nova_compute[239938]: 2026-01-31 04:59:09.012 239942 DEBUG nova.virt.libvirt.guest [None req-f2be1bd2-5e59-42bb-9200-f4819623cab6 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:59:09 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:59:09 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-a5737f0c-c356-4013-9822-ddd3c9ecef41">
Jan 30 23:59:09 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:59:09 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:59:09 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:59:09 np0005603435 nova_compute[239938]:  <serial>a5737f0c-c356-4013-9822-ddd3c9ecef41</serial>
Jan 30 23:59:09 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:59:09 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:59:09 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:59:09 np0005603435 nova_compute[239938]: 2026-01-31 04:59:09.022 239942 INFO nova.virt.libvirt.driver [None req-f2be1bd2-5e59-42bb-9200-f4819623cab6 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Successfully detached device vdb from instance e718387a-7f1c-476e-a53d-69bf63413c12 from the persistent domain config.#033[00m
Jan 30 23:59:09 np0005603435 nova_compute[239938]: 2026-01-31 04:59:09.022 239942 DEBUG nova.virt.libvirt.driver [None req-f2be1bd2-5e59-42bb-9200-f4819623cab6 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e718387a-7f1c-476e-a53d-69bf63413c12 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 30 23:59:09 np0005603435 nova_compute[239938]: 2026-01-31 04:59:09.023 239942 DEBUG nova.virt.libvirt.guest [None req-f2be1bd2-5e59-42bb-9200-f4819623cab6 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:59:09 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:59:09 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-a5737f0c-c356-4013-9822-ddd3c9ecef41">
Jan 30 23:59:09 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:59:09 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:59:09 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:59:09 np0005603435 nova_compute[239938]:  <serial>a5737f0c-c356-4013-9822-ddd3c9ecef41</serial>
Jan 30 23:59:09 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:59:09 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:59:09 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:59:09 np0005603435 nova_compute[239938]: 2026-01-31 04:59:09.148 239942 DEBUG nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Received event <DeviceRemovedEvent: 1769835549.1481085, e718387a-7f1c-476e-a53d-69bf63413c12 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 30 23:59:09 np0005603435 nova_compute[239938]: 2026-01-31 04:59:09.150 239942 DEBUG nova.virt.libvirt.driver [None req-f2be1bd2-5e59-42bb-9200-f4819623cab6 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e718387a-7f1c-476e-a53d-69bf63413c12 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 30 23:59:09 np0005603435 nova_compute[239938]: 2026-01-31 04:59:09.152 239942 INFO nova.virt.libvirt.driver [None req-f2be1bd2-5e59-42bb-9200-f4819623cab6 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Successfully detached device vdb from instance e718387a-7f1c-476e-a53d-69bf63413c12 from the live domain config.#033[00m
Jan 30 23:59:09 np0005603435 nova_compute[239938]: 2026-01-31 04:59:09.457 239942 DEBUG nova.objects.instance [None req-f2be1bd2-5e59-42bb-9200-f4819623cab6 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lazy-loading 'flavor' on Instance uuid e718387a-7f1c-476e-a53d-69bf63413c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:59:09 np0005603435 nova_compute[239938]: 2026-01-31 04:59:09.519 239942 DEBUG oslo_concurrency.lockutils [None req-f2be1bd2-5e59-42bb-9200-f4819623cab6 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:59:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 173 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 100 KiB/s wr, 143 op/s
Jan 30 23:59:10 np0005603435 nova_compute[239938]: 2026-01-31 04:59:10.454 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:11 np0005603435 nova_compute[239938]: 2026-01-31 04:59:11.388 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 173 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 91 KiB/s wr, 118 op/s
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.426 239942 DEBUG oslo_concurrency.lockutils [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.427 239942 DEBUG oslo_concurrency.lockutils [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.446 239942 DEBUG nova.objects.instance [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lazy-loading 'flavor' on Instance uuid e718387a-7f1c-476e-a53d-69bf63413c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.476 239942 DEBUG oslo_concurrency.lockutils [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.708 239942 DEBUG oslo_concurrency.lockutils [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.709 239942 DEBUG oslo_concurrency.lockutils [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.709 239942 INFO nova.compute.manager [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Attaching volume 65810aec-0ff2-449f-ab34-408fa4ef8839 to /dev/vdb#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.862 239942 DEBUG os_brick.utils [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.864 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.877 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.877 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[70fa8bf1-d278-4621-a4f9-6dea2f5f2882]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.879 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.887 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.887 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[65d1aaed-d952-4c73-9f9a-842db88b1090]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.889 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.900 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.900 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[e3bb4ce7-fdba-4a3f-900f-d1a59958e59e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.902 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[0de8cfd6-b827-4ac9-838d-1c0e9a0f43ed]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.903 239942 DEBUG oslo_concurrency.processutils [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.931 239942 DEBUG oslo_concurrency.processutils [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.934 239942 DEBUG os_brick.initiator.connectors.lightos [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.934 239942 DEBUG os_brick.initiator.connectors.lightos [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.935 239942 DEBUG os_brick.initiator.connectors.lightos [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.935 239942 DEBUG os_brick.utils [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
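The `==> / <==` pair around `get_connector_properties` above is os-brick's trace-logging decorator, and the return line carries the full connector dict as a Python repr. A minimal sketch of pulling that payload back out of such a line for troubleshooting (the regex and the shortened sample line are tailored to this log format, not an os-brick API):

```python
# Sketch: extract the connector-properties dict from an os-brick
# "<== get_connector_properties: return (NNms) {...}" trace line.
import ast
import re

# Abbreviated copy of the return line logged above.
SAMPLE = ("<== get_connector_properties: return (71ms) "
          "{'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', "
          "'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', "
          "'nvme_native_multipath': True, 'found_dsc': ''}")

def parse_connector_properties(line: str) -> dict:
    """Literal-eval the {...} payload and keep the elapsed time alongside it."""
    match = re.search(r"return \((\d+)ms\) (\{.*\})", line)
    if not match:
        raise ValueError("not a get_connector_properties return line")
    props = ast.literal_eval(match.group(2))  # safe: repr of plain literals
    props["_elapsed_ms"] = int(match.group(1))
    return props

props = parse_connector_properties(SAMPLE)
print(props["initiator"], props["multipath"], props["_elapsed_ms"])
```

Because the trace logs a plain-literal repr, `ast.literal_eval` recovers it without `eval`'s code-execution risk.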
Jan 30 23:59:12 np0005603435 nova_compute[239938]: 2026-01-31 04:59:12.935 239942 DEBUG nova.virt.block_device [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Updating existing volume attachment record: ae173022-9bae-48f8-8384-5c643d4814a1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:59:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:59:13 np0005603435 nova_compute[239938]: 2026-01-31 04:59:13.500 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 173 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 177 KiB/s rd, 93 KiB/s wr, 130 op/s
Jan 30 23:59:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:59:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664899735' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:59:14 np0005603435 nova_compute[239938]: 2026-01-31 04:59:14.073 239942 DEBUG nova.objects.instance [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lazy-loading 'flavor' on Instance uuid e718387a-7f1c-476e-a53d-69bf63413c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:59:14 np0005603435 nova_compute[239938]: 2026-01-31 04:59:14.149 239942 DEBUG nova.virt.libvirt.driver [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Attempting to attach volume 65810aec-0ff2-449f-ab34-408fa4ef8839 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 30 23:59:14 np0005603435 nova_compute[239938]: 2026-01-31 04:59:14.153 239942 DEBUG nova.virt.libvirt.guest [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] attach device xml: <disk type="network" device="disk">
Jan 30 23:59:14 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:59:14 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-65810aec-0ff2-449f-ab34-408fa4ef8839">
Jan 30 23:59:14 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:59:14 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:59:14 np0005603435 nova_compute[239938]:  <auth username="openstack">
Jan 30 23:59:14 np0005603435 nova_compute[239938]:    <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:59:14 np0005603435 nova_compute[239938]:  </auth>
Jan 30 23:59:14 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:59:14 np0005603435 nova_compute[239938]:  <serial>65810aec-0ff2-449f-ab34-408fa4ef8839</serial>
Jan 30 23:59:14 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:59:14 np0005603435 nova_compute[239938]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
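The `<disk>` XML Nova hands to libvirt's `attach_device` above can be parsed as-is; a small sketch checking the fields the later detach path keys on (the `target` device name and the `<serial>`, which is the Cinder volume UUID and also appears in the rbd image name). The XML literal is copied from the log; this is not a Nova helper:

```python
# Sketch: parse the logged rbd <disk> element and verify the fields
# that identify the attachment (target dev/bus, volume-UUID serial).
import xml.etree.ElementTree as ET

DISK_XML = """<disk type="network" device="disk">
 <driver name="qemu" type="raw" cache="none" discard="unmap"/>
 <source protocol="rbd" name="volumes/volume-65810aec-0ff2-449f-ab34-408fa4ef8839">
   <host name="192.168.122.100" port="6789"/>
 </source>
 <auth username="openstack">
   <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
 </auth>
 <target dev="vdb" bus="virtio"/>
 <serial>65810aec-0ff2-449f-ab34-408fa4ef8839</serial>
</disk>"""

disk = ET.fromstring(DISK_XML)
target = disk.find("target")
source = disk.find("source")
serial = disk.findtext("serial")

assert source.get("protocol") == "rbd"
# The serial (Cinder volume UUID) is embedded in the rbd image name.
assert serial in source.get("name")
print(target.get("dev"), target.get("bus"), serial)
```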
Jan 30 23:59:14 np0005603435 nova_compute[239938]: 2026-01-31 04:59:14.339 239942 DEBUG nova.virt.libvirt.driver [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:59:14 np0005603435 nova_compute[239938]: 2026-01-31 04:59:14.340 239942 DEBUG nova.virt.libvirt.driver [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:59:14 np0005603435 nova_compute[239938]: 2026-01-31 04:59:14.340 239942 DEBUG nova.virt.libvirt.driver [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:59:14 np0005603435 nova_compute[239938]: 2026-01-31 04:59:14.341 239942 DEBUG nova.virt.libvirt.driver [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No VIF found with MAC fa:16:3e:7d:c2:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:59:14 np0005603435 nova_compute[239938]: 2026-01-31 04:59:14.709 239942 DEBUG oslo_concurrency.lockutils [None req-f52afeb3-092c-43a6-8380-ff39b2ab1d6d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:59:15 np0005603435 nova_compute[239938]: 2026-01-31 04:59:15.456 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 173 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 80 KiB/s wr, 106 op/s
Jan 30 23:59:16 np0005603435 nova_compute[239938]: 2026-01-31 04:59:16.391 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:16 np0005603435 nova_compute[239938]: 2026-01-31 04:59:16.650 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:17 np0005603435 nova_compute[239938]: 2026-01-31 04:59:17.310 239942 DEBUG oslo_concurrency.lockutils [None req-33509fc4-afbd-4b99-ad04-55d4e502b33d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:17 np0005603435 nova_compute[239938]: 2026-01-31 04:59:17.311 239942 DEBUG oslo_concurrency.lockutils [None req-33509fc4-afbd-4b99-ad04-55d4e502b33d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:17 np0005603435 nova_compute[239938]: 2026-01-31 04:59:17.325 239942 INFO nova.compute.manager [None req-33509fc4-afbd-4b99-ad04-55d4e502b33d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Detaching volume 65810aec-0ff2-449f-ab34-408fa4ef8839#033[00m
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007691444064860249 of space, bias 1.0, pg target 0.23074332194580746 quantized to 32 (current 32)
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00038732907622557734 of space, bias 1.0, pg target 0.1161987228676732 quantized to 32 (current 32)
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.1123095343042616e-06 of space, bias 1.0, pg target 0.0003336928602912785 quantized to 32 (current 32)
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006672173045673135 of space, bias 1.0, pg target 0.20016519137019406 quantized to 32 (current 32)
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.265084258403257e-07 of space, bias 4.0, pg target 0.000991810111008391 quantized to 16 (current 16)
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
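The pg_autoscaler "pg target" figures above are reproducible from the logged usage ratios: each is `usage_ratio * bias * (target PGs per OSD * OSD count)`, then quantized to a power of two. A sketch under stated assumptions (the default `mon_target_pg_per_osd = 100` and the 3 OSDs reported in the osdmap lines; the formula is inferred from these log values, not taken from the autoscaler source):

```python
# Sketch: reproduce the pg_autoscaler "pg target" numbers from the
# logged usage ratios. Assumes mon_target_pg_per_osd = 100 (Ceph
# default) and 3 OSDs ("3 total, 3 up, 3 in" in the osdmap lines).
TARGET_PG_PER_OSD = 100  # assumption: cluster default
NUM_OSDS = 3             # from the osdmap log lines

def pg_target(usage_ratio: float, bias: float) -> float:
    return usage_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

# Pool 'vms': using 0.0007691444064860249 of space, bias 1.0
print(pg_target(0.0007691444064860249, 1.0))   # ~0.23074332194580746
# Pool 'cephfs.cephfs.meta': using 8.265084258403257e-07, bias 4.0
print(pg_target(8.265084258403257e-07, 4.0))   # ~0.000991810111008391
```

Both results match the "pg target" values logged for those pools; since every target is far below the pool's current pg_num, each is quantized up to its floor (32, or 16 for the metadata pool) and no pool is resized.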
Jan 30 23:59:17 np0005603435 nova_compute[239938]: 2026-01-31 04:59:17.487 239942 INFO nova.virt.block_device [None req-33509fc4-afbd-4b99-ad04-55d4e502b33d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Attempting to driver detach volume 65810aec-0ff2-449f-ab34-408fa4ef8839 from mountpoint /dev/vdb#033[00m
Jan 30 23:59:17 np0005603435 nova_compute[239938]: 2026-01-31 04:59:17.499 239942 DEBUG nova.virt.libvirt.driver [None req-33509fc4-afbd-4b99-ad04-55d4e502b33d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Attempting to detach device vdb from instance e718387a-7f1c-476e-a53d-69bf63413c12 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 30 23:59:17 np0005603435 nova_compute[239938]: 2026-01-31 04:59:17.500 239942 DEBUG nova.virt.libvirt.guest [None req-33509fc4-afbd-4b99-ad04-55d4e502b33d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:59:17 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:59:17 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-65810aec-0ff2-449f-ab34-408fa4ef8839">
Jan 30 23:59:17 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:59:17 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:59:17 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:59:17 np0005603435 nova_compute[239938]:  <serial>65810aec-0ff2-449f-ab34-408fa4ef8839</serial>
Jan 30 23:59:17 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:59:17 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:59:17 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:59:17 np0005603435 nova_compute[239938]: 2026-01-31 04:59:17.513 239942 INFO nova.virt.libvirt.driver [None req-33509fc4-afbd-4b99-ad04-55d4e502b33d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Successfully detached device vdb from instance e718387a-7f1c-476e-a53d-69bf63413c12 from the persistent domain config.#033[00m
Jan 30 23:59:17 np0005603435 nova_compute[239938]: 2026-01-31 04:59:17.514 239942 DEBUG nova.virt.libvirt.driver [None req-33509fc4-afbd-4b99-ad04-55d4e502b33d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e718387a-7f1c-476e-a53d-69bf63413c12 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 30 23:59:17 np0005603435 nova_compute[239938]: 2026-01-31 04:59:17.515 239942 DEBUG nova.virt.libvirt.guest [None req-33509fc4-afbd-4b99-ad04-55d4e502b33d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:59:17 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:59:17 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-65810aec-0ff2-449f-ab34-408fa4ef8839">
Jan 30 23:59:17 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:59:17 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:59:17 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:59:17 np0005603435 nova_compute[239938]:  <serial>65810aec-0ff2-449f-ab34-408fa4ef8839</serial>
Jan 30 23:59:17 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:59:17 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:59:17 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 173 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 169 KiB/s rd, 82 KiB/s wr, 39 op/s
Jan 30 23:59:17 np0005603435 nova_compute[239938]: 2026-01-31 04:59:17.630 239942 DEBUG nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Received event <DeviceRemovedEvent: 1769835557.630193, e718387a-7f1c-476e-a53d-69bf63413c12 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 30 23:59:17 np0005603435 nova_compute[239938]: 2026-01-31 04:59:17.632 239942 DEBUG nova.virt.libvirt.driver [None req-33509fc4-afbd-4b99-ad04-55d4e502b33d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e718387a-7f1c-476e-a53d-69bf63413c12 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 30 23:59:17 np0005603435 nova_compute[239938]: 2026-01-31 04:59:17.635 239942 INFO nova.virt.libvirt.driver [None req-33509fc4-afbd-4b99-ad04-55d4e502b33d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Successfully detached device vdb from instance e718387a-7f1c-476e-a53d-69bf63413c12 from the live domain config.#033[00m
Jan 30 23:59:17 np0005603435 nova_compute[239938]: 2026-01-31 04:59:17.804 239942 DEBUG nova.objects.instance [None req-33509fc4-afbd-4b99-ad04-55d4e502b33d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lazy-loading 'flavor' on Instance uuid e718387a-7f1c-476e-a53d-69bf63413c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:59:17 np0005603435 nova_compute[239938]: 2026-01-31 04:59:17.849 239942 DEBUG oslo_concurrency.lockutils [None req-33509fc4-afbd-4b99-ad04-55d4e502b33d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
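The oslo.concurrency lock lines bracketing each operation make serialization on the instance UUID easy to audit: the attach above held the lock for 2.000s, the detach for 0.539s. A sketch tallying hold times from the "released ... held N.NNNs" lines (the regex and the two sample lines are tailored to this log format, not an oslo API):

```python
# Sketch: tally per-operation lock hold times from oslo.concurrency
# 'Lock "..." "released" by "..." :: held N.NNNs' log lines.
import re

LINES = [
    'Lock "e718387a-7f1c-476e-a53d-69bf63413c12" "released" by '
    '"nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.000s',
    'Lock "e718387a-7f1c-476e-a53d-69bf63413c12" "released" by '
    '"nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.539s',
]

PATTERN = re.compile(
    r'Lock "(?P<uuid>[0-9a-f-]+)" "released" by "(?P<op>[^"]+)" :: held (?P<secs>[\d.]+)s'
)

held = {}
for line in LINES:
    m = PATTERN.search(line)
    if m:
        # Keep only the innermost function name, e.g. "do_attach_volume".
        held[m.group("op").split(".")[-1]] = float(m.group("secs"))

print(held)  # {'do_attach_volume': 2.0, 'do_detach_volume': 0.539}
```

Long hold times here are a useful first signal when volume operations on an instance appear to queue behind one another.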
Jan 30 23:59:17 np0005603435 ceph-mgr[75599]: [devicehealth INFO root] Check health
Jan 30 23:59:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:59:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 173 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 73 KiB/s wr, 35 op/s
Jan 30 23:59:20 np0005603435 nova_compute[239938]: 2026-01-31 04:59:20.458 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:59:20 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3816932976' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.394 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.524 239942 DEBUG oslo_concurrency.lockutils [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.524 239942 DEBUG oslo_concurrency.lockutils [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 173 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 69 KiB/s wr, 35 op/s
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.538 239942 DEBUG nova.objects.instance [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lazy-loading 'flavor' on Instance uuid e718387a-7f1c-476e-a53d-69bf63413c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.571 239942 DEBUG oslo_concurrency.lockutils [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:59:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e448 do_prune osdmap full prune enabled
Jan 30 23:59:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e449 e449: 3 total, 3 up, 3 in
Jan 30 23:59:21 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e449: 3 total, 3 up, 3 in
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.755 239942 DEBUG oslo_concurrency.lockutils [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.756 239942 DEBUG oslo_concurrency.lockutils [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.757 239942 INFO nova.compute.manager [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Attaching volume 7be3ee60-eefb-4dbf-a83c-3973d4dc96f8 to /dev/vdb#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.905 239942 DEBUG os_brick.utils [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.906 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.915 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.915 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[8d025da1-7b98-4383-bb2b-9955ca749cad]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.916 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.923 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.923 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab0f54e-9e7d-499c-a5cf-f6a14e5cf8b1]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.924 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.933 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.933 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[95d00405-bc91-4053-82af-ab5f967f3827]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.935 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[67a77e89-5710-4fb6-aea0-f81c2c07855b]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.936 239942 DEBUG oslo_concurrency.processutils [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.955 239942 DEBUG oslo_concurrency.processutils [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.959 239942 DEBUG os_brick.initiator.connectors.lightos [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.959 239942 DEBUG os_brick.initiator.connectors.lightos [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.960 239942 DEBUG os_brick.initiator.connectors.lightos [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.960 239942 DEBUG os_brick.utils [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] <== get_connector_properties: return (55ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 30 23:59:21 np0005603435 nova_compute[239938]: 2026-01-31 04:59:21.961 239942 DEBUG nova.virt.block_device [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Updating existing volume attachment record: c39c82fc-f6aa-4f7b-bd75-2b253f2ea653 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 30 23:59:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e449 do_prune osdmap full prune enabled
Jan 30 23:59:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e450 e450: 3 total, 3 up, 3 in
Jan 30 23:59:22 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e450: 3 total, 3 up, 3 in
Jan 30 23:59:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:59:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/733996161' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:59:22 np0005603435 nova_compute[239938]: 2026-01-31 04:59:22.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:59:22 np0005603435 nova_compute[239938]: 2026-01-31 04:59:22.979 239942 DEBUG nova.objects.instance [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lazy-loading 'flavor' on Instance uuid e718387a-7f1c-476e-a53d-69bf63413c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:59:23 np0005603435 nova_compute[239938]: 2026-01-31 04:59:23.006 239942 DEBUG nova.virt.libvirt.driver [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Attempting to attach volume 7be3ee60-eefb-4dbf-a83c-3973d4dc96f8 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 30 23:59:23 np0005603435 nova_compute[239938]: 2026-01-31 04:59:23.008 239942 DEBUG nova.virt.libvirt.guest [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] attach device xml: <disk type="network" device="disk">
Jan 30 23:59:23 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:59:23 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-7be3ee60-eefb-4dbf-a83c-3973d4dc96f8">
Jan 30 23:59:23 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:59:23 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:59:23 np0005603435 nova_compute[239938]:  <auth username="openstack">
Jan 30 23:59:23 np0005603435 nova_compute[239938]:    <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 30 23:59:23 np0005603435 nova_compute[239938]:  </auth>
Jan 30 23:59:23 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:59:23 np0005603435 nova_compute[239938]:  <serial>7be3ee60-eefb-4dbf-a83c-3973d4dc96f8</serial>
Jan 30 23:59:23 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:59:23 np0005603435 nova_compute[239938]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 30 23:59:23 np0005603435 nova_compute[239938]: 2026-01-31 04:59:23.146 239942 DEBUG nova.virt.libvirt.driver [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:59:23 np0005603435 nova_compute[239938]: 2026-01-31 04:59:23.146 239942 DEBUG nova.virt.libvirt.driver [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:59:23 np0005603435 nova_compute[239938]: 2026-01-31 04:59:23.147 239942 DEBUG nova.virt.libvirt.driver [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 30 23:59:23 np0005603435 nova_compute[239938]: 2026-01-31 04:59:23.147 239942 DEBUG nova.virt.libvirt.driver [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] No VIF found with MAC fa:16:3e:7d:c2:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 30 23:59:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e450 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:59:23 np0005603435 nova_compute[239938]: 2026-01-31 04:59:23.380 239942 DEBUG oslo_concurrency.lockutils [None req-6a3ed0cb-f38d-4b9b-a3a0-77f22b84f047 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:59:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 173 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 88 KiB/s wr, 51 op/s
Jan 30 23:59:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:59:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2644670151' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:59:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:59:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2644670151' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:59:23 np0005603435 nova_compute[239938]: 2026-01-31 04:59:23.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:59:24 np0005603435 nova_compute[239938]: 2026-01-31 04:59:24.882 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:59:24 np0005603435 nova_compute[239938]: 2026-01-31 04:59:24.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:59:24 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:59:24 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2658102903' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:59:25 np0005603435 nova_compute[239938]: 2026-01-31 04:59:25.462 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 189 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Jan 30 23:59:25 np0005603435 nova_compute[239938]: 2026-01-31 04:59:25.785 239942 DEBUG oslo_concurrency.lockutils [None req-a46e7cab-16fe-4f43-b3c4-607b2d5c522d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:25 np0005603435 nova_compute[239938]: 2026-01-31 04:59:25.785 239942 DEBUG oslo_concurrency.lockutils [None req-a46e7cab-16fe-4f43-b3c4-607b2d5c522d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:25 np0005603435 nova_compute[239938]: 2026-01-31 04:59:25.800 239942 INFO nova.compute.manager [None req-a46e7cab-16fe-4f43-b3c4-607b2d5c522d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Detaching volume 7be3ee60-eefb-4dbf-a83c-3973d4dc96f8#033[00m
Jan 30 23:59:25 np0005603435 nova_compute[239938]: 2026-01-31 04:59:25.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:59:25 np0005603435 nova_compute[239938]: 2026-01-31 04:59:25.973 239942 INFO nova.virt.block_device [None req-a46e7cab-16fe-4f43-b3c4-607b2d5c522d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Attempting to driver detach volume 7be3ee60-eefb-4dbf-a83c-3973d4dc96f8 from mountpoint /dev/vdb#033[00m
Jan 30 23:59:25 np0005603435 nova_compute[239938]: 2026-01-31 04:59:25.982 239942 DEBUG nova.virt.libvirt.driver [None req-a46e7cab-16fe-4f43-b3c4-607b2d5c522d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Attempting to detach device vdb from instance e718387a-7f1c-476e-a53d-69bf63413c12 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 30 23:59:25 np0005603435 nova_compute[239938]: 2026-01-31 04:59:25.983 239942 DEBUG nova.virt.libvirt.guest [None req-a46e7cab-16fe-4f43-b3c4-607b2d5c522d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:59:25 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:59:25 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-7be3ee60-eefb-4dbf-a83c-3973d4dc96f8">
Jan 30 23:59:25 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:59:25 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:59:25 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:59:25 np0005603435 nova_compute[239938]:  <serial>7be3ee60-eefb-4dbf-a83c-3973d4dc96f8</serial>
Jan 30 23:59:25 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:59:25 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:59:25 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:59:25 np0005603435 nova_compute[239938]: 2026-01-31 04:59:25.989 239942 INFO nova.virt.libvirt.driver [None req-a46e7cab-16fe-4f43-b3c4-607b2d5c522d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Successfully detached device vdb from instance e718387a-7f1c-476e-a53d-69bf63413c12 from the persistent domain config.#033[00m
Jan 30 23:59:25 np0005603435 nova_compute[239938]: 2026-01-31 04:59:25.989 239942 DEBUG nova.virt.libvirt.driver [None req-a46e7cab-16fe-4f43-b3c4-607b2d5c522d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e718387a-7f1c-476e-a53d-69bf63413c12 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 30 23:59:25 np0005603435 nova_compute[239938]: 2026-01-31 04:59:25.990 239942 DEBUG nova.virt.libvirt.guest [None req-a46e7cab-16fe-4f43-b3c4-607b2d5c522d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] detach device xml: <disk type="network" device="disk">
Jan 30 23:59:25 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 30 23:59:25 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-7be3ee60-eefb-4dbf-a83c-3973d4dc96f8">
Jan 30 23:59:25 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 30 23:59:25 np0005603435 nova_compute[239938]:  </source>
Jan 30 23:59:25 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 30 23:59:25 np0005603435 nova_compute[239938]:  <serial>7be3ee60-eefb-4dbf-a83c-3973d4dc96f8</serial>
Jan 30 23:59:25 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 30 23:59:25 np0005603435 nova_compute[239938]: </disk>
Jan 30 23:59:25 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 30 23:59:26 np0005603435 nova_compute[239938]: 2026-01-31 04:59:26.191 239942 DEBUG nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Received event <DeviceRemovedEvent: 1769835566.1909924, e718387a-7f1c-476e-a53d-69bf63413c12 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 30 23:59:26 np0005603435 nova_compute[239938]: 2026-01-31 04:59:26.193 239942 DEBUG nova.virt.libvirt.driver [None req-a46e7cab-16fe-4f43-b3c4-607b2d5c522d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e718387a-7f1c-476e-a53d-69bf63413c12 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 30 23:59:26 np0005603435 nova_compute[239938]: 2026-01-31 04:59:26.196 239942 INFO nova.virt.libvirt.driver [None req-a46e7cab-16fe-4f43-b3c4-607b2d5c522d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Successfully detached device vdb from instance e718387a-7f1c-476e-a53d-69bf63413c12 from the live domain config.#033[00m
Jan 30 23:59:26 np0005603435 nova_compute[239938]: 2026-01-31 04:59:26.397 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:26 np0005603435 nova_compute[239938]: 2026-01-31 04:59:26.443 239942 DEBUG nova.objects.instance [None req-a46e7cab-16fe-4f43-b3c4-607b2d5c522d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lazy-loading 'flavor' on Instance uuid e718387a-7f1c-476e-a53d-69bf63413c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:59:26 np0005603435 nova_compute[239938]: 2026-01-31 04:59:26.494 239942 DEBUG oslo_concurrency.lockutils [None req-a46e7cab-16fe-4f43-b3c4-607b2d5c522d d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:59:26 np0005603435 nova_compute[239938]: 2026-01-31 04:59:26.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:59:27 np0005603435 ceph-osd[85822]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 30 23:59:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 438 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 215 KiB/s rd, 33 MiB/s wr, 194 op/s
Jan 30 23:59:27 np0005603435 nova_compute[239938]: 2026-01-31 04:59:27.629 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:27 np0005603435 nova_compute[239938]: 2026-01-31 04:59:27.629 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:27 np0005603435 nova_compute[239938]: 2026-01-31 04:59:27.630 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:59:27 np0005603435 nova_compute[239938]: 2026-01-31 04:59:27.630 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 30 23:59:27 np0005603435 nova_compute[239938]: 2026-01-31 04:59:27.630 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:59:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:59:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/797757797' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:59:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:59:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/797757797' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:59:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:59:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3788747559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:59:28 np0005603435 nova_compute[239938]: 2026-01-31 04:59:28.277 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:59:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e450 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:59:28 np0005603435 nova_compute[239938]: 2026-01-31 04:59:28.373 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:59:28 np0005603435 nova_compute[239938]: 2026-01-31 04:59:28.373 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 30 23:59:28 np0005603435 nova_compute[239938]: 2026-01-31 04:59:28.515 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 30 23:59:28 np0005603435 nova_compute[239938]: 2026-01-31 04:59:28.516 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4180MB free_disk=59.94208997581154GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 30 23:59:28 np0005603435 nova_compute[239938]: 2026-01-31 04:59:28.516 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:28 np0005603435 nova_compute[239938]: 2026-01-31 04:59:28.517 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:28 np0005603435 nova_compute[239938]: 2026-01-31 04:59:28.596 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance e718387a-7f1c-476e-a53d-69bf63413c12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 30 23:59:28 np0005603435 nova_compute[239938]: 2026-01-31 04:59:28.596 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 30 23:59:28 np0005603435 nova_compute[239938]: 2026-01-31 04:59:28.596 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 30 23:59:28 np0005603435 nova_compute[239938]: 2026-01-31 04:59:28.651 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:59:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:59:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/897661638' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:59:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:59:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/897661638' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:59:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:59:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1564378348' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:59:29 np0005603435 nova_compute[239938]: 2026-01-31 04:59:29.299 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:59:29 np0005603435 nova_compute[239938]: 2026-01-31 04:59:29.305 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:59:29 np0005603435 nova_compute[239938]: 2026-01-31 04:59:29.340 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:59:29 np0005603435 nova_compute[239938]: 2026-01-31 04:59:29.368 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 30 23:59:29 np0005603435 nova_compute[239938]: 2026-01-31 04:59:29.369 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:59:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 438 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 33 MiB/s wr, 191 op/s
Jan 30 23:59:30 np0005603435 nova_compute[239938]: 2026-01-31 04:59:30.370 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:59:30 np0005603435 nova_compute[239938]: 2026-01-31 04:59:30.370 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 30 23:59:30 np0005603435 nova_compute[239938]: 2026-01-31 04:59:30.370 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 30 23:59:30 np0005603435 nova_compute[239938]: 2026-01-31 04:59:30.464 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:59:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2809743697' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:59:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:59:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2809743697' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:59:30 np0005603435 nova_compute[239938]: 2026-01-31 04:59:30.629 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "refresh_cache-e718387a-7f1c-476e-a53d-69bf63413c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 30 23:59:30 np0005603435 nova_compute[239938]: 2026-01-31 04:59:30.630 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquired lock "refresh_cache-e718387a-7f1c-476e-a53d-69bf63413c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 30 23:59:30 np0005603435 nova_compute[239938]: 2026-01-31 04:59:30.631 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 30 23:59:30 np0005603435 nova_compute[239938]: 2026-01-31 04:59:30.632 239942 DEBUG nova.objects.instance [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e718387a-7f1c-476e-a53d-69bf63413c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:59:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:59:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/348993152' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:59:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:59:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/348993152' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:59:31 np0005603435 nova_compute[239938]: 2026-01-31 04:59:31.401 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 710 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 54 MiB/s wr, 155 op/s
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e450 do_prune osdmap full prune enabled
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e451 e451: 3 total, 3 up, 3 in
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e451: 3 total, 3 up, 3 in
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:59:31.760679) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835571760707, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2422, "num_deletes": 264, "total_data_size": 3503412, "memory_usage": 3571872, "flush_reason": "Manual Compaction"}
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835571777720, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3435665, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31630, "largest_seqno": 34051, "table_properties": {"data_size": 3424102, "index_size": 7609, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 24315, "raw_average_key_size": 21, "raw_value_size": 3400903, "raw_average_value_size": 2999, "num_data_blocks": 328, "num_entries": 1134, "num_filter_entries": 1134, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769835412, "oldest_key_time": 1769835412, "file_creation_time": 1769835571, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 17125 microseconds, and 6108 cpu microseconds.
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:59:31.777791) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3435665 bytes OK
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:59:31.777825) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:59:31.784510) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:59:31.784555) EVENT_LOG_v1 {"time_micros": 1769835571784543, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:59:31.784585) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3492938, prev total WAL file size 3492938, number of live WAL files 2.
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:59:31.785677) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3355KB)], [65(10230KB)]
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835571785743, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 13912075, "oldest_snapshot_seqno": -1}
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6683 keys, 12159609 bytes, temperature: kUnknown
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835571846422, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 12159609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12106652, "index_size": 35117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 167331, "raw_average_key_size": 25, "raw_value_size": 11978428, "raw_average_value_size": 1792, "num_data_blocks": 1412, "num_entries": 6683, "num_filter_entries": 6683, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769835571, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:59:31.846693) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 12159609 bytes
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:59:31.848595) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 229.1 rd, 200.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 10.0 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 7216, records dropped: 533 output_compression: NoCompression
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:59:31.848615) EVENT_LOG_v1 {"time_micros": 1769835571848605, "job": 36, "event": "compaction_finished", "compaction_time_micros": 60737, "compaction_time_cpu_micros": 33303, "output_level": 6, "num_output_files": 1, "total_output_size": 12159609, "num_input_records": 7216, "num_output_records": 6683, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835571849150, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835571850179, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:59:31.785555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:59:31.850276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:59:31.850287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:59:31.850304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:59:31.850309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:59:31 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-04:59:31.850313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 30 23:59:32 np0005603435 nova_compute[239938]: 2026-01-31 04:59:32.038 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Updating instance_info_cache with network_info: [{"id": "39e41855-7c54-477d-957b-aa769bd16f60", "address": "fa:16:3e:7d:c2:ab", "network": {"id": "dfbf01be-0e13-4ab0-b168-f61a3eca460e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-390049787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ae37c02aa74bf084cd851f4b233192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39e41855-7c", "ovs_interfaceid": "39e41855-7c54-477d-957b-aa769bd16f60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:59:32 np0005603435 nova_compute[239938]: 2026-01-31 04:59:32.052 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Releasing lock "refresh_cache-e718387a-7f1c-476e-a53d-69bf63413c12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 30 23:59:32 np0005603435 nova_compute[239938]: 2026-01-31 04:59:32.053 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 30 23:59:32 np0005603435 nova_compute[239938]: 2026-01-31 04:59:32.053 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:59:32 np0005603435 nova_compute[239938]: 2026-01-31 04:59:32.053 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 30 23:59:32 np0005603435 nova_compute[239938]: 2026-01-31 04:59:32.054 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 30 23:59:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:59:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3131611593' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:59:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:59:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3131611593' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:59:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:59:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1281737469' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:59:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:59:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1281737469' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:59:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:59:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2287421155' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:59:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:59:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2287421155' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:59:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e451 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:59:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 102 MiB/s wr, 390 op/s
Jan 30 23:59:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e451 do_prune osdmap full prune enabled
Jan 30 23:59:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e452 e452: 3 total, 3 up, 3 in
Jan 30 23:59:33 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e452: 3 total, 3 up, 3 in
Jan 30 23:59:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e452 do_prune osdmap full prune enabled
Jan 30 23:59:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e453 e453: 3 total, 3 up, 3 in
Jan 30 23:59:34 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e453: 3 total, 3 up, 3 in
Jan 30 23:59:35 np0005603435 podman[270356]: 2026-01-31 04:59:35.108064781 +0000 UTC m=+0.067583101 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 30 23:59:35 np0005603435 podman[270357]: 2026-01-31 04:59:35.178736456 +0000 UTC m=+0.135706534 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Jan 30 23:59:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:59:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2233058151' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:59:35 np0005603435 nova_compute[239938]: 2026-01-31 04:59:35.465 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 127 MiB/s wr, 454 op/s
Jan 30 23:59:35 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:35.546 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:59:35 np0005603435 nova_compute[239938]: 2026-01-31 04:59:35.547 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:35 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:35.548 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 30 23:59:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e453 do_prune osdmap full prune enabled
Jan 30 23:59:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e454 e454: 3 total, 3 up, 3 in
Jan 30 23:59:35 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e454: 3 total, 3 up, 3 in
Jan 30 23:59:36 np0005603435 nova_compute[239938]: 2026-01-31 04:59:36.404 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e454 do_prune osdmap full prune enabled
Jan 30 23:59:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e455 e455: 3 total, 3 up, 3 in
Jan 30 23:59:36 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e455: 3 total, 3 up, 3 in
Jan 30 23:59:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:59:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:59:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:59:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:59:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 30 23:59:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 30 23:59:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 12 KiB/s wr, 231 op/s
Jan 30 23:59:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 30 23:59:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3955997152' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 30 23:59:37 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 30 23:59:37 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3955997152' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 30 23:59:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:59:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2274262687' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:59:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e455 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:59:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e455 do_prune osdmap full prune enabled
Jan 30 23:59:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e456 e456: 3 total, 3 up, 3 in
Jan 30 23:59:38 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e456: 3 total, 3 up, 3 in
Jan 30 23:59:38 np0005603435 nova_compute[239938]: 2026-01-31 04:59:38.844 239942 DEBUG oslo_concurrency.lockutils [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:38 np0005603435 nova_compute[239938]: 2026-01-31 04:59:38.845 239942 DEBUG oslo_concurrency.lockutils [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:38 np0005603435 nova_compute[239938]: 2026-01-31 04:59:38.845 239942 DEBUG oslo_concurrency.lockutils [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:38 np0005603435 nova_compute[239938]: 2026-01-31 04:59:38.845 239942 DEBUG oslo_concurrency.lockutils [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:38 np0005603435 nova_compute[239938]: 2026-01-31 04:59:38.845 239942 DEBUG oslo_concurrency.lockutils [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:59:38 np0005603435 nova_compute[239938]: 2026-01-31 04:59:38.847 239942 INFO nova.compute.manager [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Terminating instance#033[00m
Jan 30 23:59:38 np0005603435 nova_compute[239938]: 2026-01-31 04:59:38.848 239942 DEBUG nova.compute.manager [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 30 23:59:38 np0005603435 kernel: tap39e41855-7c (unregistering): left promiscuous mode
Jan 30 23:59:38 np0005603435 NetworkManager[49097]: <info>  [1769835578.8960] device (tap39e41855-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 30 23:59:38 np0005603435 ovn_controller[145670]: 2026-01-31T04:59:38Z|00247|binding|INFO|Releasing lport 39e41855-7c54-477d-957b-aa769bd16f60 from this chassis (sb_readonly=0)
Jan 30 23:59:38 np0005603435 ovn_controller[145670]: 2026-01-31T04:59:38Z|00248|binding|INFO|Setting lport 39e41855-7c54-477d-957b-aa769bd16f60 down in Southbound
Jan 30 23:59:38 np0005603435 ovn_controller[145670]: 2026-01-31T04:59:38Z|00249|binding|INFO|Removing iface tap39e41855-7c ovn-installed in OVS
Jan 30 23:59:38 np0005603435 nova_compute[239938]: 2026-01-31 04:59:38.907 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:38 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:38.913 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:c2:ab 10.100.0.13'], port_security=['fa:16:3e:7d:c2:ab 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e718387a-7f1c-476e-a53d-69bf63413c12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfbf01be-0e13-4ab0-b168-f61a3eca460e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5ae37c02aa74bf084cd851f4b233192', 'neutron:revision_number': '4', 'neutron:security_group_ids': '14d701b1-eb59-4eaa-8423-1a8f9ada9f00 7504c8ae-803d-4af9-8341-c0a2007c947a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9447addc-5d26-4056-a129-d4a7951ac825, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=39e41855-7c54-477d-957b-aa769bd16f60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 30 23:59:38 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:38.914 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 39e41855-7c54-477d-957b-aa769bd16f60 in datapath dfbf01be-0e13-4ab0-b168-f61a3eca460e unbound from our chassis#033[00m
Jan 30 23:59:38 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:38.916 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dfbf01be-0e13-4ab0-b168-f61a3eca460e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 30 23:59:38 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:38.917 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad2d1ad-7292-4e0a-92a7-4ea504f8f9a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:38 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:38.918 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e namespace which is not needed anymore#033[00m
Jan 30 23:59:38 np0005603435 nova_compute[239938]: 2026-01-31 04:59:38.922 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:38 np0005603435 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Deactivated successfully.
Jan 30 23:59:38 np0005603435 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Consumed 17.985s CPU time.
Jan 30 23:59:38 np0005603435 systemd-machined[208030]: Machine qemu-25-instance-00000019 terminated.
Jan 30 23:59:39 np0005603435 neutron-haproxy-ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e[269286]: [NOTICE]   (269291) : haproxy version is 2.8.14-c23fe91
Jan 30 23:59:39 np0005603435 neutron-haproxy-ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e[269286]: [NOTICE]   (269291) : path to executable is /usr/sbin/haproxy
Jan 30 23:59:39 np0005603435 neutron-haproxy-ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e[269286]: [WARNING]  (269291) : Exiting Master process...
Jan 30 23:59:39 np0005603435 neutron-haproxy-ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e[269286]: [ALERT]    (269291) : Current worker (269294) exited with code 143 (Terminated)
Jan 30 23:59:39 np0005603435 neutron-haproxy-ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e[269286]: [WARNING]  (269291) : All workers exited. Exiting... (0)
Jan 30 23:59:39 np0005603435 systemd[1]: libpod-ddecc7a50c17987385a00aed56387eb577b9b6d0be8acdcd2e36083d29fd19d4.scope: Deactivated successfully.
Jan 30 23:59:39 np0005603435 podman[270426]: 2026-01-31 04:59:39.056746315 +0000 UTC m=+0.052984534 container died ddecc7a50c17987385a00aed56387eb577b9b6d0be8acdcd2e36083d29fd19d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.079 239942 INFO nova.virt.libvirt.driver [-] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Instance destroyed successfully.#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.081 239942 DEBUG nova.objects.instance [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lazy-loading 'resources' on Instance uuid e718387a-7f1c-476e-a53d-69bf63413c12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 30 23:59:39 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ddecc7a50c17987385a00aed56387eb577b9b6d0be8acdcd2e36083d29fd19d4-userdata-shm.mount: Deactivated successfully.
Jan 30 23:59:39 np0005603435 systemd[1]: var-lib-containers-storage-overlay-d91e75a76da878aeba9bea785ac466d82798201c16c57075dbdbfb8150793f19-merged.mount: Deactivated successfully.
Jan 30 23:59:39 np0005603435 podman[270426]: 2026-01-31 04:59:39.100145005 +0000 UTC m=+0.096383214 container cleanup ddecc7a50c17987385a00aed56387eb577b9b6d0be8acdcd2e36083d29fd19d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.101 239942 DEBUG nova.virt.libvirt.vif [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T04:58:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-2059263885',display_name='tempest-SnapshotDataIntegrityTests-server-2059263885',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-2059263885',id=25,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOE2J6xl6hptAfBhj9MwPwWCmhY45b3CdZ5A/KSqFDwnfy73lo20B4Qjtjt+VnhVw51fanwz/3MNA+u3YW8BvStB65Bdfgg8zT2n0/Q1yWanzHWJwhqoA4bflv4fCMn1fQ==',key_name='tempest-keypair-154687870',keypairs=<?>,launch_index=0,launched_at=2026-01-31T04:58:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f5ae37c02aa74bf084cd851f4b233192',ramdisk_id='',reservation_id='r-j4m9vxl2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SnapshotDataIntegrityTests-800856993',owner_user_name='tempest-SnapshotDataIntegrityTests-800856993-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T04:58:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d1424589a4cc422c930f4c65f8538d1a',uuid=e718387a-7f1c-476e-a53d-69bf63413c12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "39e41855-7c54-477d-957b-aa769bd16f60", "address": "fa:16:3e:7d:c2:ab", "network": {"id": "dfbf01be-0e13-4ab0-b168-f61a3eca460e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-390049787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ae37c02aa74bf084cd851f4b233192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39e41855-7c", "ovs_interfaceid": "39e41855-7c54-477d-957b-aa769bd16f60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.102 239942 DEBUG nova.network.os_vif_util [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Converting VIF {"id": "39e41855-7c54-477d-957b-aa769bd16f60", "address": "fa:16:3e:7d:c2:ab", "network": {"id": "dfbf01be-0e13-4ab0-b168-f61a3eca460e", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-390049787-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5ae37c02aa74bf084cd851f4b233192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39e41855-7c", "ovs_interfaceid": "39e41855-7c54-477d-957b-aa769bd16f60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.103 239942 DEBUG nova.network.os_vif_util [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:c2:ab,bridge_name='br-int',has_traffic_filtering=True,id=39e41855-7c54-477d-957b-aa769bd16f60,network=Network(dfbf01be-0e13-4ab0-b168-f61a3eca460e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39e41855-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.103 239942 DEBUG os_vif [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:c2:ab,bridge_name='br-int',has_traffic_filtering=True,id=39e41855-7c54-477d-957b-aa769bd16f60,network=Network(dfbf01be-0e13-4ab0-b168-f61a3eca460e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39e41855-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 30 23:59:39 np0005603435 systemd[1]: libpod-conmon-ddecc7a50c17987385a00aed56387eb577b9b6d0be8acdcd2e36083d29fd19d4.scope: Deactivated successfully.
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.104 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.106 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39e41855-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.108 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.111 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.114 239942 INFO os_vif [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:c2:ab,bridge_name='br-int',has_traffic_filtering=True,id=39e41855-7c54-477d-957b-aa769bd16f60,network=Network(dfbf01be-0e13-4ab0-b168-f61a3eca460e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39e41855-7c')#033[00m
Jan 30 23:59:39 np0005603435 podman[270465]: 2026-01-31 04:59:39.173105196 +0000 UTC m=+0.052758169 container remove ddecc7a50c17987385a00aed56387eb577b9b6d0be8acdcd2e36083d29fd19d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 30 23:59:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:39.177 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[0b8443f8-2e6c-411f-ba8b-3bc201c32fba]: (4, ('Sat Jan 31 04:59:38 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e (ddecc7a50c17987385a00aed56387eb577b9b6d0be8acdcd2e36083d29fd19d4)\nddecc7a50c17987385a00aed56387eb577b9b6d0be8acdcd2e36083d29fd19d4\nSat Jan 31 04:59:39 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e (ddecc7a50c17987385a00aed56387eb577b9b6d0be8acdcd2e36083d29fd19d4)\nddecc7a50c17987385a00aed56387eb577b9b6d0be8acdcd2e36083d29fd19d4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:39.179 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[0bf4600b-0796-4387-bbdd-db76f83aaf5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:39.180 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfbf01be-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.182 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:39 np0005603435 kernel: tapdfbf01be-00: left promiscuous mode
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.189 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:39.193 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[96eb85f5-dade-409b-9138-ef2d405a0e0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.195 239942 DEBUG nova.compute.manager [req-f8c6f96f-8f7f-471f-a645-305ea7aea5a2 req-a3797769-3a31-43f0-a8b8-d53b9f9a0107 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Received event network-vif-unplugged-39e41855-7c54-477d-957b-aa769bd16f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.195 239942 DEBUG oslo_concurrency.lockutils [req-f8c6f96f-8f7f-471f-a645-305ea7aea5a2 req-a3797769-3a31-43f0-a8b8-d53b9f9a0107 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.196 239942 DEBUG oslo_concurrency.lockutils [req-f8c6f96f-8f7f-471f-a645-305ea7aea5a2 req-a3797769-3a31-43f0-a8b8-d53b9f9a0107 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.196 239942 DEBUG oslo_concurrency.lockutils [req-f8c6f96f-8f7f-471f-a645-305ea7aea5a2 req-a3797769-3a31-43f0-a8b8-d53b9f9a0107 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.197 239942 DEBUG nova.compute.manager [req-f8c6f96f-8f7f-471f-a645-305ea7aea5a2 req-a3797769-3a31-43f0-a8b8-d53b9f9a0107 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] No waiting events found dispatching network-vif-unplugged-39e41855-7c54-477d-957b-aa769bd16f60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.197 239942 DEBUG nova.compute.manager [req-f8c6f96f-8f7f-471f-a645-305ea7aea5a2 req-a3797769-3a31-43f0-a8b8-d53b9f9a0107 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Received event network-vif-unplugged-39e41855-7c54-477d-957b-aa769bd16f60 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 30 23:59:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:39.209 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[41335b99-9b40-4adc-b4a3-74e51564fef3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:39.210 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[7265cf98-6347-4ea9-828a-51443ed8fc3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:39.222 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fa63fd-de1f-4235-8bc2-ab7539999b44]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456230, 'reachable_time': 35115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270496, 'error': None, 'target': 'ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:39.224 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dfbf01be-0e13-4ab0-b168-f61a3eca460e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 30 23:59:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:39.224 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[5024bd2b-f839-44d9-a4ce-fe81eb1e5e87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 30 23:59:39 np0005603435 systemd[1]: run-netns-ovnmeta\x2ddfbf01be\x2d0e13\x2d4ab0\x2db168\x2df61a3eca460e.mount: Deactivated successfully.
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.386 239942 INFO nova.virt.libvirt.driver [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Deleting instance files /var/lib/nova/instances/e718387a-7f1c-476e-a53d-69bf63413c12_del#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.387 239942 INFO nova.virt.libvirt.driver [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Deletion of /var/lib/nova/instances/e718387a-7f1c-476e-a53d-69bf63413c12_del complete#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.441 239942 INFO nova.compute.manager [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.441 239942 DEBUG oslo.service.loopingcall [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.442 239942 DEBUG nova.compute.manager [-] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 30 23:59:39 np0005603435 nova_compute[239938]: 2026-01-31 04:59:39.442 239942 DEBUG nova.network.neutron [-] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 30 23:59:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 10 KiB/s wr, 194 op/s
Jan 30 23:59:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e456 do_prune osdmap full prune enabled
Jan 30 23:59:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e457 e457: 3 total, 3 up, 3 in
Jan 30 23:59:39 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e457: 3 total, 3 up, 3 in
Jan 30 23:59:40 np0005603435 nova_compute[239938]: 2026-01-31 04:59:40.468 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e457 do_prune osdmap full prune enabled
Jan 30 23:59:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e458 e458: 3 total, 3 up, 3 in
Jan 30 23:59:40 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e458: 3 total, 3 up, 3 in
Jan 30 23:59:40 np0005603435 nova_compute[239938]: 2026-01-31 04:59:40.982 239942 DEBUG nova.network.neutron [-] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.001 239942 INFO nova.compute.manager [-] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Took 1.56 seconds to deallocate network for instance.#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.042 239942 DEBUG oslo_concurrency.lockutils [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.042 239942 DEBUG oslo_concurrency.lockutils [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.102 239942 DEBUG nova.compute.manager [req-984aaff6-716e-4588-a7ec-8dde1b97e5de req-7b9577da-de20-4cd7-98bb-6a4be072a27e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Received event network-vif-deleted-39e41855-7c54-477d-957b-aa769bd16f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.127 239942 DEBUG oslo_concurrency.processutils [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.274 239942 DEBUG nova.compute.manager [req-ae527502-38bc-4a11-accb-a2ac48aac1c3 req-150d7e41-d298-4b32-b03e-e2a9d914b8c7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Received event network-vif-plugged-39e41855-7c54-477d-957b-aa769bd16f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.274 239942 DEBUG oslo_concurrency.lockutils [req-ae527502-38bc-4a11-accb-a2ac48aac1c3 req-150d7e41-d298-4b32-b03e-e2a9d914b8c7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.275 239942 DEBUG oslo_concurrency.lockutils [req-ae527502-38bc-4a11-accb-a2ac48aac1c3 req-150d7e41-d298-4b32-b03e-e2a9d914b8c7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.275 239942 DEBUG oslo_concurrency.lockutils [req-ae527502-38bc-4a11-accb-a2ac48aac1c3 req-150d7e41-d298-4b32-b03e-e2a9d914b8c7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.275 239942 DEBUG nova.compute.manager [req-ae527502-38bc-4a11-accb-a2ac48aac1c3 req-150d7e41-d298-4b32-b03e-e2a9d914b8c7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] No waiting events found dispatching network-vif-plugged-39e41855-7c54-477d-957b-aa769bd16f60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.276 239942 WARNING nova.compute.manager [req-ae527502-38bc-4a11-accb-a2ac48aac1c3 req-150d7e41-d298-4b32-b03e-e2a9d914b8c7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Received unexpected event network-vif-plugged-39e41855-7c54-477d-957b-aa769bd16f60 for instance with vm_state deleted and task_state None.#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.424 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 1.2 GiB data, 1.5 GiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 6.1 KiB/s wr, 137 op/s
Jan 30 23:59:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 30 23:59:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4235446739' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.647 239942 DEBUG oslo_concurrency.processutils [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.652 239942 DEBUG nova.compute.provider_tree [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.672 239942 DEBUG nova.scheduler.client.report [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.694 239942 DEBUG oslo_concurrency.lockutils [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.735 239942 INFO nova.scheduler.client.report [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Deleted allocations for instance e718387a-7f1c-476e-a53d-69bf63413c12#033[00m
Jan 30 23:59:41 np0005603435 nova_compute[239938]: 2026-01-31 04:59:41.823 239942 DEBUG oslo_concurrency.lockutils [None req-87ffe32c-0869-4eea-b16b-1cef2e52f119 d1424589a4cc422c930f4c65f8538d1a f5ae37c02aa74bf084cd851f4b233192 - - default default] Lock "e718387a-7f1c-476e-a53d-69bf63413c12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:59:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 30 23:59:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4176048910' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 30 23:59:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 5.3 KiB/s wr, 134 op/s
Jan 30 23:59:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e458 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:59:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e458 do_prune osdmap full prune enabled
Jan 30 23:59:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e459 e459: 3 total, 3 up, 3 in
Jan 30 23:59:43 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e459: 3 total, 3 up, 3 in
Jan 30 23:59:43 np0005603435 nova_compute[239938]: 2026-01-31 04:59:43.757 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:44 np0005603435 nova_compute[239938]: 2026-01-31 04:59:44.108 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:44 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:44.550 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 30 23:59:45 np0005603435 nova_compute[239938]: 2026-01-31 04:59:45.470 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 11 MiB/s wr, 137 op/s
Jan 30 23:59:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 1.6 GiB data, 1.8 GiB used, 58 GiB / 60 GiB avail; 173 KiB/s rd, 63 MiB/s wr, 272 op/s
Jan 30 23:59:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e459 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:59:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e459 do_prune osdmap full prune enabled
Jan 30 23:59:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e460 e460: 3 total, 3 up, 3 in
Jan 30 23:59:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e460: 3 total, 3 up, 3 in
Jan 30 23:59:49 np0005603435 nova_compute[239938]: 2026-01-31 04:59:49.109 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:49 np0005603435 nova_compute[239938]: 2026-01-31 04:59:49.226 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 1.6 GiB data, 1.8 GiB used, 58 GiB / 60 GiB avail; 152 KiB/s rd, 61 MiB/s wr, 240 op/s
Jan 30 23:59:50 np0005603435 nova_compute[239938]: 2026-01-31 04:59:50.471 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e460 do_prune osdmap full prune enabled
Jan 30 23:59:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e461 e461: 3 total, 3 up, 3 in
Jan 30 23:59:51 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e461: 3 total, 3 up, 3 in
Jan 30 23:59:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 1.8 GiB data, 2.0 GiB used, 58 GiB / 60 GiB avail; 96 KiB/s rd, 88 MiB/s wr, 171 op/s
Jan 30 23:59:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e461 do_prune osdmap full prune enabled
Jan 30 23:59:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e462 e462: 3 total, 3 up, 3 in
Jan 30 23:59:53 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e462: 3 total, 3 up, 3 in
Jan 30 23:59:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 119 KiB/s rd, 89 MiB/s wr, 209 op/s
Jan 30 23:59:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:59:54 np0005603435 nova_compute[239938]: 2026-01-31 04:59:54.077 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835579.0766273, e718387a-7f1c-476e-a53d-69bf63413c12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 30 23:59:54 np0005603435 nova_compute[239938]: 2026-01-31 04:59:54.078 239942 INFO nova.compute.manager [-] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] VM Stopped (Lifecycle Event)#033[00m
Jan 30 23:59:54 np0005603435 nova_compute[239938]: 2026-01-31 04:59:54.105 239942 DEBUG nova.compute.manager [None req-14213f57-8e95-480f-ad92-1a0cf34747bf - - - - - -] [instance: e718387a-7f1c-476e-a53d-69bf63413c12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 30 23:59:54 np0005603435 nova_compute[239938]: 2026-01-31 04:59:54.149 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:55 np0005603435 nova_compute[239938]: 2026-01-31 04:59:55.529 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 108 KiB/s rd, 80 MiB/s wr, 189 op/s
Jan 30 23:59:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:55.923 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 30 23:59:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:55.924 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 30 23:59:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 04:59:55.924 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 30 23:59:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 111 KiB/s rd, 68 MiB/s wr, 187 op/s
Jan 30 23:59:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e462 do_prune osdmap full prune enabled
Jan 30 23:59:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e463 e463: 3 total, 3 up, 3 in
Jan 30 23:59:58 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e463: 3 total, 3 up, 3 in
Jan 30 23:59:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e463 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 30 23:59:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e463 do_prune osdmap full prune enabled
Jan 30 23:59:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e464 e464: 3 total, 3 up, 3 in
Jan 30 23:59:58 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e464: 3 total, 3 up, 3 in
Jan 30 23:59:59 np0005603435 nova_compute[239938]: 2026-01-31 04:59:59.153 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 30 23:59:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 28 KiB/s rd, 660 KiB/s wr, 39 op/s
Jan 31 00:00:00 np0005603435 nova_compute[239938]: 2026-01-31 05:00:00.531 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 590 KiB/s rd, 513 KiB/s wr, 31 op/s
Jan 31 00:00:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e464 do_prune osdmap full prune enabled
Jan 31 00:00:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e465 e465: 3 total, 3 up, 3 in
Jan 31 00:00:02 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e465: 3 total, 3 up, 3 in
Jan 31 00:00:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:00:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1649997090' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:00:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:00:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1649997090' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:00:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.0 KiB/s wr, 84 op/s
Jan 31 00:00:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e465 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:00:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:00:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4220699155' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:00:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:00:03 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4220699155' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:00:04 np0005603435 nova_compute[239938]: 2026-01-31 05:00:04.157 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:00:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/466474466' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:00:05 np0005603435 nova_compute[239938]: 2026-01-31 05:00:05.532 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.0 MiB/s wr, 105 op/s
Jan 31 00:00:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e465 do_prune osdmap full prune enabled
Jan 31 00:00:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e466 e466: 3 total, 3 up, 3 in
Jan 31 00:00:05 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e466: 3 total, 3 up, 3 in
Jan 31 00:00:06 np0005603435 podman[270525]: 2026-01-31 05:00:06.08927467 +0000 UTC m=+0.053226551 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true)
Jan 31 00:00:06 np0005603435 podman[270526]: 2026-01-31 05:00:06.134595046 +0000 UTC m=+0.091086325 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 00:00:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_05:00:06
Jan 31 00:00:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 00:00:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 31 00:00:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'vms']
Jan 31 00:00:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 00:00:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e466 do_prune osdmap full prune enabled
Jan 31 00:00:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e467 e467: 3 total, 3 up, 3 in
Jan 31 00:00:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e467: 3 total, 3 up, 3 in
Jan 31 00:00:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:00:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:00:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:00:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:00:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:00:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:00:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 5.2 MiB/s rd, 5.6 MiB/s wr, 251 op/s
Jan 31 00:00:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 00:00:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 00:00:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 00:00:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 00:00:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 00:00:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:00:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 00:00:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 00:00:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 00:00:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 00:00:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 00:00:07 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 00:00:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 00:00:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 00:00:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 00:00:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 00:00:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 00:00:08 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 00:00:08 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:00:08 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 00:00:08 np0005603435 podman[270712]: 2026-01-31 05:00:08.267532411 +0000 UTC m=+0.068144704 container create 05ad91667447839f1f764f33e921af392319db510af4f97baaf244c04dc5d7e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_darwin, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 00:00:08 np0005603435 podman[270712]: 2026-01-31 05:00:08.228883288 +0000 UTC m=+0.029495421 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:00:08 np0005603435 systemd[1]: Started libpod-conmon-05ad91667447839f1f764f33e921af392319db510af4f97baaf244c04dc5d7e2.scope.
Jan 31 00:00:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 00:00:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 00:00:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 00:00:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 00:00:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 00:00:08 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:00:08 np0005603435 podman[270712]: 2026-01-31 05:00:08.403823149 +0000 UTC m=+0.204435312 container init 05ad91667447839f1f764f33e921af392319db510af4f97baaf244c04dc5d7e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_darwin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 00:00:08 np0005603435 podman[270712]: 2026-01-31 05:00:08.416096089 +0000 UTC m=+0.216708212 container start 05ad91667447839f1f764f33e921af392319db510af4f97baaf244c04dc5d7e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_darwin, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 00:00:08 np0005603435 romantic_darwin[270728]: 167 167
Jan 31 00:00:08 np0005603435 systemd[1]: libpod-05ad91667447839f1f764f33e921af392319db510af4f97baaf244c04dc5d7e2.scope: Deactivated successfully.
Jan 31 00:00:08 np0005603435 podman[270712]: 2026-01-31 05:00:08.438766212 +0000 UTC m=+0.239378405 container attach 05ad91667447839f1f764f33e921af392319db510af4f97baaf244c04dc5d7e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_darwin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 00:00:08 np0005603435 podman[270712]: 2026-01-31 05:00:08.43949773 +0000 UTC m=+0.240109853 container died 05ad91667447839f1f764f33e921af392319db510af4f97baaf244c04dc5d7e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 31 00:00:08 np0005603435 systemd[1]: var-lib-containers-storage-overlay-3809a85ba6220d7f6efde8d2ca0e213a8d68181415faba7be05d84daf9fa847c-merged.mount: Deactivated successfully.
Jan 31 00:00:08 np0005603435 podman[270712]: 2026-01-31 05:00:08.646445053 +0000 UTC m=+0.447057166 container remove 05ad91667447839f1f764f33e921af392319db510af4f97baaf244c04dc5d7e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 00:00:08 np0005603435 systemd[1]: libpod-conmon-05ad91667447839f1f764f33e921af392319db510af4f97baaf244c04dc5d7e2.scope: Deactivated successfully.
Jan 31 00:00:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e467 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:00:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e467 do_prune osdmap full prune enabled
Jan 31 00:00:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e468 e468: 3 total, 3 up, 3 in
Jan 31 00:00:08 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e468: 3 total, 3 up, 3 in
Jan 31 00:00:08 np0005603435 podman[270754]: 2026-01-31 05:00:08.883842287 +0000 UTC m=+0.099745405 container create 0cba379d6bfd9df76fec94356038cb2578ddc75e02c1fb0d85bfbddff3d83772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_stonebraker, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 00:00:08 np0005603435 podman[270754]: 2026-01-31 05:00:08.818600086 +0000 UTC m=+0.034503244 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:00:09 np0005603435 systemd[1]: Started libpod-conmon-0cba379d6bfd9df76fec94356038cb2578ddc75e02c1fb0d85bfbddff3d83772.scope.
Jan 31 00:00:09 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:00:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7358843af035bc7b84f83c717face3ca4c7942b4540603a96d548530881012/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 00:00:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7358843af035bc7b84f83c717face3ca4c7942b4540603a96d548530881012/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 00:00:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7358843af035bc7b84f83c717face3ca4c7942b4540603a96d548530881012/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 00:00:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7358843af035bc7b84f83c717face3ca4c7942b4540603a96d548530881012/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 00:00:09 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7358843af035bc7b84f83c717face3ca4c7942b4540603a96d548530881012/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 00:00:09 np0005603435 nova_compute[239938]: 2026-01-31 05:00:09.162 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:09 np0005603435 podman[270754]: 2026-01-31 05:00:09.314172674 +0000 UTC m=+0.530075782 container init 0cba379d6bfd9df76fec94356038cb2578ddc75e02c1fb0d85bfbddff3d83772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:00:09 np0005603435 podman[270754]: 2026-01-31 05:00:09.323997774 +0000 UTC m=+0.539900892 container start 0cba379d6bfd9df76fec94356038cb2578ddc75e02c1fb0d85bfbddff3d83772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 00:00:09 np0005603435 podman[270754]: 2026-01-31 05:00:09.406883067 +0000 UTC m=+0.622786175 container attach 0cba379d6bfd9df76fec94356038cb2578ddc75e02c1fb0d85bfbddff3d83772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 00:00:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.6 MiB/s wr, 168 op/s
Jan 31 00:00:09 np0005603435 zealous_stonebraker[270771]: --> passed data devices: 0 physical, 3 LVM
Jan 31 00:00:09 np0005603435 zealous_stonebraker[270771]: --> All data devices are unavailable
Jan 31 00:00:09 np0005603435 systemd[1]: libpod-0cba379d6bfd9df76fec94356038cb2578ddc75e02c1fb0d85bfbddff3d83772.scope: Deactivated successfully.
Jan 31 00:00:09 np0005603435 podman[270754]: 2026-01-31 05:00:09.867405661 +0000 UTC m=+1.083308779 container died 0cba379d6bfd9df76fec94356038cb2578ddc75e02c1fb0d85bfbddff3d83772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 00:00:09 np0005603435 systemd[1]: var-lib-containers-storage-overlay-2d7358843af035bc7b84f83c717face3ca4c7942b4540603a96d548530881012-merged.mount: Deactivated successfully.
Jan 31 00:00:09 np0005603435 podman[270754]: 2026-01-31 05:00:09.926284118 +0000 UTC m=+1.142187206 container remove 0cba379d6bfd9df76fec94356038cb2578ddc75e02c1fb0d85bfbddff3d83772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_stonebraker, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:00:09 np0005603435 systemd[1]: libpod-conmon-0cba379d6bfd9df76fec94356038cb2578ddc75e02c1fb0d85bfbddff3d83772.scope: Deactivated successfully.
Jan 31 00:00:10 np0005603435 podman[270869]: 2026-01-31 05:00:10.376285175 +0000 UTC m=+0.058313555 container create 33d0c6e28812d1a1fe24e6a3ffb04a6ea071dba67b2cb3cbdba0dadc3341b653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 00:00:10 np0005603435 systemd[1]: Started libpod-conmon-33d0c6e28812d1a1fe24e6a3ffb04a6ea071dba67b2cb3cbdba0dadc3341b653.scope.
Jan 31 00:00:10 np0005603435 podman[270869]: 2026-01-31 05:00:10.350104996 +0000 UTC m=+0.032133416 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:00:10 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:00:10 np0005603435 podman[270869]: 2026-01-31 05:00:10.460118402 +0000 UTC m=+0.142146822 container init 33d0c6e28812d1a1fe24e6a3ffb04a6ea071dba67b2cb3cbdba0dadc3341b653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goldwasser, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 00:00:10 np0005603435 podman[270869]: 2026-01-31 05:00:10.47026405 +0000 UTC m=+0.152292420 container start 33d0c6e28812d1a1fe24e6a3ffb04a6ea071dba67b2cb3cbdba0dadc3341b653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 00:00:10 np0005603435 podman[270869]: 2026-01-31 05:00:10.47395714 +0000 UTC m=+0.155985570 container attach 33d0c6e28812d1a1fe24e6a3ffb04a6ea071dba67b2cb3cbdba0dadc3341b653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 00:00:10 np0005603435 reverent_goldwasser[270885]: 167 167
Jan 31 00:00:10 np0005603435 systemd[1]: libpod-33d0c6e28812d1a1fe24e6a3ffb04a6ea071dba67b2cb3cbdba0dadc3341b653.scope: Deactivated successfully.
Jan 31 00:00:10 np0005603435 podman[270869]: 2026-01-31 05:00:10.47561444 +0000 UTC m=+0.157642820 container died 33d0c6e28812d1a1fe24e6a3ffb04a6ea071dba67b2cb3cbdba0dadc3341b653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 00:00:10 np0005603435 systemd[1]: var-lib-containers-storage-overlay-25e75a9ef1d1bdc95589dd729181561b30b9d9adcbaa6510b9939ad2637d69cd-merged.mount: Deactivated successfully.
Jan 31 00:00:10 np0005603435 podman[270869]: 2026-01-31 05:00:10.518358174 +0000 UTC m=+0.200386514 container remove 33d0c6e28812d1a1fe24e6a3ffb04a6ea071dba67b2cb3cbdba0dadc3341b653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goldwasser, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 00:00:10 np0005603435 systemd[1]: libpod-conmon-33d0c6e28812d1a1fe24e6a3ffb04a6ea071dba67b2cb3cbdba0dadc3341b653.scope: Deactivated successfully.
Jan 31 00:00:10 np0005603435 nova_compute[239938]: 2026-01-31 05:00:10.572 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:10 np0005603435 podman[270908]: 2026-01-31 05:00:10.679034337 +0000 UTC m=+0.047899121 container create 17405cfb84d8fc0fbf18943883ada232ca1eaa92a20ec1b679c36ff4cb2cb366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 00:00:10 np0005603435 systemd[1]: Started libpod-conmon-17405cfb84d8fc0fbf18943883ada232ca1eaa92a20ec1b679c36ff4cb2cb366.scope.
Jan 31 00:00:10 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:00:10 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f59ac33be8aac638cc812dd69e2f2b96b2df737f8103fc1f29db76d81dc2ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 00:00:10 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f59ac33be8aac638cc812dd69e2f2b96b2df737f8103fc1f29db76d81dc2ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 00:00:10 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f59ac33be8aac638cc812dd69e2f2b96b2df737f8103fc1f29db76d81dc2ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 00:00:10 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f59ac33be8aac638cc812dd69e2f2b96b2df737f8103fc1f29db76d81dc2ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 00:00:10 np0005603435 podman[270908]: 2026-01-31 05:00:10.661021057 +0000 UTC m=+0.029885851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:00:10 np0005603435 podman[270908]: 2026-01-31 05:00:10.778823743 +0000 UTC m=+0.147688527 container init 17405cfb84d8fc0fbf18943883ada232ca1eaa92a20ec1b679c36ff4cb2cb366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_brown, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 00:00:10 np0005603435 podman[270908]: 2026-01-31 05:00:10.786604663 +0000 UTC m=+0.155469467 container start 17405cfb84d8fc0fbf18943883ada232ca1eaa92a20ec1b679c36ff4cb2cb366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 00:00:10 np0005603435 podman[270908]: 2026-01-31 05:00:10.790744364 +0000 UTC m=+0.159609148 container attach 17405cfb84d8fc0fbf18943883ada232ca1eaa92a20ec1b679c36ff4cb2cb366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_brown, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 00:00:11 np0005603435 silly_brown[270925]: {
Jan 31 00:00:11 np0005603435 silly_brown[270925]:    "0": [
Jan 31 00:00:11 np0005603435 silly_brown[270925]:        {
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "devices": [
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "/dev/loop3"
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            ],
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "lv_name": "ceph_lv0",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "lv_size": "21470642176",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "name": "ceph_lv0",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "tags": {
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.cephx_lockbox_secret": "",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.cluster_name": "ceph",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.crush_device_class": "",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.encrypted": "0",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.objectstore": "bluestore",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.osd_id": "0",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.type": "block",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.vdo": "0",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.with_tpm": "0"
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            },
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "type": "block",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "vg_name": "ceph_vg0"
Jan 31 00:00:11 np0005603435 silly_brown[270925]:        }
Jan 31 00:00:11 np0005603435 silly_brown[270925]:    ],
Jan 31 00:00:11 np0005603435 silly_brown[270925]:    "1": [
Jan 31 00:00:11 np0005603435 silly_brown[270925]:        {
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "devices": [
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "/dev/loop4"
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            ],
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "lv_name": "ceph_lv1",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "lv_size": "21470642176",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "name": "ceph_lv1",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "tags": {
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.cephx_lockbox_secret": "",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.cluster_name": "ceph",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.crush_device_class": "",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.encrypted": "0",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.objectstore": "bluestore",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.osd_id": "1",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.type": "block",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.vdo": "0",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.with_tpm": "0"
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            },
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "type": "block",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "vg_name": "ceph_vg1"
Jan 31 00:00:11 np0005603435 silly_brown[270925]:        }
Jan 31 00:00:11 np0005603435 silly_brown[270925]:    ],
Jan 31 00:00:11 np0005603435 silly_brown[270925]:    "2": [
Jan 31 00:00:11 np0005603435 silly_brown[270925]:        {
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "devices": [
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "/dev/loop5"
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            ],
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "lv_name": "ceph_lv2",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "lv_size": "21470642176",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "name": "ceph_lv2",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "tags": {
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.cephx_lockbox_secret": "",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.cluster_name": "ceph",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.crush_device_class": "",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.encrypted": "0",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.objectstore": "bluestore",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.osd_id": "2",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.type": "block",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.vdo": "0",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:                "ceph.with_tpm": "0"
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            },
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "type": "block",
Jan 31 00:00:11 np0005603435 silly_brown[270925]:            "vg_name": "ceph_vg2"
Jan 31 00:00:11 np0005603435 silly_brown[270925]:        }
Jan 31 00:00:11 np0005603435 silly_brown[270925]:    ]
Jan 31 00:00:11 np0005603435 silly_brown[270925]: }
Jan 31 00:00:11 np0005603435 systemd[1]: libpod-17405cfb84d8fc0fbf18943883ada232ca1eaa92a20ec1b679c36ff4cb2cb366.scope: Deactivated successfully.
Jan 31 00:00:11 np0005603435 podman[270908]: 2026-01-31 05:00:11.088267448 +0000 UTC m=+0.457132222 container died 17405cfb84d8fc0fbf18943883ada232ca1eaa92a20ec1b679c36ff4cb2cb366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 00:00:11 np0005603435 systemd[1]: var-lib-containers-storage-overlay-a3f59ac33be8aac638cc812dd69e2f2b96b2df737f8103fc1f29db76d81dc2ad-merged.mount: Deactivated successfully.
Jan 31 00:00:11 np0005603435 podman[270908]: 2026-01-31 05:00:11.136847974 +0000 UTC m=+0.505712738 container remove 17405cfb84d8fc0fbf18943883ada232ca1eaa92a20ec1b679c36ff4cb2cb366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 00:00:11 np0005603435 systemd[1]: libpod-conmon-17405cfb84d8fc0fbf18943883ada232ca1eaa92a20ec1b679c36ff4cb2cb366.scope: Deactivated successfully.
Jan 31 00:00:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.0 MiB/s wr, 202 op/s
Jan 31 00:00:11 np0005603435 podman[271007]: 2026-01-31 05:00:11.555822633 +0000 UTC m=+0.039953946 container create fe2edeca27058165fdc8491cbbc86514ac89ad6ae0e65aadd23fd11c14a8d7a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_newton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 00:00:11 np0005603435 systemd[1]: Started libpod-conmon-fe2edeca27058165fdc8491cbbc86514ac89ad6ae0e65aadd23fd11c14a8d7a4.scope.
Jan 31 00:00:11 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:00:11 np0005603435 podman[271007]: 2026-01-31 05:00:11.537812374 +0000 UTC m=+0.021943687 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:00:11 np0005603435 podman[271007]: 2026-01-31 05:00:11.63554489 +0000 UTC m=+0.119676243 container init fe2edeca27058165fdc8491cbbc86514ac89ad6ae0e65aadd23fd11c14a8d7a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_newton, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 00:00:11 np0005603435 podman[271007]: 2026-01-31 05:00:11.641526616 +0000 UTC m=+0.125657919 container start fe2edeca27058165fdc8491cbbc86514ac89ad6ae0e65aadd23fd11c14a8d7a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 00:00:11 np0005603435 podman[271007]: 2026-01-31 05:00:11.6453959 +0000 UTC m=+0.129527273 container attach fe2edeca27058165fdc8491cbbc86514ac89ad6ae0e65aadd23fd11c14a8d7a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 00:00:11 np0005603435 sweet_newton[271023]: 167 167
Jan 31 00:00:11 np0005603435 systemd[1]: libpod-fe2edeca27058165fdc8491cbbc86514ac89ad6ae0e65aadd23fd11c14a8d7a4.scope: Deactivated successfully.
Jan 31 00:00:11 np0005603435 podman[271007]: 2026-01-31 05:00:11.648024444 +0000 UTC m=+0.132155797 container died fe2edeca27058165fdc8491cbbc86514ac89ad6ae0e65aadd23fd11c14a8d7a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_newton, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 00:00:11 np0005603435 systemd[1]: var-lib-containers-storage-overlay-4d5648e611ce76e9a9948a02332a2f7cb78259c098aa64aac41b09abce5cdcc3-merged.mount: Deactivated successfully.
Jan 31 00:00:11 np0005603435 podman[271007]: 2026-01-31 05:00:11.692203063 +0000 UTC m=+0.176334386 container remove fe2edeca27058165fdc8491cbbc86514ac89ad6ae0e65aadd23fd11c14a8d7a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_newton, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:00:11 np0005603435 systemd[1]: libpod-conmon-fe2edeca27058165fdc8491cbbc86514ac89ad6ae0e65aadd23fd11c14a8d7a4.scope: Deactivated successfully.
Jan 31 00:00:11 np0005603435 podman[271046]: 2026-01-31 05:00:11.806401731 +0000 UTC m=+0.039677160 container create 8d69a6b41b0720535387a05beed7383999e07fe9fe0b11470030b94320bf56c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mirzakhani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 00:00:11 np0005603435 systemd[1]: Started libpod-conmon-8d69a6b41b0720535387a05beed7383999e07fe9fe0b11470030b94320bf56c1.scope.
Jan 31 00:00:11 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:00:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e3d88f943913b9a01e1d5ee0055f0bcf16985fff805e57381b1ec3baa84db9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 00:00:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e3d88f943913b9a01e1d5ee0055f0bcf16985fff805e57381b1ec3baa84db9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 00:00:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e3d88f943913b9a01e1d5ee0055f0bcf16985fff805e57381b1ec3baa84db9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 00:00:11 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e3d88f943913b9a01e1d5ee0055f0bcf16985fff805e57381b1ec3baa84db9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 00:00:11 np0005603435 podman[271046]: 2026-01-31 05:00:11.873852538 +0000 UTC m=+0.107127977 container init 8d69a6b41b0720535387a05beed7383999e07fe9fe0b11470030b94320bf56c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mirzakhani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 00:00:11 np0005603435 podman[271046]: 2026-01-31 05:00:11.784860015 +0000 UTC m=+0.018135524 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:00:11 np0005603435 podman[271046]: 2026-01-31 05:00:11.880318166 +0000 UTC m=+0.113593585 container start 8d69a6b41b0720535387a05beed7383999e07fe9fe0b11470030b94320bf56c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mirzakhani, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 00:00:11 np0005603435 podman[271046]: 2026-01-31 05:00:11.883556175 +0000 UTC m=+0.116831614 container attach 8d69a6b41b0720535387a05beed7383999e07fe9fe0b11470030b94320bf56c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mirzakhani, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 00:00:12 np0005603435 lvm[271140]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 00:00:12 np0005603435 lvm[271140]: VG ceph_vg0 finished
Jan 31 00:00:12 np0005603435 lvm[271141]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 00:00:12 np0005603435 lvm[271141]: VG ceph_vg1 finished
Jan 31 00:00:12 np0005603435 lvm[271143]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 00:00:12 np0005603435 lvm[271143]: VG ceph_vg2 finished
Jan 31 00:00:12 np0005603435 lucid_mirzakhani[271062]: {}
Jan 31 00:00:12 np0005603435 systemd[1]: libpod-8d69a6b41b0720535387a05beed7383999e07fe9fe0b11470030b94320bf56c1.scope: Deactivated successfully.
Jan 31 00:00:12 np0005603435 systemd[1]: libpod-8d69a6b41b0720535387a05beed7383999e07fe9fe0b11470030b94320bf56c1.scope: Consumed 1.141s CPU time.
Jan 31 00:00:12 np0005603435 podman[271046]: 2026-01-31 05:00:12.656884945 +0000 UTC m=+0.890160404 container died 8d69a6b41b0720535387a05beed7383999e07fe9fe0b11470030b94320bf56c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mirzakhani, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 00:00:12 np0005603435 systemd[1]: var-lib-containers-storage-overlay-a8e3d88f943913b9a01e1d5ee0055f0bcf16985fff805e57381b1ec3baa84db9-merged.mount: Deactivated successfully.
Jan 31 00:00:12 np0005603435 podman[271046]: 2026-01-31 05:00:12.708949426 +0000 UTC m=+0.942224895 container remove 8d69a6b41b0720535387a05beed7383999e07fe9fe0b11470030b94320bf56c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 00:00:12 np0005603435 systemd[1]: libpod-conmon-8d69a6b41b0720535387a05beed7383999e07fe9fe0b11470030b94320bf56c1.scope: Deactivated successfully.
Jan 31 00:00:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 00:00:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:00:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 00:00:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:00:12 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:00:12 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:00:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.6 MiB/s wr, 222 op/s
Jan 31 00:00:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e468 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:00:14 np0005603435 nova_compute[239938]: 2026-01-31 05:00:14.166 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:14 np0005603435 nova_compute[239938]: 2026-01-31 05:00:14.546 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Acquiring lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:14 np0005603435 nova_compute[239938]: 2026-01-31 05:00:14.547 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:14 np0005603435 nova_compute[239938]: 2026-01-31 05:00:14.564 239942 DEBUG nova.compute.manager [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 00:00:14 np0005603435 nova_compute[239938]: 2026-01-31 05:00:14.648 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:14 np0005603435 nova_compute[239938]: 2026-01-31 05:00:14.649 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:14 np0005603435 nova_compute[239938]: 2026-01-31 05:00:14.656 239942 DEBUG nova.virt.hardware [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 00:00:14 np0005603435 nova_compute[239938]: 2026-01-31 05:00:14.657 239942 INFO nova.compute.claims [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 00:00:14 np0005603435 nova_compute[239938]: 2026-01-31 05:00:14.767 239942 DEBUG oslo_concurrency.processutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:00:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:00:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3751225765' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.383 239942 DEBUG oslo_concurrency.processutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.389 239942 DEBUG nova.compute.provider_tree [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.405 239942 DEBUG nova.scheduler.client.report [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.434 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.435 239942 DEBUG nova.compute.manager [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.491 239942 DEBUG nova.compute.manager [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.492 239942 DEBUG nova.network.neutron [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.514 239942 INFO nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 00:00:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.1 MiB/s wr, 168 op/s
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.561 239942 DEBUG nova.compute.manager [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.574 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.610 239942 INFO nova.virt.block_device [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Booting with volume 3306ded3-6afd-44dd-980c-42b24dc15410 at /dev/vdb
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.790 239942 DEBUG os_brick.utils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.791 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.808 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.809 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[fa81b589-422f-485a-b59b-bca890c90b6e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.811 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.819 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.819 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[75fda91d-3516-40a0-ad32-82b7a3df5ff3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.821 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.830 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.830 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[308274c3-7827-4847-a4b6-a25bebc04550]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.832 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3472d6-f5cd-42ff-9d25-cb4a89bb7102]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.832 239942 DEBUG oslo_concurrency.processutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.859 239942 DEBUG oslo_concurrency.processutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.862 239942 DEBUG os_brick.initiator.connectors.lightos [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.862 239942 DEBUG os_brick.initiator.connectors.lightos [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.863 239942 DEBUG os_brick.initiator.connectors.lightos [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.863 239942 DEBUG os_brick.utils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 00:00:15 np0005603435 nova_compute[239938]: 2026-01-31 05:00:15.864 239942 DEBUG nova.virt.block_device [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Updating existing volume attachment record: 290c343f-6481-488e-b262-3ef856e365f6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 00:00:16 np0005603435 nova_compute[239938]: 2026-01-31 05:00:16.139 239942 DEBUG nova.policy [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2dc5826041a84e3897b017d9ad6bbe2c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f4019d294054f68b35b8f860129d22b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 00:00:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:00:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4100742291' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:00:16 np0005603435 nova_compute[239938]: 2026-01-31 05:00:16.965 239942 DEBUG nova.network.neutron [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Successfully created port: 27384399-6d62-46d0-a4c1-3ef6d37998a7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.222 239942 DEBUG nova.compute.manager [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.225 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.225 239942 INFO nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Creating image(s)
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.257 239942 DEBUG nova.storage.rbd_utils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] rbd image e9012993-27a3-4599-ba2e-d9f3ecf2551e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.291 239942 DEBUG nova.storage.rbd_utils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] rbd image e9012993-27a3-4599-ba2e-d9f3ecf2551e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.326 239942 DEBUG nova.storage.rbd_utils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] rbd image e9012993-27a3-4599-ba2e-d9f3ecf2551e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.330 239942 DEBUG oslo_concurrency.processutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.63105571429121e-06 of space, bias 1.0, pg target 0.0022893167142873628 quantized to 32 (current 32)
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.03404683159176568 of space, bias 1.0, pg target 10.214049477529704 quantized to 32 (current 32)
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.00034739392541337775 of space, bias 1.0, pg target 0.10074423836987954 quantized to 32 (current 32)
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006672872295057959 of space, bias 1.0, pg target 0.1935132965566808 quantized to 32 (current 32)
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.257632222330538e-07 of space, bias 4.0, pg target 0.0009578853377903424 quantized to 16 (current 16)
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011064783160773588 quantized to 32 (current 32)
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012171261476850949 quantized to 32 (current 32)
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.392 239942 DEBUG oslo_concurrency.processutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.393 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Acquiring lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.393 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.393 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.410 239942 DEBUG nova.storage.rbd_utils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] rbd image e9012993-27a3-4599-ba2e-d9f3ecf2551e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.413 239942 DEBUG oslo_concurrency.processutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 e9012993-27a3-4599-ba2e-d9f3ecf2551e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 00:00:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 763 KiB/s rd, 892 KiB/s wr, 93 op/s
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.750 239942 DEBUG oslo_concurrency.processutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 e9012993-27a3-4599-ba2e-d9f3ecf2551e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.831 239942 DEBUG nova.storage.rbd_utils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] resizing rbd image e9012993-27a3-4599-ba2e-d9f3ecf2551e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.871 239942 DEBUG nova.network.neutron [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Successfully updated port: 27384399-6d62-46d0-a4c1-3ef6d37998a7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.935 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Acquiring lock "refresh_cache-e9012993-27a3-4599-ba2e-d9f3ecf2551e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.936 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Acquired lock "refresh_cache-e9012993-27a3-4599-ba2e-d9f3ecf2551e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.936 239942 DEBUG nova.network.neutron [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.944 239942 DEBUG nova.objects.instance [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lazy-loading 'migration_context' on Instance uuid e9012993-27a3-4599-ba2e-d9f3ecf2551e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.964 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.964 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Ensure instance console log exists: /var/lib/nova/instances/e9012993-27a3-4599-ba2e-d9f3ecf2551e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.965 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.966 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 00:00:17 np0005603435 nova_compute[239938]: 2026-01-31 05:00:17.967 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 00:00:18 np0005603435 nova_compute[239938]: 2026-01-31 05:00:18.031 239942 DEBUG nova.compute.manager [req-1ffd7fba-b7db-4da4-a303-61997a71dd4c req-3a06f672-1b0f-4297-a963-c68da763799e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Received event network-changed-27384399-6d62-46d0-a4c1-3ef6d37998a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 00:00:18 np0005603435 nova_compute[239938]: 2026-01-31 05:00:18.031 239942 DEBUG nova.compute.manager [req-1ffd7fba-b7db-4da4-a303-61997a71dd4c req-3a06f672-1b0f-4297-a963-c68da763799e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Refreshing instance network info cache due to event network-changed-27384399-6d62-46d0-a4c1-3ef6d37998a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 00:00:18 np0005603435 nova_compute[239938]: 2026-01-31 05:00:18.031 239942 DEBUG oslo_concurrency.lockutils [req-1ffd7fba-b7db-4da4-a303-61997a71dd4c req-3a06f672-1b0f-4297-a963-c68da763799e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-e9012993-27a3-4599-ba2e-d9f3ecf2551e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 00:00:18 np0005603435 nova_compute[239938]: 2026-01-31 05:00:18.157 239942 DEBUG nova.network.neutron [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 00:00:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e468 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:00:19 np0005603435 nova_compute[239938]: 2026-01-31 05:00:19.170 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:00:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 713 KiB/s rd, 833 KiB/s wr, 86 op/s
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.187 239942 DEBUG nova.network.neutron [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Updating instance_info_cache with network_info: [{"id": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "address": "fa:16:3e:86:2a:e5", "network": {"id": "3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1581492157-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f4019d294054f68b35b8f860129d22b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27384399-6d", "ovs_interfaceid": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.208 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Releasing lock "refresh_cache-e9012993-27a3-4599-ba2e-d9f3ecf2551e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.209 239942 DEBUG nova.compute.manager [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Instance network_info: |[{"id": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "address": "fa:16:3e:86:2a:e5", "network": {"id": "3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1581492157-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f4019d294054f68b35b8f860129d22b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27384399-6d", "ovs_interfaceid": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.209 239942 DEBUG oslo_concurrency.lockutils [req-1ffd7fba-b7db-4da4-a303-61997a71dd4c req-3a06f672-1b0f-4297-a963-c68da763799e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-e9012993-27a3-4599-ba2e-d9f3ecf2551e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.210 239942 DEBUG nova.network.neutron [req-1ffd7fba-b7db-4da4-a303-61997a71dd4c req-3a06f672-1b0f-4297-a963-c68da763799e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Refreshing network info cache for port 27384399-6d62-46d0-a4c1-3ef6d37998a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.217 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Start _get_guest_xml network_info=[{"id": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "address": "fa:16:3e:86:2a:e5", "network": {"id": "3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1581492157-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f4019d294054f68b35b8f860129d22b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27384399-6d", "ovs_interfaceid": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'guest_format': None, 'image_id': 'bf004ad8-fb70-4caa-9170-9f02e22d687d'}], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'attachment_id': '290c343f-6481-488e-b262-3ef856e365f6', 'mount_device': '/dev/vdb', 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': -1, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3306ded3-6afd-44dd-980c-42b24dc15410', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3306ded3-6afd-44dd-980c-42b24dc15410', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'e9012993-27a3-4599-ba2e-d9f3ecf2551e', 'attached_at': '', 'detached_at': '', 'volume_id': '3306ded3-6afd-44dd-980c-42b24dc15410', 'serial': '3306ded3-6afd-44dd-980c-42b24dc15410'}, 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.223 239942 WARNING nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.235 239942 DEBUG nova.virt.libvirt.host [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.236 239942 DEBUG nova.virt.libvirt.host [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.241 239942 DEBUG nova.virt.libvirt.host [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.242 239942 DEBUG nova.virt.libvirt.host [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.243 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.244 239942 DEBUG nova.virt.hardware [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.245 239942 DEBUG nova.virt.hardware [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.245 239942 DEBUG nova.virt.hardware [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.246 239942 DEBUG nova.virt.hardware [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.246 239942 DEBUG nova.virt.hardware [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.247 239942 DEBUG nova.virt.hardware [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.247 239942 DEBUG nova.virt.hardware [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.248 239942 DEBUG nova.virt.hardware [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.248 239942 DEBUG nova.virt.hardware [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.249 239942 DEBUG nova.virt.hardware [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.249 239942 DEBUG nova.virt.hardware [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.254 239942 DEBUG oslo_concurrency.processutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.576 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:00:20 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3145333336' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.819 239942 DEBUG oslo_concurrency.processutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.837 239942 DEBUG nova.storage.rbd_utils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] rbd image e9012993-27a3-4599-ba2e-d9f3ecf2551e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 00:00:20 np0005603435 nova_compute[239938]: 2026-01-31 05:00:20.840 239942 DEBUG oslo_concurrency.processutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:00:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:00:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2602250934' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.371 239942 DEBUG oslo_concurrency.processutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.407 239942 DEBUG nova.virt.libvirt.vif [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T05:00:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-1097842089',display_name='tempest-instance-1097842089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1097842089',id=26,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMD3YnigGV3CL4giyFwlCjNM/6UHSkzWnApipbiRasjH338Xq5rHDDpCVCLLdljitMAt2WDx7ntFxYCKGX5r1AqnkrymoA5QWnrZy5vEJiKAOFpbaaA7QlN8aHJto9IoCQ==',key_name='tempest-keypair-933048906',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f4019d294054f68b35b8f860129d22b',ramdisk_id='',reservation_id='r-r0ikr5re',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-24587387',owner_user_name='tempest-VolumesBackupsTest-24587387-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T05:00:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2dc5826041a84e3897b017d9ad6bbe2c',uuid=e9012993-27a3-4599-ba2e-d9f3ecf2551e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "address": "fa:16:3e:86:2a:e5", "network": {"id": "3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1581492157-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f4019d294054f68b35b8f860129d22b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27384399-6d", "ovs_interfaceid": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.408 239942 DEBUG nova.network.os_vif_util [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Converting VIF {"id": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "address": "fa:16:3e:86:2a:e5", "network": {"id": "3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1581492157-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f4019d294054f68b35b8f860129d22b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27384399-6d", "ovs_interfaceid": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.409 239942 DEBUG nova.network.os_vif_util [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:2a:e5,bridge_name='br-int',has_traffic_filtering=True,id=27384399-6d62-46d0-a4c1-3ef6d37998a7,network=Network(3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27384399-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.411 239942 DEBUG nova.objects.instance [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lazy-loading 'pci_devices' on Instance uuid e9012993-27a3-4599-ba2e-d9f3ecf2551e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.432 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] End _get_guest_xml xml=<domain type="kvm">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  <uuid>e9012993-27a3-4599-ba2e-d9f3ecf2551e</uuid>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  <name>instance-0000001a</name>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  <metadata>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <nova:name>tempest-instance-1097842089</nova:name>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 05:00:20</nova:creationTime>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:        <nova:user uuid="2dc5826041a84e3897b017d9ad6bbe2c">tempest-VolumesBackupsTest-24587387-project-member</nova:user>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:        <nova:project uuid="6f4019d294054f68b35b8f860129d22b">tempest-VolumesBackupsTest-24587387</nova:project>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <nova:root type="image" uuid="bf004ad8-fb70-4caa-9170-9f02e22d687d"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:        <nova:port uuid="27384399-6d62-46d0-a4c1-3ef6d37998a7">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:        </nova:port>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  </metadata>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <system>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <entry name="serial">e9012993-27a3-4599-ba2e-d9f3ecf2551e</entry>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <entry name="uuid">e9012993-27a3-4599-ba2e-d9f3ecf2551e</entry>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    </system>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  <os>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  </os>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  <features>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <acpi/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <apic/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  </features>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  </clock>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  </cpu>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  <devices>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/e9012993-27a3-4599-ba2e-d9f3ecf2551e_disk">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      </source>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      </auth>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    </disk>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/e9012993-27a3-4599-ba2e-d9f3ecf2551e_disk.config">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      </source>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      </auth>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    </disk>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="volumes/volume-3306ded3-6afd-44dd-980c-42b24dc15410">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      </source>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      </auth>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <target dev="vdb" bus="virtio"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <serial>3306ded3-6afd-44dd-980c-42b24dc15410</serial>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    </disk>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:86:2a:e5"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <target dev="tap27384399-6d"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    </interface>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/e9012993-27a3-4599-ba2e-d9f3ecf2551e/console.log" append="off"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    </serial>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <video>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    </video>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    </rng>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 31 00:00:21 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:    </memballoon>
Jan 31 00:00:21 np0005603435 nova_compute[239938]:  </devices>
Jan 31 00:00:21 np0005603435 nova_compute[239938]: </domain>
Jan 31 00:00:21 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.434 239942 DEBUG nova.compute.manager [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Preparing to wait for external event network-vif-plugged-27384399-6d62-46d0-a4c1-3ef6d37998a7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.434 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Acquiring lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.435 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.435 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.436 239942 DEBUG nova.virt.libvirt.vif [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T05:00:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-1097842089',display_name='tempest-instance-1097842089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1097842089',id=26,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMD3YnigGV3CL4giyFwlCjNM/6UHSkzWnApipbiRasjH338Xq5rHDDpCVCLLdljitMAt2WDx7ntFxYCKGX5r1AqnkrymoA5QWnrZy5vEJiKAOFpbaaA7QlN8aHJto9IoCQ==',key_name='tempest-keypair-933048906',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f4019d294054f68b35b8f860129d22b',ramdisk_id='',reservation_id='r-r0ikr5re',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='v
irtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-24587387',owner_user_name='tempest-VolumesBackupsTest-24587387-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T05:00:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2dc5826041a84e3897b017d9ad6bbe2c',uuid=e9012993-27a3-4599-ba2e-d9f3ecf2551e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "address": "fa:16:3e:86:2a:e5", "network": {"id": "3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1581492157-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f4019d294054f68b35b8f860129d22b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27384399-6d", "ovs_interfaceid": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.437 239942 DEBUG nova.network.os_vif_util [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Converting VIF {"id": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "address": "fa:16:3e:86:2a:e5", "network": {"id": "3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1581492157-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f4019d294054f68b35b8f860129d22b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27384399-6d", "ovs_interfaceid": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.438 239942 DEBUG nova.network.os_vif_util [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:2a:e5,bridge_name='br-int',has_traffic_filtering=True,id=27384399-6d62-46d0-a4c1-3ef6d37998a7,network=Network(3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27384399-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.438 239942 DEBUG os_vif [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:2a:e5,bridge_name='br-int',has_traffic_filtering=True,id=27384399-6d62-46d0-a4c1-3ef6d37998a7,network=Network(3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27384399-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.439 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.439 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.440 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.442 239942 DEBUG nova.network.neutron [req-1ffd7fba-b7db-4da4-a303-61997a71dd4c req-3a06f672-1b0f-4297-a963-c68da763799e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Updated VIF entry in instance network info cache for port 27384399-6d62-46d0-a4c1-3ef6d37998a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.443 239942 DEBUG nova.network.neutron [req-1ffd7fba-b7db-4da4-a303-61997a71dd4c req-3a06f672-1b0f-4297-a963-c68da763799e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Updating instance_info_cache with network_info: [{"id": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "address": "fa:16:3e:86:2a:e5", "network": {"id": "3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1581492157-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f4019d294054f68b35b8f860129d22b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27384399-6d", "ovs_interfaceid": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.444 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.445 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27384399-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.445 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27384399-6d, col_values=(('external_ids', {'iface-id': '27384399-6d62-46d0-a4c1-3ef6d37998a7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:2a:e5', 'vm-uuid': 'e9012993-27a3-4599-ba2e-d9f3ecf2551e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.447 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:21 np0005603435 NetworkManager[49097]: <info>  [1769835621.4487] manager: (tap27384399-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.451 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.454 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.456 239942 INFO os_vif [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:2a:e5,bridge_name='br-int',has_traffic_filtering=True,id=27384399-6d62-46d0-a4c1-3ef6d37998a7,network=Network(3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27384399-6d')#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.500 239942 DEBUG oslo_concurrency.lockutils [req-1ffd7fba-b7db-4da4-a303-61997a71dd4c req-3a06f672-1b0f-4297-a963-c68da763799e c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-e9012993-27a3-4599-ba2e-d9f3ecf2551e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.526 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.527 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.527 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.527 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] No VIF found with MAC fa:16:3e:86:2a:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.528 239942 INFO nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Using config drive#033[00m
Jan 31 00:00:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 643 KiB/s rd, 1.3 MiB/s wr, 89 op/s
Jan 31 00:00:21 np0005603435 nova_compute[239938]: 2026-01-31 05:00:21.561 239942 DEBUG nova.storage.rbd_utils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] rbd image e9012993-27a3-4599-ba2e-d9f3ecf2551e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 00:00:22 np0005603435 nova_compute[239938]: 2026-01-31 05:00:22.458 239942 INFO nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Creating config drive at /var/lib/nova/instances/e9012993-27a3-4599-ba2e-d9f3ecf2551e/disk.config#033[00m
Jan 31 00:00:22 np0005603435 nova_compute[239938]: 2026-01-31 05:00:22.466 239942 DEBUG oslo_concurrency.processutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e9012993-27a3-4599-ba2e-d9f3ecf2551e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzrpfsnh4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:00:22 np0005603435 nova_compute[239938]: 2026-01-31 05:00:22.595 239942 DEBUG oslo_concurrency.processutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e9012993-27a3-4599-ba2e-d9f3ecf2551e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzrpfsnh4" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:00:22 np0005603435 nova_compute[239938]: 2026-01-31 05:00:22.631 239942 DEBUG nova.storage.rbd_utils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] rbd image e9012993-27a3-4599-ba2e-d9f3ecf2551e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 00:00:22 np0005603435 nova_compute[239938]: 2026-01-31 05:00:22.635 239942 DEBUG oslo_concurrency.processutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e9012993-27a3-4599-ba2e-d9f3ecf2551e/disk.config e9012993-27a3-4599-ba2e-d9f3ecf2551e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:00:22 np0005603435 nova_compute[239938]: 2026-01-31 05:00:22.772 239942 DEBUG oslo_concurrency.processutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e9012993-27a3-4599-ba2e-d9f3ecf2551e/disk.config e9012993-27a3-4599-ba2e-d9f3ecf2551e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:00:22 np0005603435 nova_compute[239938]: 2026-01-31 05:00:22.773 239942 INFO nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Deleting local config drive /var/lib/nova/instances/e9012993-27a3-4599-ba2e-d9f3ecf2551e/disk.config because it was imported into RBD.#033[00m
Jan 31 00:00:22 np0005603435 kernel: tap27384399-6d: entered promiscuous mode
Jan 31 00:00:22 np0005603435 NetworkManager[49097]: <info>  [1769835622.8311] manager: (tap27384399-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Jan 31 00:00:22 np0005603435 nova_compute[239938]: 2026-01-31 05:00:22.830 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:22 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:22Z|00250|binding|INFO|Claiming lport 27384399-6d62-46d0-a4c1-3ef6d37998a7 for this chassis.
Jan 31 00:00:22 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:22Z|00251|binding|INFO|27384399-6d62-46d0-a4c1-3ef6d37998a7: Claiming fa:16:3e:86:2a:e5 10.100.0.12
Jan 31 00:00:22 np0005603435 nova_compute[239938]: 2026-01-31 05:00:22.835 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:22 np0005603435 nova_compute[239938]: 2026-01-31 05:00:22.837 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:22.850 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:2a:e5 10.100.0.12'], port_security=['fa:16:3e:86:2a:e5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e9012993-27a3-4599-ba2e-d9f3ecf2551e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f4019d294054f68b35b8f860129d22b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cce6ab71-eb98-451e-8f5c-5676889e02eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=383ad1c1-534f-47b8-ad27-6921a5514a36, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=27384399-6d62-46d0-a4c1-3ef6d37998a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 00:00:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:22.853 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 27384399-6d62-46d0-a4c1-3ef6d37998a7 in datapath 3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45 bound to our chassis#033[00m
Jan 31 00:00:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:22.855 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45#033[00m
Jan 31 00:00:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:22.866 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[5fbb1e23-b9c2-4cc3-be1f-aafbfb508689]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:22.867 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3b2084dc-b1 in ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 00:00:22 np0005603435 systemd-machined[208030]: New machine qemu-26-instance-0000001a.
Jan 31 00:00:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:22.869 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3b2084dc-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 00:00:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:22.870 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[78ffa210-ff68-4d81-af49-d7f9074a587d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:22.872 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[785c78e1-2fbb-496f-8fe5-03db691a5689]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:22 np0005603435 systemd-udevd[271515]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 00:00:22 np0005603435 systemd[1]: Started Virtual Machine qemu-26-instance-0000001a.
Jan 31 00:00:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:22.887 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[7bf7ac12-7330-42fc-b665-1240a876579c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:22 np0005603435 nova_compute[239938]: 2026-01-31 05:00:22.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:00:22 np0005603435 nova_compute[239938]: 2026-01-31 05:00:22.890 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:22 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:22Z|00252|binding|INFO|Setting lport 27384399-6d62-46d0-a4c1-3ef6d37998a7 ovn-installed in OVS
Jan 31 00:00:22 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:22Z|00253|binding|INFO|Setting lport 27384399-6d62-46d0-a4c1-3ef6d37998a7 up in Southbound
Jan 31 00:00:22 np0005603435 nova_compute[239938]: 2026-01-31 05:00:22.896 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:22 np0005603435 NetworkManager[49097]: <info>  [1769835622.9035] device (tap27384399-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 00:00:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:22.899 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[074d14c4-3929-4ffe-964c-7c213679472a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:22 np0005603435 NetworkManager[49097]: <info>  [1769835622.9053] device (tap27384399-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 00:00:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:22.932 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[251f9020-736f-4caf-8990-eb6f77788ac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:22 np0005603435 NetworkManager[49097]: <info>  [1769835622.9409] manager: (tap3b2084dc-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/127)
Jan 31 00:00:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:22.940 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[8257100b-7e4b-44a2-89b8-e151195d9949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:22.976 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[fd534a5b-cdcf-4917-ab4e-63563a8890e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:22 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:22.979 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[7f9d3183-86c5-41b7-855e-929a3519a822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:23 np0005603435 NetworkManager[49097]: <info>  [1769835623.0016] device (tap3b2084dc-b0): carrier: link connected
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:23.006 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[e758add3-ebd0-4e40-9714-27ba1364dc85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:23.024 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[99fee7ce-ba3c-452a-af57-7a9142ad0ea8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b2084dc-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:7d:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467740, 'reachable_time': 41248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271547, 'error': None, 'target': 'ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:23.036 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[710251f2-1713-48e5-9fb3-95f1d8ec8802]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:7dfe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467740, 'tstamp': 467740}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271548, 'error': None, 'target': 'ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:23.049 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[1aca78e9-18f7-4f3a-882c-9fe91cb7015f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b2084dc-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:7d:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467740, 'reachable_time': 41248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271549, 'error': None, 'target': 'ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:23.074 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[7b72e129-8f9e-4675-8d65-44477557b190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:23.131 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[4c38eeb1-50b4-401e-99ed-22e9193c3952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:23.133 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b2084dc-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:23.134 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:23.135 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b2084dc-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:00:23 np0005603435 NetworkManager[49097]: <info>  [1769835623.1388] manager: (tap3b2084dc-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Jan 31 00:00:23 np0005603435 kernel: tap3b2084dc-b0: entered promiscuous mode
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.138 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:23.147 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3b2084dc-b0, col_values=(('external_ids', {'iface-id': '95aad675-1c3d-4885-9241-c9606839dca2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:00:23 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:23Z|00254|binding|INFO|Releasing lport 95aad675-1c3d-4885-9241-c9606839dca2 from this chassis (sb_readonly=0)
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.149 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.150 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:23.154 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.161 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:23.160 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[a036c47f-054f-4793-9100-16391331edf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:23.163 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: global
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45.pid.haproxy
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 00:00:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:23.167 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45', 'env', 'PROCESS_TAG=haproxy-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.496 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835623.4957573, e9012993-27a3-4599-ba2e-d9f3ecf2551e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.497 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] VM Started (Lifecycle Event)#033[00m
Jan 31 00:00:23 np0005603435 podman[271640]: 2026-01-31 05:00:23.519792066 +0000 UTC m=+0.048657629 container create ad3c1a0b1ca9fbfcfe342cd6c0ddc427bdd68e01afeb21b8250ae953dab0e146 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.529 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.533 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835623.495927, e9012993-27a3-4599-ba2e-d9f3ecf2551e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.533 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] VM Paused (Lifecycle Event)#033[00m
Jan 31 00:00:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 42 KiB/s rd, 2.3 MiB/s wr, 68 op/s
Jan 31 00:00:23 np0005603435 systemd[1]: Started libpod-conmon-ad3c1a0b1ca9fbfcfe342cd6c0ddc427bdd68e01afeb21b8250ae953dab0e146.scope.
Jan 31 00:00:23 np0005603435 podman[271640]: 2026-01-31 05:00:23.494358545 +0000 UTC m=+0.023224148 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.590 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 00:00:23 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.593 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 00:00:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63b80261362f8328c5fb7abbf796562e976e3c341cf9daa110a875313c364e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 00:00:23 np0005603435 podman[271640]: 2026-01-31 05:00:23.60639761 +0000 UTC m=+0.135263193 container init ad3c1a0b1ca9fbfcfe342cd6c0ddc427bdd68e01afeb21b8250ae953dab0e146 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 00:00:23 np0005603435 podman[271640]: 2026-01-31 05:00:23.611104225 +0000 UTC m=+0.139969788 container start ad3c1a0b1ca9fbfcfe342cd6c0ddc427bdd68e01afeb21b8250ae953dab0e146 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.611 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 00:00:23 np0005603435 neutron-haproxy-ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45[271656]: [NOTICE]   (271660) : New worker (271662) forked
Jan 31 00:00:23 np0005603435 neutron-haproxy-ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45[271656]: [NOTICE]   (271660) : Loading success.
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.716 239942 DEBUG nova.compute.manager [req-681058d6-4cff-41f8-8510-8e0a7342dfa6 req-3ae23b81-f03d-4c22-a931-330adf471a87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Received event network-vif-plugged-27384399-6d62-46d0-a4c1-3ef6d37998a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.716 239942 DEBUG oslo_concurrency.lockutils [req-681058d6-4cff-41f8-8510-8e0a7342dfa6 req-3ae23b81-f03d-4c22-a931-330adf471a87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.717 239942 DEBUG oslo_concurrency.lockutils [req-681058d6-4cff-41f8-8510-8e0a7342dfa6 req-3ae23b81-f03d-4c22-a931-330adf471a87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.717 239942 DEBUG oslo_concurrency.lockutils [req-681058d6-4cff-41f8-8510-8e0a7342dfa6 req-3ae23b81-f03d-4c22-a931-330adf471a87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.717 239942 DEBUG nova.compute.manager [req-681058d6-4cff-41f8-8510-8e0a7342dfa6 req-3ae23b81-f03d-4c22-a931-330adf471a87 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Processing event network-vif-plugged-27384399-6d62-46d0-a4c1-3ef6d37998a7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.718 239942 DEBUG nova.compute.manager [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.722 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835623.7221115, e9012993-27a3-4599-ba2e-d9f3ecf2551e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.722 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] VM Resumed (Lifecycle Event)#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.724 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.728 239942 INFO nova.virt.libvirt.driver [-] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Instance spawned successfully.#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.728 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.747 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.756 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.760 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.761 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.762 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.763 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.764 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.764 239942 DEBUG nova.virt.libvirt.driver [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:00:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e468 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.794 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.832 239942 INFO nova.compute.manager [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Took 6.61 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.832 239942 DEBUG nova.compute.manager [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.890 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.895 239942 INFO nova.compute.manager [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Took 9.28 seconds to build instance.#033[00m
Jan 31 00:00:23 np0005603435 nova_compute[239938]: 2026-01-31 05:00:23.920 239942 DEBUG oslo_concurrency.lockutils [None req-0fa8e76c-77d5-4623-a0be-1129ea931090 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:00:24 np0005603435 nova_compute[239938]: 2026-01-31 05:00:24.883 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:00:24 np0005603435 nova_compute[239938]: 2026-01-31 05:00:24.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:00:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 723 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 31 00:00:25 np0005603435 nova_compute[239938]: 2026-01-31 05:00:25.578 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:25 np0005603435 nova_compute[239938]: 2026-01-31 05:00:25.819 239942 DEBUG nova.compute.manager [req-adf39f28-511b-43e0-be11-ca2004941048 req-7186839c-123f-4cf7-8603-7451bb442a17 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Received event network-vif-plugged-27384399-6d62-46d0-a4c1-3ef6d37998a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:00:25 np0005603435 nova_compute[239938]: 2026-01-31 05:00:25.820 239942 DEBUG oslo_concurrency.lockutils [req-adf39f28-511b-43e0-be11-ca2004941048 req-7186839c-123f-4cf7-8603-7451bb442a17 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:25 np0005603435 nova_compute[239938]: 2026-01-31 05:00:25.821 239942 DEBUG oslo_concurrency.lockutils [req-adf39f28-511b-43e0-be11-ca2004941048 req-7186839c-123f-4cf7-8603-7451bb442a17 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:25 np0005603435 nova_compute[239938]: 2026-01-31 05:00:25.821 239942 DEBUG oslo_concurrency.lockutils [req-adf39f28-511b-43e0-be11-ca2004941048 req-7186839c-123f-4cf7-8603-7451bb442a17 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:00:25 np0005603435 nova_compute[239938]: 2026-01-31 05:00:25.822 239942 DEBUG nova.compute.manager [req-adf39f28-511b-43e0-be11-ca2004941048 req-7186839c-123f-4cf7-8603-7451bb442a17 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] No waiting events found dispatching network-vif-plugged-27384399-6d62-46d0-a4c1-3ef6d37998a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 00:00:25 np0005603435 nova_compute[239938]: 2026-01-31 05:00:25.822 239942 WARNING nova.compute.manager [req-adf39f28-511b-43e0-be11-ca2004941048 req-7186839c-123f-4cf7-8603-7451bb442a17 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Received unexpected event network-vif-plugged-27384399-6d62-46d0-a4c1-3ef6d37998a7 for instance with vm_state active and task_state None.#033[00m
Jan 31 00:00:26 np0005603435 nova_compute[239938]: 2026-01-31 05:00:26.010 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:26 np0005603435 NetworkManager[49097]: <info>  [1769835626.0140] manager: (patch-provnet-60fd0649-1231-4daa-859b-756d523d6d78-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Jan 31 00:00:26 np0005603435 NetworkManager[49097]: <info>  [1769835626.0154] manager: (patch-br-int-to-provnet-60fd0649-1231-4daa-859b-756d523d6d78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Jan 31 00:00:26 np0005603435 nova_compute[239938]: 2026-01-31 05:00:26.070 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:26 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:26Z|00255|binding|INFO|Releasing lport 95aad675-1c3d-4885-9241-c9606839dca2 from this chassis (sb_readonly=0)
Jan 31 00:00:26 np0005603435 nova_compute[239938]: 2026-01-31 05:00:26.088 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:26 np0005603435 nova_compute[239938]: 2026-01-31 05:00:26.310 239942 DEBUG nova.compute.manager [req-237ef921-b7e2-4717-a8d0-029d99c65c62 req-67b0cc3f-953b-4298-b832-27d6d2c8fdf9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Received event network-changed-27384399-6d62-46d0-a4c1-3ef6d37998a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:00:26 np0005603435 nova_compute[239938]: 2026-01-31 05:00:26.311 239942 DEBUG nova.compute.manager [req-237ef921-b7e2-4717-a8d0-029d99c65c62 req-67b0cc3f-953b-4298-b832-27d6d2c8fdf9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Refreshing instance network info cache due to event network-changed-27384399-6d62-46d0-a4c1-3ef6d37998a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 00:00:26 np0005603435 nova_compute[239938]: 2026-01-31 05:00:26.311 239942 DEBUG oslo_concurrency.lockutils [req-237ef921-b7e2-4717-a8d0-029d99c65c62 req-67b0cc3f-953b-4298-b832-27d6d2c8fdf9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-e9012993-27a3-4599-ba2e-d9f3ecf2551e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 00:00:26 np0005603435 nova_compute[239938]: 2026-01-31 05:00:26.312 239942 DEBUG oslo_concurrency.lockutils [req-237ef921-b7e2-4717-a8d0-029d99c65c62 req-67b0cc3f-953b-4298-b832-27d6d2c8fdf9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-e9012993-27a3-4599-ba2e-d9f3ecf2551e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 00:00:26 np0005603435 nova_compute[239938]: 2026-01-31 05:00:26.312 239942 DEBUG nova.network.neutron [req-237ef921-b7e2-4717-a8d0-029d99c65c62 req-67b0cc3f-953b-4298-b832-27d6d2c8fdf9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Refreshing network info cache for port 27384399-6d62-46d0-a4c1-3ef6d37998a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 00:00:26 np0005603435 nova_compute[239938]: 2026-01-31 05:00:26.447 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:26 np0005603435 nova_compute[239938]: 2026-01-31 05:00:26.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:00:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Jan 31 00:00:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e468 do_prune osdmap full prune enabled
Jan 31 00:00:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e469 e469: 3 total, 3 up, 3 in
Jan 31 00:00:27 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e469: 3 total, 3 up, 3 in
Jan 31 00:00:27 np0005603435 nova_compute[239938]: 2026-01-31 05:00:27.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:00:27 np0005603435 nova_compute[239938]: 2026-01-31 05:00:27.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 00:00:27 np0005603435 nova_compute[239938]: 2026-01-31 05:00:27.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 00:00:27 np0005603435 nova_compute[239938]: 2026-01-31 05:00:27.928 239942 DEBUG nova.network.neutron [req-237ef921-b7e2-4717-a8d0-029d99c65c62 req-67b0cc3f-953b-4298-b832-27d6d2c8fdf9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Updated VIF entry in instance network info cache for port 27384399-6d62-46d0-a4c1-3ef6d37998a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 00:00:27 np0005603435 nova_compute[239938]: 2026-01-31 05:00:27.929 239942 DEBUG nova.network.neutron [req-237ef921-b7e2-4717-a8d0-029d99c65c62 req-67b0cc3f-953b-4298-b832-27d6d2c8fdf9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Updating instance_info_cache with network_info: [{"id": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "address": "fa:16:3e:86:2a:e5", "network": {"id": "3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1581492157-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f4019d294054f68b35b8f860129d22b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27384399-6d", "ovs_interfaceid": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 00:00:27 np0005603435 nova_compute[239938]: 2026-01-31 05:00:27.977 239942 DEBUG oslo_concurrency.lockutils [req-237ef921-b7e2-4717-a8d0-029d99c65c62 req-67b0cc3f-953b-4298-b832-27d6d2c8fdf9 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-e9012993-27a3-4599-ba2e-d9f3ecf2551e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 00:00:28 np0005603435 nova_compute[239938]: 2026-01-31 05:00:28.438 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "refresh_cache-e9012993-27a3-4599-ba2e-d9f3ecf2551e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 00:00:28 np0005603435 nova_compute[239938]: 2026-01-31 05:00:28.438 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquired lock "refresh_cache-e9012993-27a3-4599-ba2e-d9f3ecf2551e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 00:00:28 np0005603435 nova_compute[239938]: 2026-01-31 05:00:28.439 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 00:00:28 np0005603435 nova_compute[239938]: 2026-01-31 05:00:28.439 239942 DEBUG nova.objects.instance [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e9012993-27a3-4599-ba2e-d9f3ecf2551e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 00:00:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e469 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:00:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:00:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1786757186' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:00:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 131 op/s
Jan 31 00:00:29 np0005603435 nova_compute[239938]: 2026-01-31 05:00:29.639 239942 DEBUG nova.network.neutron [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Updating instance_info_cache with network_info: [{"id": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "address": "fa:16:3e:86:2a:e5", "network": {"id": "3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1581492157-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f4019d294054f68b35b8f860129d22b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27384399-6d", "ovs_interfaceid": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 00:00:29 np0005603435 nova_compute[239938]: 2026-01-31 05:00:29.656 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Releasing lock "refresh_cache-e9012993-27a3-4599-ba2e-d9f3ecf2551e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 00:00:29 np0005603435 nova_compute[239938]: 2026-01-31 05:00:29.657 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 00:00:29 np0005603435 nova_compute[239938]: 2026-01-31 05:00:29.658 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:00:29 np0005603435 nova_compute[239938]: 2026-01-31 05:00:29.658 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 00:00:29 np0005603435 nova_compute[239938]: 2026-01-31 05:00:29.658 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:00:29 np0005603435 nova_compute[239938]: 2026-01-31 05:00:29.706 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:29 np0005603435 nova_compute[239938]: 2026-01-31 05:00:29.707 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:29 np0005603435 nova_compute[239938]: 2026-01-31 05:00:29.707 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:00:29 np0005603435 nova_compute[239938]: 2026-01-31 05:00:29.707 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 00:00:29 np0005603435 nova_compute[239938]: 2026-01-31 05:00:29.708 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:00:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:00:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2819641148' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:00:30 np0005603435 nova_compute[239938]: 2026-01-31 05:00:30.275 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:00:30 np0005603435 nova_compute[239938]: 2026-01-31 05:00:30.346 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 00:00:30 np0005603435 nova_compute[239938]: 2026-01-31 05:00:30.347 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 00:00:30 np0005603435 nova_compute[239938]: 2026-01-31 05:00:30.347 239942 DEBUG nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 00:00:30 np0005603435 nova_compute[239938]: 2026-01-31 05:00:30.562 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 00:00:30 np0005603435 nova_compute[239938]: 2026-01-31 05:00:30.564 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4244MB free_disk=59.96688734181225GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 00:00:30 np0005603435 nova_compute[239938]: 2026-01-31 05:00:30.565 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:30 np0005603435 nova_compute[239938]: 2026-01-31 05:00:30.565 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:30 np0005603435 nova_compute[239938]: 2026-01-31 05:00:30.617 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:30 np0005603435 nova_compute[239938]: 2026-01-31 05:00:30.667 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Instance e9012993-27a3-4599-ba2e-d9f3ecf2551e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 00:00:30 np0005603435 nova_compute[239938]: 2026-01-31 05:00:30.667 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 00:00:30 np0005603435 nova_compute[239938]: 2026-01-31 05:00:30.668 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 00:00:30 np0005603435 nova_compute[239938]: 2026-01-31 05:00:30.704 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:00:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:00:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4059806248' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:00:31 np0005603435 nova_compute[239938]: 2026-01-31 05:00:31.278 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:00:31 np0005603435 nova_compute[239938]: 2026-01-31 05:00:31.285 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 00:00:31 np0005603435 nova_compute[239938]: 2026-01-31 05:00:31.304 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 00:00:31 np0005603435 nova_compute[239938]: 2026-01-31 05:00:31.333 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 00:00:31 np0005603435 nova_compute[239938]: 2026-01-31 05:00:31.333 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:00:31 np0005603435 nova_compute[239938]: 2026-01-31 05:00:31.451 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 118 op/s
Jan 31 00:00:31 np0005603435 nova_compute[239938]: 2026-01-31 05:00:31.563 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:00:31 np0005603435 nova_compute[239938]: 2026-01-31 05:00:31.583 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/323302597' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/323302597' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:00:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 45 KiB/s wr, 127 op/s
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e469 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:00:33.784941) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835633784975, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1033, "num_deletes": 255, "total_data_size": 1266247, "memory_usage": 1291592, "flush_reason": "Manual Compaction"}
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835633794274, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 916841, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34052, "largest_seqno": 35084, "table_properties": {"data_size": 912265, "index_size": 2100, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11805, "raw_average_key_size": 21, "raw_value_size": 902546, "raw_average_value_size": 1643, "num_data_blocks": 92, "num_entries": 549, "num_filter_entries": 549, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769835572, "oldest_key_time": 1769835572, "file_creation_time": 1769835633, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 9385 microseconds, and 2677 cpu microseconds.
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:00:33.794321) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 916841 bytes OK
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:00:33.794341) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:00:33.795803) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:00:33.795819) EVENT_LOG_v1 {"time_micros": 1769835633795814, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:00:33.795836) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1261257, prev total WAL file size 1261257, number of live WAL files 2.
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:00:33.796368) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303038' seq:72057594037927935, type:22 .. '6D6772737461740031323630' seq:0, type:0; will stop at (end)
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(895KB)], [68(11MB)]
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835633796418, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 13076450, "oldest_snapshot_seqno": -1}
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6731 keys, 10051520 bytes, temperature: kUnknown
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835633884476, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 10051520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10001807, "index_size": 31809, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 168719, "raw_average_key_size": 25, "raw_value_size": 9876190, "raw_average_value_size": 1467, "num_data_blocks": 1275, "num_entries": 6731, "num_filter_entries": 6731, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769835633, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:00:33.884801) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 10051520 bytes
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:00:33.886257) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.3 rd, 114.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.6 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(25.2) write-amplify(11.0) OK, records in: 7232, records dropped: 501 output_compression: NoCompression
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:00:33.886284) EVENT_LOG_v1 {"time_micros": 1769835633886271, "job": 38, "event": "compaction_finished", "compaction_time_micros": 88195, "compaction_time_cpu_micros": 33206, "output_level": 6, "num_output_files": 1, "total_output_size": 10051520, "num_input_records": 7232, "num_output_records": 6731, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835633886563, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835633887564, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:00:33.796244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:00:33.887684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:00:33.887690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:00:33.887692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:00:33.887694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:00:33 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:00:33.887697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:00:34 np0005603435 nova_compute[239938]: 2026-01-31 05:00:34.766 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "79e4d808-e888-48d3-8b42-c6e0d9350d37" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:34 np0005603435 nova_compute[239938]: 2026-01-31 05:00:34.767 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:34 np0005603435 nova_compute[239938]: 2026-01-31 05:00:34.784 239942 DEBUG nova.compute.manager [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 00:00:34 np0005603435 nova_compute[239938]: 2026-01-31 05:00:34.844 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:34 np0005603435 nova_compute[239938]: 2026-01-31 05:00:34.845 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:34 np0005603435 nova_compute[239938]: 2026-01-31 05:00:34.855 239942 DEBUG nova.virt.hardware [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 00:00:34 np0005603435 nova_compute[239938]: 2026-01-31 05:00:34.855 239942 INFO nova.compute.claims [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 00:00:34 np0005603435 nova_compute[239938]: 2026-01-31 05:00:34.970 239942 DEBUG oslo_concurrency.processutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:00:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.5 MiB/s rd, 565 KiB/s wr, 99 op/s
Jan 31 00:00:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:00:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3568791716' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:00:35 np0005603435 nova_compute[239938]: 2026-01-31 05:00:35.587 239942 DEBUG oslo_concurrency.processutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:00:35 np0005603435 nova_compute[239938]: 2026-01-31 05:00:35.592 239942 DEBUG nova.compute.provider_tree [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 00:00:35 np0005603435 nova_compute[239938]: 2026-01-31 05:00:35.608 239942 DEBUG nova.scheduler.client.report [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 00:00:35 np0005603435 nova_compute[239938]: 2026-01-31 05:00:35.618 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:35 np0005603435 nova_compute[239938]: 2026-01-31 05:00:35.737 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:00:35 np0005603435 nova_compute[239938]: 2026-01-31 05:00:35.738 239942 DEBUG nova.compute.manager [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 00:00:35 np0005603435 nova_compute[239938]: 2026-01-31 05:00:35.795 239942 DEBUG nova.compute.manager [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 00:00:35 np0005603435 nova_compute[239938]: 2026-01-31 05:00:35.796 239942 DEBUG nova.network.neutron [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 00:00:35 np0005603435 nova_compute[239938]: 2026-01-31 05:00:35.823 239942 INFO nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 00:00:35 np0005603435 nova_compute[239938]: 2026-01-31 05:00:35.843 239942 DEBUG nova.compute.manager [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 00:00:35 np0005603435 nova_compute[239938]: 2026-01-31 05:00:35.974 239942 DEBUG nova.compute.manager [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 00:00:35 np0005603435 nova_compute[239938]: 2026-01-31 05:00:35.975 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 00:00:35 np0005603435 nova_compute[239938]: 2026-01-31 05:00:35.975 239942 INFO nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Creating image(s)#033[00m
Jan 31 00:00:35 np0005603435 nova_compute[239938]: 2026-01-31 05:00:35.991 239942 DEBUG nova.storage.rbd_utils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] rbd image 79e4d808-e888-48d3-8b42-c6e0d9350d37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.009 239942 DEBUG nova.storage.rbd_utils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] rbd image 79e4d808-e888-48d3-8b42-c6e0d9350d37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.027 239942 DEBUG nova.storage.rbd_utils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] rbd image 79e4d808-e888-48d3-8b42-c6e0d9350d37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.029 239942 DEBUG oslo_concurrency.processutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.042 239942 DEBUG nova.policy [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6784d92c92b24526a302a1a74a813c76', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48935f8745744c4ba5400c13f80e0379', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.077 239942 DEBUG oslo_concurrency.processutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.077 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.078 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.078 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "0ecaba2dd71d25ad0ace076a5082d46b255107c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.099 239942 DEBUG nova.storage.rbd_utils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] rbd image 79e4d808-e888-48d3-8b42-c6e0d9350d37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.104 239942 DEBUG oslo_concurrency.processutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 79e4d808-e888-48d3-8b42-c6e0d9350d37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.429 239942 DEBUG oslo_concurrency.processutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0ecaba2dd71d25ad0ace076a5082d46b255107c4 79e4d808-e888-48d3-8b42-c6e0d9350d37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.471 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.519 239942 DEBUG nova.storage.rbd_utils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] resizing rbd image 79e4d808-e888-48d3-8b42-c6e0d9350d37_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.605 239942 DEBUG nova.objects.instance [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lazy-loading 'migration_context' on Instance uuid 79e4d808-e888-48d3-8b42-c6e0d9350d37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.628 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.629 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Ensure instance console log exists: /var/lib/nova/instances/79e4d808-e888-48d3-8b42-c6e0d9350d37/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.630 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.630 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.631 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:00:36 np0005603435 nova_compute[239938]: 2026-01-31 05:00:36.713 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:36.713 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 00:00:36 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:36.717 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 00:00:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:00:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:00:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:00:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:00:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:00:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:00:37 np0005603435 podman[271905]: 2026-01-31 05:00:37.132017964 +0000 UTC m=+0.096716171 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 00:00:37 np0005603435 podman[271906]: 2026-01-31 05:00:37.140829545 +0000 UTC m=+0.105442100 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 00:00:37 np0005603435 nova_compute[239938]: 2026-01-31 05:00:37.451 239942 DEBUG nova.network.neutron [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Successfully created port: 09b730e5-cc74-4a8e-894c-91cd51072e1f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 00:00:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 418 KiB/s rd, 3.3 MiB/s wr, 132 op/s
Jan 31 00:00:37 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:37Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:86:2a:e5 10.100.0.12
Jan 31 00:00:37 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:37Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:2a:e5 10.100.0.12
Jan 31 00:00:38 np0005603435 nova_compute[239938]: 2026-01-31 05:00:38.705 239942 DEBUG nova.network.neutron [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Successfully updated port: 09b730e5-cc74-4a8e-894c-91cd51072e1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 00:00:38 np0005603435 nova_compute[239938]: 2026-01-31 05:00:38.736 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "refresh_cache-79e4d808-e888-48d3-8b42-c6e0d9350d37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 00:00:38 np0005603435 nova_compute[239938]: 2026-01-31 05:00:38.736 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquired lock "refresh_cache-79e4d808-e888-48d3-8b42-c6e0d9350d37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 00:00:38 np0005603435 nova_compute[239938]: 2026-01-31 05:00:38.737 239942 DEBUG nova.network.neutron [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 00:00:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e469 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:00:38 np0005603435 nova_compute[239938]: 2026-01-31 05:00:38.787 239942 DEBUG nova.compute.manager [req-197e4cb1-3e53-4da6-9fb9-dbd84d5d4639 req-a15e5ff3-626f-4aec-bbdf-b3f436de5a34 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Received event network-changed-09b730e5-cc74-4a8e-894c-91cd51072e1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:00:38 np0005603435 nova_compute[239938]: 2026-01-31 05:00:38.788 239942 DEBUG nova.compute.manager [req-197e4cb1-3e53-4da6-9fb9-dbd84d5d4639 req-a15e5ff3-626f-4aec-bbdf-b3f436de5a34 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Refreshing instance network info cache due to event network-changed-09b730e5-cc74-4a8e-894c-91cd51072e1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 00:00:38 np0005603435 nova_compute[239938]: 2026-01-31 05:00:38.789 239942 DEBUG oslo_concurrency.lockutils [req-197e4cb1-3e53-4da6-9fb9-dbd84d5d4639 req-a15e5ff3-626f-4aec-bbdf-b3f436de5a34 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-79e4d808-e888-48d3-8b42-c6e0d9350d37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 00:00:38 np0005603435 nova_compute[239938]: 2026-01-31 05:00:38.923 239942 DEBUG nova.network.neutron [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 00:00:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 353 KiB/s rd, 2.8 MiB/s wr, 111 op/s
Jan 31 00:00:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:39.720 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.004 239942 DEBUG nova.network.neutron [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Updating instance_info_cache with network_info: [{"id": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "address": "fa:16:3e:85:f4:2b", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09b730e5-cc", "ovs_interfaceid": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.037 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Releasing lock "refresh_cache-79e4d808-e888-48d3-8b42-c6e0d9350d37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.038 239942 DEBUG nova.compute.manager [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Instance network_info: |[{"id": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "address": "fa:16:3e:85:f4:2b", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09b730e5-cc", "ovs_interfaceid": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.038 239942 DEBUG oslo_concurrency.lockutils [req-197e4cb1-3e53-4da6-9fb9-dbd84d5d4639 req-a15e5ff3-626f-4aec-bbdf-b3f436de5a34 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-79e4d808-e888-48d3-8b42-c6e0d9350d37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.039 239942 DEBUG nova.network.neutron [req-197e4cb1-3e53-4da6-9fb9-dbd84d5d4639 req-a15e5ff3-626f-4aec-bbdf-b3f436de5a34 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Refreshing network info cache for port 09b730e5-cc74-4a8e-894c-91cd51072e1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.043 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Start _get_guest_xml network_info=[{"id": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "address": "fa:16:3e:85:f4:2b", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09b730e5-cc", "ovs_interfaceid": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'guest_format': None, 'image_id': 'bf004ad8-fb70-4caa-9170-9f02e22d687d'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.051 239942 WARNING nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.056 239942 DEBUG nova.virt.libvirt.host [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.057 239942 DEBUG nova.virt.libvirt.host [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.063 239942 DEBUG nova.virt.libvirt.host [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.063 239942 DEBUG nova.virt.libvirt.host [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.064 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.064 239942 DEBUG nova.virt.hardware [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T04:42:03Z,direct_url=<?>,disk_format='qcow2',id=bf004ad8-fb70-4caa-9170-9f02e22d687d,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f21373613e7f47d4b3c503ffba1fa3a6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T04:42:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.065 239942 DEBUG nova.virt.hardware [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.066 239942 DEBUG nova.virt.hardware [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.066 239942 DEBUG nova.virt.hardware [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.067 239942 DEBUG nova.virt.hardware [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.067 239942 DEBUG nova.virt.hardware [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.067 239942 DEBUG nova.virt.hardware [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.068 239942 DEBUG nova.virt.hardware [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.068 239942 DEBUG nova.virt.hardware [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.069 239942 DEBUG nova.virt.hardware [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.069 239942 DEBUG nova.virt.hardware [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.073 239942 DEBUG oslo_concurrency.processutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:00:40 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:00:40 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2728007047' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.658 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.671 239942 DEBUG oslo_concurrency.processutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.692 239942 DEBUG nova.storage.rbd_utils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] rbd image 79e4d808-e888-48d3-8b42-c6e0d9350d37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 00:00:40 np0005603435 nova_compute[239938]: 2026-01-31 05:00:40.696 239942 DEBUG oslo_concurrency.processutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:00:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:00:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2611153402' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.277 239942 DEBUG oslo_concurrency.processutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.280 239942 DEBUG nova.virt.libvirt.vif [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T05:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-2028458698',display_name='tempest-TestEncryptedCinderVolumes-server-2028458698',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-2028458698',id=27,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ4WqnO4wVm4Dct+29WZNqsQJvDZ7+oMnvkdZxGKMg53aAhI8Wpy9rzJCw1uDdLfABmpfltRhDa933aDbvtyuE/HbkfaGwe1QUgyVtWz6jiDO3dH5hSEqs/4G0+tuU1raw==',key_name='tempest-keypair-1940583653',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48935f8745744c4ba5400c13f80e0379',ramdisk_id='',reservation_id='r-kffugcm4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1466370108',owner_user_name='tempest-TestEncryptedCinderVolumes-1466370108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T05:00:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6784d92c92b24526a302a1a74a813c76',uuid=79e4d808-e888-48d3-8b42-c6e0d9350d37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "address": "fa:16:3e:85:f4:2b", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09b730e5-cc", "ovs_interfaceid": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.280 239942 DEBUG nova.network.os_vif_util [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converting VIF {"id": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "address": "fa:16:3e:85:f4:2b", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09b730e5-cc", "ovs_interfaceid": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.282 239942 DEBUG nova.network.os_vif_util [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:f4:2b,bridge_name='br-int',has_traffic_filtering=True,id=09b730e5-cc74-4a8e-894c-91cd51072e1f,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09b730e5-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.284 239942 DEBUG nova.objects.instance [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lazy-loading 'pci_devices' on Instance uuid 79e4d808-e888-48d3-8b42-c6e0d9350d37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.328 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] End _get_guest_xml xml=<domain type="kvm">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  <uuid>79e4d808-e888-48d3-8b42-c6e0d9350d37</uuid>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  <name>instance-0000001b</name>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  <metadata>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-2028458698</nova:name>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 05:00:40</nova:creationTime>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:        <nova:user uuid="6784d92c92b24526a302a1a74a813c76">tempest-TestEncryptedCinderVolumes-1466370108-project-member</nova:user>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:        <nova:project uuid="48935f8745744c4ba5400c13f80e0379">tempest-TestEncryptedCinderVolumes-1466370108</nova:project>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <nova:root type="image" uuid="bf004ad8-fb70-4caa-9170-9f02e22d687d"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:        <nova:port uuid="09b730e5-cc74-4a8e-894c-91cd51072e1f">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:        </nova:port>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  </metadata>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <system>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <entry name="serial">79e4d808-e888-48d3-8b42-c6e0d9350d37</entry>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <entry name="uuid">79e4d808-e888-48d3-8b42-c6e0d9350d37</entry>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    </system>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  <os>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  </os>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  <features>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <acpi/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <apic/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  </features>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  </clock>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  </cpu>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  <devices>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/79e4d808-e888-48d3-8b42-c6e0d9350d37_disk">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      </source>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      </auth>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    </disk>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/79e4d808-e888-48d3-8b42-c6e0d9350d37_disk.config">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      </source>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      </auth>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    </disk>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:85:f4:2b"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <target dev="tap09b730e5-cc"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    </interface>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/79e4d808-e888-48d3-8b42-c6e0d9350d37/console.log" append="off"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    </serial>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <video>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    </video>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    </rng>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 31 00:00:41 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:    </memballoon>
Jan 31 00:00:41 np0005603435 nova_compute[239938]:  </devices>
Jan 31 00:00:41 np0005603435 nova_compute[239938]: </domain>
Jan 31 00:00:41 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.329 239942 DEBUG nova.compute.manager [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Preparing to wait for external event network-vif-plugged-09b730e5-cc74-4a8e-894c-91cd51072e1f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.330 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.331 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.331 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.332 239942 DEBUG nova.virt.libvirt.vif [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T05:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-2028458698',display_name='tempest-TestEncryptedCinderVolumes-server-2028458698',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-2028458698',id=27,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ4WqnO4wVm4Dct+29WZNqsQJvDZ7+oMnvkdZxGKMg53aAhI8Wpy9rzJCw1uDdLfABmpfltRhDa933aDbvtyuE/HbkfaGwe1QUgyVtWz6jiDO3dH5hSEqs/4G0+tuU1raw==',key_name='tempest-keypair-1940583653',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48935f8745744c4ba5400c13f80e0379',ramdisk_id='',reservation_id='r-kffugcm4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1466370108',owner_user_name='tempest-TestEncryptedCinderVolumes-1466370108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T05:00:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6784d92c92b24526a302a1a74a813c76',uuid=79e4d808-e888-48d3-8b42-c6e0d9350d37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "address": "fa:16:3e:85:f4:2b", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09b730e5-cc", "ovs_interfaceid": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.332 239942 DEBUG nova.network.os_vif_util [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converting VIF {"id": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "address": "fa:16:3e:85:f4:2b", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09b730e5-cc", "ovs_interfaceid": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.333 239942 DEBUG nova.network.os_vif_util [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:f4:2b,bridge_name='br-int',has_traffic_filtering=True,id=09b730e5-cc74-4a8e-894c-91cd51072e1f,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09b730e5-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.333 239942 DEBUG os_vif [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:f4:2b,bridge_name='br-int',has_traffic_filtering=True,id=09b730e5-cc74-4a8e-894c-91cd51072e1f,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09b730e5-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.334 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.335 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.335 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.339 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.339 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09b730e5-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.340 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09b730e5-cc, col_values=(('external_ids', {'iface-id': '09b730e5-cc74-4a8e-894c-91cd51072e1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:f4:2b', 'vm-uuid': '79e4d808-e888-48d3-8b42-c6e0d9350d37'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.341 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:41 np0005603435 NetworkManager[49097]: <info>  [1769835641.3425] manager: (tap09b730e5-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.345 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.349 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.350 239942 INFO os_vif [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:f4:2b,bridge_name='br-int',has_traffic_filtering=True,id=09b730e5-cc74-4a8e-894c-91cd51072e1f,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09b730e5-cc')#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.456 239942 DEBUG nova.network.neutron [req-197e4cb1-3e53-4da6-9fb9-dbd84d5d4639 req-a15e5ff3-626f-4aec-bbdf-b3f436de5a34 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Updated VIF entry in instance network info cache for port 09b730e5-cc74-4a8e-894c-91cd51072e1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.457 239942 DEBUG nova.network.neutron [req-197e4cb1-3e53-4da6-9fb9-dbd84d5d4639 req-a15e5ff3-626f-4aec-bbdf-b3f436de5a34 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Updating instance_info_cache with network_info: [{"id": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "address": "fa:16:3e:85:f4:2b", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09b730e5-cc", "ovs_interfaceid": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.478 239942 DEBUG oslo_concurrency.lockutils [req-197e4cb1-3e53-4da6-9fb9-dbd84d5d4639 req-a15e5ff3-626f-4aec-bbdf-b3f436de5a34 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-79e4d808-e888-48d3-8b42-c6e0d9350d37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.503 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.504 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.504 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] No VIF found with MAC fa:16:3e:85:f4:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.505 239942 INFO nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Using config drive#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.533 239942 DEBUG nova.storage.rbd_utils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] rbd image 79e4d808-e888-48d3-8b42-c6e0d9350d37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 00:00:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 511 KiB/s rd, 3.9 MiB/s wr, 140 op/s
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.858 239942 INFO nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Creating config drive at /var/lib/nova/instances/79e4d808-e888-48d3-8b42-c6e0d9350d37/disk.config#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.866 239942 DEBUG oslo_concurrency.processutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/79e4d808-e888-48d3-8b42-c6e0d9350d37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdjf984xd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:00:41 np0005603435 nova_compute[239938]: 2026-01-31 05:00:41.997 239942 DEBUG oslo_concurrency.processutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/79e4d808-e888-48d3-8b42-c6e0d9350d37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdjf984xd" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.033 239942 DEBUG nova.storage.rbd_utils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] rbd image 79e4d808-e888-48d3-8b42-c6e0d9350d37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.038 239942 DEBUG oslo_concurrency.processutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/79e4d808-e888-48d3-8b42-c6e0d9350d37/disk.config 79e4d808-e888-48d3-8b42-c6e0d9350d37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.157 239942 DEBUG oslo_concurrency.processutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/79e4d808-e888-48d3-8b42-c6e0d9350d37/disk.config 79e4d808-e888-48d3-8b42-c6e0d9350d37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.158 239942 INFO nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Deleting local config drive /var/lib/nova/instances/79e4d808-e888-48d3-8b42-c6e0d9350d37/disk.config because it was imported into RBD.#033[00m
Jan 31 00:00:42 np0005603435 kernel: tap09b730e5-cc: entered promiscuous mode
Jan 31 00:00:42 np0005603435 NetworkManager[49097]: <info>  [1769835642.2087] manager: (tap09b730e5-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.212 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:42 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:42Z|00256|binding|INFO|Claiming lport 09b730e5-cc74-4a8e-894c-91cd51072e1f for this chassis.
Jan 31 00:00:42 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:42Z|00257|binding|INFO|09b730e5-cc74-4a8e-894c-91cd51072e1f: Claiming fa:16:3e:85:f4:2b 10.100.0.9
Jan 31 00:00:42 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:42Z|00258|binding|INFO|Setting lport 09b730e5-cc74-4a8e-894c-91cd51072e1f ovn-installed in OVS
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.229 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.233 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:42 np0005603435 systemd-machined[208030]: New machine qemu-27-instance-0000001b.
Jan 31 00:00:42 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:42Z|00259|binding|INFO|Setting lport 09b730e5-cc74-4a8e-894c-91cd51072e1f up in Southbound
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.255 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:f4:2b 10.100.0.9'], port_security=['fa:16:3e:85:f4:2b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '79e4d808-e888-48d3-8b42-c6e0d9350d37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f25b83f-b794-417e-88e7-d89c680f473d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48935f8745744c4ba5400c13f80e0379', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bff456ce-01a2-4b10-8073-b174ddc2a585', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94c57d33-0e3a-4b86-87cd-ae1ca9bb064d, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=09b730e5-cc74-4a8e-894c-91cd51072e1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.258 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 09b730e5-cc74-4a8e-894c-91cd51072e1f in datapath 2f25b83f-b794-417e-88e7-d89c680f473d bound to our chassis#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.261 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f25b83f-b794-417e-88e7-d89c680f473d#033[00m
Jan 31 00:00:42 np0005603435 systemd[1]: Started Virtual Machine qemu-27-instance-0000001b.
Jan 31 00:00:42 np0005603435 systemd-udevd[272085]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.272 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6ff1aa-791e-4a8a-8cb7-89e92d502930]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.273 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2f25b83f-b1 in ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.275 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2f25b83f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.276 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5b709a-9632-4a1f-a7f0-c3afac125e8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.277 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5903f0-a1c2-4adf-94e2-e2a62662a9e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:42 np0005603435 NetworkManager[49097]: <info>  [1769835642.2882] device (tap09b730e5-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 00:00:42 np0005603435 NetworkManager[49097]: <info>  [1769835642.2895] device (tap09b730e5-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.290 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[95a36e59-9319-4052-8bc6-d33265749296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.301 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[eff1b1e0-a0f2-46c4-b92b-ad1993873b13]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.322 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb20a08-8200-48d2-8a25-fcbff2dc36a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:42 np0005603435 NetworkManager[49097]: <info>  [1769835642.3321] manager: (tap2f25b83f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/133)
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.331 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb92cf0-d607-4422-9004-0bcd1b0d0987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:42 np0005603435 systemd-udevd[272088]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.366 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[4c496c2a-acb5-4c09-a8dc-d756f6e5a5d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.370 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[bad55130-c694-4832-9d6f-0506c91f7e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:42 np0005603435 NetworkManager[49097]: <info>  [1769835642.3856] device (tap2f25b83f-b0): carrier: link connected
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.389 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[258873d0-4f33-47e8-a69f-d7b9efaf5c9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.401 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2be28df5-6f61-4abc-9dfb-688f0991aa06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f25b83f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:19:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469678, 'reachable_time': 44716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272119, 'error': None, 'target': 'ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.412 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee2072e-f556-4f07-b735-ef0f084a1520]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:1905'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469678, 'tstamp': 469678}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272120, 'error': None, 'target': 'ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.427 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[ad6e73a8-0877-41f6-a749-bde38a494cae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f25b83f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:19:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469678, 'reachable_time': 44716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272121, 'error': None, 'target': 'ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.450 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[0b857312-757b-4c0f-9237-ecae7b9c0e88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.487 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[04d55997-35b6-415c-a2c5-ebd688deb434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.488 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f25b83f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.488 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.489 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f25b83f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:00:42 np0005603435 kernel: tap2f25b83f-b0: entered promiscuous mode
Jan 31 00:00:42 np0005603435 NetworkManager[49097]: <info>  [1769835642.4920] manager: (tap2f25b83f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.491 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.495 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f25b83f-b0, col_values=(('external_ids', {'iface-id': '9bf21700-cf87-40d9-96a1-5af6970f25f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:00:42 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:42Z|00260|binding|INFO|Releasing lport 9bf21700-cf87-40d9-96a1-5af6970f25f7 from this chassis (sb_readonly=0)
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.497 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2f25b83f-b794-417e-88e7-d89c680f473d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2f25b83f-b794-417e-88e7-d89c680f473d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.498 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[1b873df8-f69c-4435-a9e9-4b2354ce3bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.498 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: global
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-2f25b83f-b794-417e-88e7-d89c680f473d
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/2f25b83f-b794-417e-88e7-d89c680f473d.pid.haproxy
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 2f25b83f-b794-417e-88e7-d89c680f473d
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
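The haproxy config dumped above binds 169.254.169.254:80 inside the `ovnmeta-` namespace, proxies requests to the Neutron metadata unix socket, and stamps each request with an `X-OVN-Network-ID` header so the agent can identify the network. A hedged sketch of the request a guest would make through that listener (the URL path is the conventional OpenStack metadata endpoint, not something shown in this log, and it only resolves from inside the guest network):

```python
import urllib.request

# Assumed endpoint path; the 169.254.169.254:80 bind comes from the
# haproxy 'listen' stanza above. The proxy forwards this over
# /var/lib/neutron/metadata_proxy and adds X-OVN-Network-ID itself.
req = urllib.request.Request(
    "http://169.254.169.254/openstack/latest/meta_data.json"
)
# urllib.request.urlopen(req) would succeed only from within the guest.
print(req.full_url)
```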
Jan 31 00:00:42 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:42.499 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d', 'env', 'PROCESS_TAG=haproxy-2f25b83f-b794-417e-88e7-d89c680f473d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2f25b83f-b794-417e-88e7-d89c680f473d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.506 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.608 239942 DEBUG nova.compute.manager [req-ae134a13-2f8b-429b-8089-60788cf8a9cc req-44a9526b-51e5-42cb-9940-dd219ea48722 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Received event network-vif-plugged-09b730e5-cc74-4a8e-894c-91cd51072e1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.609 239942 DEBUG oslo_concurrency.lockutils [req-ae134a13-2f8b-429b-8089-60788cf8a9cc req-44a9526b-51e5-42cb-9940-dd219ea48722 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.609 239942 DEBUG oslo_concurrency.lockutils [req-ae134a13-2f8b-429b-8089-60788cf8a9cc req-44a9526b-51e5-42cb-9940-dd219ea48722 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.609 239942 DEBUG oslo_concurrency.lockutils [req-ae134a13-2f8b-429b-8089-60788cf8a9cc req-44a9526b-51e5-42cb-9940-dd219ea48722 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.610 239942 DEBUG nova.compute.manager [req-ae134a13-2f8b-429b-8089-60788cf8a9cc req-44a9526b-51e5-42cb-9940-dd219ea48722 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Processing event network-vif-plugged-09b730e5-cc74-4a8e-894c-91cd51072e1f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.882 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835642.8821583, 79e4d808-e888-48d3-8b42-c6e0d9350d37 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.883 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] VM Started (Lifecycle Event)#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.887 239942 DEBUG nova.compute.manager [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.892 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 00:00:42 np0005603435 podman[272194]: 2026-01-31 05:00:42.896080662 +0000 UTC m=+0.073967736 container create 8f54ad1029be14ad2b736d16ceb2ba31d7233e41059ddec45af25fd8f0d1b870 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.896 239942 INFO nova.virt.libvirt.driver [-] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Instance spawned successfully.#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.897 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.933 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 00:00:42 np0005603435 systemd[1]: Started libpod-conmon-8f54ad1029be14ad2b736d16ceb2ba31d7233e41059ddec45af25fd8f0d1b870.scope.
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.943 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.948 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.949 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.950 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.950 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:00:42 np0005603435 podman[272194]: 2026-01-31 05:00:42.85809051 +0000 UTC m=+0.035977584 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.951 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.952 239942 DEBUG nova.virt.libvirt.driver [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:00:42 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.963 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.964 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835642.8824458, 79e4d808-e888-48d3-8b42-c6e0d9350d37 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.964 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] VM Paused (Lifecycle Event)#033[00m
Jan 31 00:00:42 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338b9f0aa9827daff8c59f62f4a36b319766225c3e9bb7db91a4b4eb4eceaee7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 00:00:42 np0005603435 podman[272194]: 2026-01-31 05:00:42.97936123 +0000 UTC m=+0.157248314 container init 8f54ad1029be14ad2b736d16ceb2ba31d7233e41059ddec45af25fd8f0d1b870 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 00:00:42 np0005603435 podman[272194]: 2026-01-31 05:00:42.984254217 +0000 UTC m=+0.162141291 container start 8f54ad1029be14ad2b736d16ceb2ba31d7233e41059ddec45af25fd8f0d1b870 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.986 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.990 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835642.8906016, 79e4d808-e888-48d3-8b42-c6e0d9350d37 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 00:00:42 np0005603435 nova_compute[239938]: 2026-01-31 05:00:42.990 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] VM Resumed (Lifecycle Event)#033[00m
Jan 31 00:00:43 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[272211]: [NOTICE]   (272215) : New worker (272217) forked
Jan 31 00:00:43 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[272211]: [NOTICE]   (272215) : Loading success.
Jan 31 00:00:43 np0005603435 nova_compute[239938]: 2026-01-31 05:00:43.007 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 00:00:43 np0005603435 nova_compute[239938]: 2026-01-31 05:00:43.010 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 00:00:43 np0005603435 nova_compute[239938]: 2026-01-31 05:00:43.014 239942 INFO nova.compute.manager [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Took 7.04 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 00:00:43 np0005603435 nova_compute[239938]: 2026-01-31 05:00:43.014 239942 DEBUG nova.compute.manager [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 00:00:43 np0005603435 nova_compute[239938]: 2026-01-31 05:00:43.050 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 00:00:43 np0005603435 nova_compute[239938]: 2026-01-31 05:00:43.097 239942 INFO nova.compute.manager [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Took 8.27 seconds to build instance.#033[00m
Jan 31 00:00:43 np0005603435 nova_compute[239938]: 2026-01-31 05:00:43.116 239942 DEBUG oslo_concurrency.lockutils [None req-23850b86-15d6-450a-958c-4f0d058d4f58 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:00:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 516 KiB/s rd, 3.9 MiB/s wr, 141 op/s
Jan 31 00:00:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e469 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:00:44 np0005603435 nova_compute[239938]: 2026-01-31 05:00:44.695 239942 DEBUG nova.compute.manager [req-0ac09f22-56e3-4f27-943a-0895ec243a3c req-057821e1-2afc-486d-a544-16ffa6cb78f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Received event network-vif-plugged-09b730e5-cc74-4a8e-894c-91cd51072e1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:00:44 np0005603435 nova_compute[239938]: 2026-01-31 05:00:44.696 239942 DEBUG oslo_concurrency.lockutils [req-0ac09f22-56e3-4f27-943a-0895ec243a3c req-057821e1-2afc-486d-a544-16ffa6cb78f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:44 np0005603435 nova_compute[239938]: 2026-01-31 05:00:44.697 239942 DEBUG oslo_concurrency.lockutils [req-0ac09f22-56e3-4f27-943a-0895ec243a3c req-057821e1-2afc-486d-a544-16ffa6cb78f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:44 np0005603435 nova_compute[239938]: 2026-01-31 05:00:44.697 239942 DEBUG oslo_concurrency.lockutils [req-0ac09f22-56e3-4f27-943a-0895ec243a3c req-057821e1-2afc-486d-a544-16ffa6cb78f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:00:44 np0005603435 nova_compute[239938]: 2026-01-31 05:00:44.697 239942 DEBUG nova.compute.manager [req-0ac09f22-56e3-4f27-943a-0895ec243a3c req-057821e1-2afc-486d-a544-16ffa6cb78f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] No waiting events found dispatching network-vif-plugged-09b730e5-cc74-4a8e-894c-91cd51072e1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 00:00:44 np0005603435 nova_compute[239938]: 2026-01-31 05:00:44.697 239942 WARNING nova.compute.manager [req-0ac09f22-56e3-4f27-943a-0895ec243a3c req-057821e1-2afc-486d-a544-16ffa6cb78f7 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Received unexpected event network-vif-plugged-09b730e5-cc74-4a8e-894c-91cd51072e1f for instance with vm_state active and task_state None.#033[00m
Jan 31 00:00:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Jan 31 00:00:45 np0005603435 nova_compute[239938]: 2026-01-31 05:00:45.661 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:46 np0005603435 nova_compute[239938]: 2026-01-31 05:00:46.343 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:46 np0005603435 nova_compute[239938]: 2026-01-31 05:00:46.815 239942 DEBUG nova.compute.manager [req-c9923b72-6893-4880-8993-dcaec2ffc7fd req-3926d625-78f0-492f-adcd-d85bdbf67ede c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Received event network-changed-09b730e5-cc74-4a8e-894c-91cd51072e1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:00:46 np0005603435 nova_compute[239938]: 2026-01-31 05:00:46.816 239942 DEBUG nova.compute.manager [req-c9923b72-6893-4880-8993-dcaec2ffc7fd req-3926d625-78f0-492f-adcd-d85bdbf67ede c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Refreshing instance network info cache due to event network-changed-09b730e5-cc74-4a8e-894c-91cd51072e1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 00:00:46 np0005603435 nova_compute[239938]: 2026-01-31 05:00:46.816 239942 DEBUG oslo_concurrency.lockutils [req-c9923b72-6893-4880-8993-dcaec2ffc7fd req-3926d625-78f0-492f-adcd-d85bdbf67ede c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-79e4d808-e888-48d3-8b42-c6e0d9350d37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 00:00:46 np0005603435 nova_compute[239938]: 2026-01-31 05:00:46.816 239942 DEBUG oslo_concurrency.lockutils [req-c9923b72-6893-4880-8993-dcaec2ffc7fd req-3926d625-78f0-492f-adcd-d85bdbf67ede c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-79e4d808-e888-48d3-8b42-c6e0d9350d37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 00:00:46 np0005603435 nova_compute[239938]: 2026-01-31 05:00:46.817 239942 DEBUG nova.network.neutron [req-c9923b72-6893-4880-8993-dcaec2ffc7fd req-3926d625-78f0-492f-adcd-d85bdbf67ede c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Refreshing network info cache for port 09b730e5-cc74-4a8e-894c-91cd51072e1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 00:00:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.5 MiB/s wr, 186 op/s
Jan 31 00:00:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e469 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:00:49 np0005603435 nova_compute[239938]: 2026-01-31 05:00:49.485 239942 DEBUG nova.network.neutron [req-c9923b72-6893-4880-8993-dcaec2ffc7fd req-3926d625-78f0-492f-adcd-d85bdbf67ede c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Updated VIF entry in instance network info cache for port 09b730e5-cc74-4a8e-894c-91cd51072e1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 00:00:49 np0005603435 nova_compute[239938]: 2026-01-31 05:00:49.486 239942 DEBUG nova.network.neutron [req-c9923b72-6893-4880-8993-dcaec2ffc7fd req-3926d625-78f0-492f-adcd-d85bdbf67ede c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Updating instance_info_cache with network_info: [{"id": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "address": "fa:16:3e:85:f4:2b", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09b730e5-cc", "ovs_interfaceid": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 00:00:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 105 op/s
Jan 31 00:00:49 np0005603435 nova_compute[239938]: 2026-01-31 05:00:49.679 239942 DEBUG oslo_concurrency.lockutils [req-c9923b72-6893-4880-8993-dcaec2ffc7fd req-3926d625-78f0-492f-adcd-d85bdbf67ede c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-79e4d808-e888-48d3-8b42-c6e0d9350d37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 00:00:50 np0005603435 nova_compute[239938]: 2026-01-31 05:00:50.699 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:51 np0005603435 nova_compute[239938]: 2026-01-31 05:00:51.345 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 105 op/s
Jan 31 00:00:53 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 31 00:00:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 76 op/s
Jan 31 00:00:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e469 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:00:54 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:54Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:f4:2b 10.100.0.9
Jan 31 00:00:54 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:54Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:f4:2b 10.100.0.9
Jan 31 00:00:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 990 KiB/s wr, 89 op/s
Jan 31 00:00:55 np0005603435 nova_compute[239938]: 2026-01-31 05:00:55.702 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:55.924 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:55.925 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:55.926 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:00:56 np0005603435 nova_compute[239938]: 2026-01-31 05:00:56.348 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.175 239942 DEBUG oslo_concurrency.lockutils [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Acquiring lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.175 239942 DEBUG oslo_concurrency.lockutils [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.176 239942 DEBUG oslo_concurrency.lockutils [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Acquiring lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.176 239942 DEBUG oslo_concurrency.lockutils [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.176 239942 DEBUG oslo_concurrency.lockutils [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.177 239942 INFO nova.compute.manager [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Terminating instance#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.178 239942 DEBUG nova.compute.manager [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 00:00:58 np0005603435 kernel: tap27384399-6d (unregistering): left promiscuous mode
Jan 31 00:00:58 np0005603435 NetworkManager[49097]: <info>  [1769835658.2518] device (tap27384399-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.260 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:58 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:58Z|00261|binding|INFO|Releasing lport 27384399-6d62-46d0-a4c1-3ef6d37998a7 from this chassis (sb_readonly=0)
Jan 31 00:00:58 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:58Z|00262|binding|INFO|Setting lport 27384399-6d62-46d0-a4c1-3ef6d37998a7 down in Southbound
Jan 31 00:00:58 np0005603435 ovn_controller[145670]: 2026-01-31T05:00:58Z|00263|binding|INFO|Removing iface tap27384399-6d ovn-installed in OVS
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.263 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:58.268 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:2a:e5 10.100.0.12'], port_security=['fa:16:3e:86:2a:e5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e9012993-27a3-4599-ba2e-d9f3ecf2551e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f4019d294054f68b35b8f860129d22b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cce6ab71-eb98-451e-8f5c-5676889e02eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=383ad1c1-534f-47b8-ad27-6921a5514a36, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=27384399-6d62-46d0-a4c1-3ef6d37998a7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 00:00:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:58.269 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 27384399-6d62-46d0-a4c1-3ef6d37998a7 in datapath 3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45 unbound from our chassis#033[00m
Jan 31 00:00:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:58.271 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 00:00:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:58.272 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f1559df8-5e5c-4ca4-9fcd-904491d68407]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:58.272 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45 namespace which is not needed anymore#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.273 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:58 np0005603435 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Jan 31 00:00:58 np0005603435 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Consumed 14.250s CPU time.
Jan 31 00:00:58 np0005603435 systemd-machined[208030]: Machine qemu-26-instance-0000001a terminated.
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.417 239942 INFO nova.virt.libvirt.driver [-] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Instance destroyed successfully.#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.419 239942 DEBUG nova.objects.instance [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lazy-loading 'resources' on Instance uuid e9012993-27a3-4599-ba2e-d9f3ecf2551e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.438 239942 DEBUG nova.virt.libvirt.vif [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T05:00:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-instance-1097842089',display_name='tempest-instance-1097842089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1097842089',id=26,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMD3YnigGV3CL4giyFwlCjNM/6UHSkzWnApipbiRasjH338Xq5rHDDpCVCLLdljitMAt2WDx7ntFxYCKGX5r1AqnkrymoA5QWnrZy5vEJiKAOFpbaaA7QlN8aHJto9IoCQ==',key_name='tempest-keypair-933048906',keypairs=<?>,launch_index=0,launched_at=2026-01-31T05:00:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f4019d294054f68b35b8f860129d22b',ramdisk_id='',reservation_id='r-r0ikr5re',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-24587387',owner_user_name='tempest-VolumesBackupsTest-24587387-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T05:00:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2dc5826041a84e3897b017d9ad6bbe2c',uuid=e9012993-27a3-4599-ba2e-d9f3ecf2551e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "address": "fa:16:3e:86:2a:e5", "network": {"id": "3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1581492157-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f4019d294054f68b35b8f860129d22b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27384399-6d", "ovs_interfaceid": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 00:00:58 np0005603435 neutron-haproxy-ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45[271656]: [NOTICE]   (271660) : haproxy version is 2.8.14-c23fe91
Jan 31 00:00:58 np0005603435 neutron-haproxy-ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45[271656]: [NOTICE]   (271660) : path to executable is /usr/sbin/haproxy
Jan 31 00:00:58 np0005603435 neutron-haproxy-ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45[271656]: [WARNING]  (271660) : Exiting Master process...
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.439 239942 DEBUG nova.network.os_vif_util [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Converting VIF {"id": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "address": "fa:16:3e:86:2a:e5", "network": {"id": "3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1581492157-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f4019d294054f68b35b8f860129d22b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27384399-6d", "ovs_interfaceid": "27384399-6d62-46d0-a4c1-3ef6d37998a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 00:00:58 np0005603435 neutron-haproxy-ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45[271656]: [ALERT]    (271660) : Current worker (271662) exited with code 143 (Terminated)
Jan 31 00:00:58 np0005603435 neutron-haproxy-ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45[271656]: [WARNING]  (271660) : All workers exited. Exiting... (0)
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.441 239942 DEBUG nova.network.os_vif_util [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:2a:e5,bridge_name='br-int',has_traffic_filtering=True,id=27384399-6d62-46d0-a4c1-3ef6d37998a7,network=Network(3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27384399-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.441 239942 DEBUG os_vif [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:2a:e5,bridge_name='br-int',has_traffic_filtering=True,id=27384399-6d62-46d0-a4c1-3ef6d37998a7,network=Network(3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27384399-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.443 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:58 np0005603435 systemd[1]: libpod-ad3c1a0b1ca9fbfcfe342cd6c0ddc427bdd68e01afeb21b8250ae953dab0e146.scope: Deactivated successfully.
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.443 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27384399-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.446 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.448 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:58 np0005603435 podman[272250]: 2026-01-31 05:00:58.451085284 +0000 UTC m=+0.069585580 container died ad3c1a0b1ca9fbfcfe342cd6c0ddc427bdd68e01afeb21b8250ae953dab0e146 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.452 239942 INFO os_vif [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:2a:e5,bridge_name='br-int',has_traffic_filtering=True,id=27384399-6d62-46d0-a4c1-3ef6d37998a7,network=Network(3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27384399-6d')#033[00m
Jan 31 00:00:58 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad3c1a0b1ca9fbfcfe342cd6c0ddc427bdd68e01afeb21b8250ae953dab0e146-userdata-shm.mount: Deactivated successfully.
Jan 31 00:00:58 np0005603435 systemd[1]: var-lib-containers-storage-overlay-a63b80261362f8328c5fb7abbf796562e976e3c341cf9daa110a875313c364e4-merged.mount: Deactivated successfully.
Jan 31 00:00:58 np0005603435 podman[272250]: 2026-01-31 05:00:58.504134777 +0000 UTC m=+0.122635033 container cleanup ad3c1a0b1ca9fbfcfe342cd6c0ddc427bdd68e01afeb21b8250ae953dab0e146 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 00:00:58 np0005603435 systemd[1]: libpod-conmon-ad3c1a0b1ca9fbfcfe342cd6c0ddc427bdd68e01afeb21b8250ae953dab0e146.scope: Deactivated successfully.
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.522 239942 DEBUG nova.compute.manager [req-8af87cb4-f0cc-4ecc-ab55-35492170f345 req-197704b7-de36-4a66-8257-1cc9b25fe1f0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Received event network-vif-unplugged-27384399-6d62-46d0-a4c1-3ef6d37998a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.523 239942 DEBUG oslo_concurrency.lockutils [req-8af87cb4-f0cc-4ecc-ab55-35492170f345 req-197704b7-de36-4a66-8257-1cc9b25fe1f0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.524 239942 DEBUG oslo_concurrency.lockutils [req-8af87cb4-f0cc-4ecc-ab55-35492170f345 req-197704b7-de36-4a66-8257-1cc9b25fe1f0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.524 239942 DEBUG oslo_concurrency.lockutils [req-8af87cb4-f0cc-4ecc-ab55-35492170f345 req-197704b7-de36-4a66-8257-1cc9b25fe1f0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.525 239942 DEBUG nova.compute.manager [req-8af87cb4-f0cc-4ecc-ab55-35492170f345 req-197704b7-de36-4a66-8257-1cc9b25fe1f0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] No waiting events found dispatching network-vif-unplugged-27384399-6d62-46d0-a4c1-3ef6d37998a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.525 239942 DEBUG nova.compute.manager [req-8af87cb4-f0cc-4ecc-ab55-35492170f345 req-197704b7-de36-4a66-8257-1cc9b25fe1f0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Received event network-vif-unplugged-27384399-6d62-46d0-a4c1-3ef6d37998a7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 00:00:58 np0005603435 podman[272306]: 2026-01-31 05:00:58.585170541 +0000 UTC m=+0.051128708 container remove ad3c1a0b1ca9fbfcfe342cd6c0ddc427bdd68e01afeb21b8250ae953dab0e146 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 00:00:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:58.591 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c308a3-2650-4eca-bdff-a0f665d2d57b]: (4, ('Sat Jan 31 05:00:58 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45 (ad3c1a0b1ca9fbfcfe342cd6c0ddc427bdd68e01afeb21b8250ae953dab0e146)\nad3c1a0b1ca9fbfcfe342cd6c0ddc427bdd68e01afeb21b8250ae953dab0e146\nSat Jan 31 05:00:58 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45 (ad3c1a0b1ca9fbfcfe342cd6c0ddc427bdd68e01afeb21b8250ae953dab0e146)\nad3c1a0b1ca9fbfcfe342cd6c0ddc427bdd68e01afeb21b8250ae953dab0e146\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:58.593 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[3d822220-0b54-423d-b3cf-1cab84a48883]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:58.594 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b2084dc-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.596 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:58 np0005603435 kernel: tap3b2084dc-b0: left promiscuous mode
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.605 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:00:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:58.612 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[ff894798-2b14-4bcb-bf65-b8c950b1448d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:58.626 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3b8bcd-f2aa-47f8-9540-25a9f636e794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:58.628 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[7453d491-c647-45b0-93a9-2cda0ac050a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:58.643 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2e4cc24a-96d9-4ded-bf04-56ef55906c4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467732, 'reachable_time': 28330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272321, 'error': None, 'target': 'ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:58 np0005603435 systemd[1]: run-netns-ovnmeta\x2d3b2084dc\x2dbcc8\x2d4de8\x2d9f4d\x2dc4cdda00eb45.mount: Deactivated successfully.
Jan 31 00:00:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:58.647 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3b2084dc-bcc8-4de8-9f4d-c4cdda00eb45 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 00:00:58 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:00:58.647 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[529f31c8-e31d-44d7-a27f-97800b80ed3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:00:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e469 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.925 239942 INFO nova.virt.libvirt.driver [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Deleting instance files /var/lib/nova/instances/e9012993-27a3-4599-ba2e-d9f3ecf2551e_del#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.926 239942 INFO nova.virt.libvirt.driver [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Deletion of /var/lib/nova/instances/e9012993-27a3-4599-ba2e-d9f3ecf2551e_del complete#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.973 239942 INFO nova.compute.manager [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.974 239942 DEBUG oslo.service.loopingcall [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.974 239942 DEBUG nova.compute.manager [-] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 00:00:58 np0005603435 nova_compute[239938]: 2026-01-31 05:00:58.975 239942 DEBUG nova.network.neutron [-] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 00:00:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.028 239942 DEBUG nova.network.neutron [-] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.044 239942 INFO nova.compute.manager [-] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Took 1.07 seconds to deallocate network for instance.#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.124 239942 DEBUG nova.compute.manager [req-d7e4f89f-c343-4b28-a500-c513d2b736bc req-4bb295f8-3f97-4a44-8f48-7df7a426a16f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Received event network-vif-deleted-27384399-6d62-46d0-a4c1-3ef6d37998a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.267 239942 INFO nova.compute.manager [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Took 0.22 seconds to detach 1 volumes for instance.#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.317 239942 DEBUG oslo_concurrency.lockutils [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.318 239942 DEBUG oslo_concurrency.lockutils [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.358 239942 DEBUG nova.scheduler.client.report [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Refreshing inventories for resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.376 239942 DEBUG nova.scheduler.client.report [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Updating ProviderTree inventory for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.377 239942 DEBUG nova.compute.provider_tree [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Updating inventory in ProviderTree for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.623 239942 DEBUG nova.scheduler.client.report [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Refreshing aggregate associations for resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.656 239942 DEBUG nova.scheduler.client.report [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Refreshing trait associations for resource provider 4d0a6937-09c9-4e01-94bd-2812940db2bc, traits: COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SVM,COMPUTE_NODE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.705 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.724 239942 DEBUG nova.compute.manager [req-8f844474-e5b9-4b2f-92f0-90f52e925f58 req-f81997b0-39fa-4b73-ac52-da1705ca5a06 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Received event network-vif-plugged-27384399-6d62-46d0-a4c1-3ef6d37998a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.725 239942 DEBUG oslo_concurrency.lockutils [req-8f844474-e5b9-4b2f-92f0-90f52e925f58 req-f81997b0-39fa-4b73-ac52-da1705ca5a06 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.726 239942 DEBUG oslo_concurrency.lockutils [req-8f844474-e5b9-4b2f-92f0-90f52e925f58 req-f81997b0-39fa-4b73-ac52-da1705ca5a06 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.726 239942 DEBUG oslo_concurrency.lockutils [req-8f844474-e5b9-4b2f-92f0-90f52e925f58 req-f81997b0-39fa-4b73-ac52-da1705ca5a06 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.726 239942 DEBUG nova.compute.manager [req-8f844474-e5b9-4b2f-92f0-90f52e925f58 req-f81997b0-39fa-4b73-ac52-da1705ca5a06 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] No waiting events found dispatching network-vif-plugged-27384399-6d62-46d0-a4c1-3ef6d37998a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.727 239942 WARNING nova.compute.manager [req-8f844474-e5b9-4b2f-92f0-90f52e925f58 req-f81997b0-39fa-4b73-ac52-da1705ca5a06 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Received unexpected event network-vif-plugged-27384399-6d62-46d0-a4c1-3ef6d37998a7 for instance with vm_state deleted and task_state None.#033[00m
Jan 31 00:01:00 np0005603435 nova_compute[239938]: 2026-01-31 05:01:00.779 239942 DEBUG oslo_concurrency.processutils [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:01:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:01:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1335671728' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:01:01 np0005603435 nova_compute[239938]: 2026-01-31 05:01:01.384 239942 DEBUG oslo_concurrency.processutils [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 00:01:01 np0005603435 nova_compute[239938]: 2026-01-31 05:01:01.391 239942 DEBUG nova.compute.provider_tree [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 00:01:01 np0005603435 nova_compute[239938]: 2026-01-31 05:01:01.414 239942 DEBUG nova.scheduler.client.report [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 00:01:01 np0005603435 nova_compute[239938]: 2026-01-31 05:01:01.445 239942 DEBUG oslo_concurrency.lockutils [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 00:01:01 np0005603435 nova_compute[239938]: 2026-01-31 05:01:01.489 239942 INFO nova.scheduler.client.report [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Deleted allocations for instance e9012993-27a3-4599-ba2e-d9f3ecf2551e
Jan 31 00:01:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 31 00:01:01 np0005603435 nova_compute[239938]: 2026-01-31 05:01:01.594 239942 DEBUG oslo_concurrency.lockutils [None req-9bf55aac-5ab3-4896-be79-b07cac1a8bc0 2dc5826041a84e3897b017d9ad6bbe2c 6f4019d294054f68b35b8f860129d22b - - default default] Lock "e9012993-27a3-4599-ba2e-d9f3ecf2551e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 00:01:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:01:01 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3943102538' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:01:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e469 do_prune osdmap full prune enabled
Jan 31 00:01:01 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e470 e470: 3 total, 3 up, 3 in
Jan 31 00:01:01 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e470: 3 total, 3 up, 3 in
Jan 31 00:01:02 np0005603435 nova_compute[239938]: 2026-01-31 05:01:02.943 239942 DEBUG oslo_concurrency.lockutils [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "79e4d808-e888-48d3-8b42-c6e0d9350d37" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 00:01:02 np0005603435 nova_compute[239938]: 2026-01-31 05:01:02.944 239942 DEBUG oslo_concurrency.lockutils [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 00:01:02 np0005603435 nova_compute[239938]: 2026-01-31 05:01:02.965 239942 DEBUG nova.objects.instance [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lazy-loading 'flavor' on Instance uuid 79e4d808-e888-48d3-8b42-c6e0d9350d37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 00:01:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e470 do_prune osdmap full prune enabled
Jan 31 00:01:02 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e471 e471: 3 total, 3 up, 3 in
Jan 31 00:01:02 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e471: 3 total, 3 up, 3 in
Jan 31 00:01:02 np0005603435 nova_compute[239938]: 2026-01-31 05:01:02.998 239942 DEBUG oslo_concurrency.lockutils [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.296 239942 DEBUG oslo_concurrency.lockutils [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "79e4d808-e888-48d3-8b42-c6e0d9350d37" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.297 239942 DEBUG oslo_concurrency.lockutils [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.298 239942 INFO nova.compute.manager [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Attaching volume 71a9fe8a-ffcb-4c1c-8440-d74282a54e27 to /dev/vdb
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.416 239942 DEBUG os_brick.utils [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.418 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.430 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.431 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a82798-5156-43a0-8c87-0435c681815b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.432 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.439 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.440 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[eebe0211-4d57-4fc2-92a9-9ceff4041cd1]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.442 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.447 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.450 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.451 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[89c5518a-e2e2-4aed-a167-b23203d6c004]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.452 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[33f0f1f6-abb0-41d5-925d-f41cad617a31]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.453 239942 DEBUG oslo_concurrency.processutils [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.474 239942 DEBUG oslo_concurrency.processutils [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.478 239942 DEBUG os_brick.initiator.connectors.lightos [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.478 239942 DEBUG os_brick.initiator.connectors.lightos [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.478 239942 DEBUG os_brick.initiator.connectors.lightos [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.479 239942 DEBUG os_brick.utils [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 00:01:03 np0005603435 nova_compute[239938]: 2026-01-31 05:01:03.479 239942 DEBUG nova.virt.block_device [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Updating existing volume attachment record: 276a922e-4c46-4cd0-bb3a-34299498e42a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 00:01:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Jan 31 00:01:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e471 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:01:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:01:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2457814856' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.443 239942 DEBUG os_brick.encryptors [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Using volume encryption metadata '{'encryption_key_id': '54f69cd2-2209-42ca-86c8-4b8b0f0da371', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-71a9fe8a-ffcb-4c1c-8440-d74282a54e27', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '71a9fe8a-ffcb-4c1c-8440-d74282a54e27', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '79e4d808-e888-48d3-8b42-c6e0d9350d37', 'attached_at': '', 'detached_at': '', 'volume_id': '71a9fe8a-ffcb-4c1c-8440-d74282a54e27', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.453 239942 DEBUG barbicanclient.client [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.478 239942 DEBUG barbicanclient.v1.secrets [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/54f69cd2-2209-42ca-86c8-4b8b0f0da371 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.479 239942 INFO barbicanclient.base [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/54f69cd2-2209-42ca-86c8-4b8b0f0da371
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.503 239942 DEBUG barbicanclient.client [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.504 239942 INFO barbicanclient.base [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/54f69cd2-2209-42ca-86c8-4b8b0f0da371
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.522 239942 DEBUG barbicanclient.client [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.523 239942 INFO barbicanclient.base [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/54f69cd2-2209-42ca-86c8-4b8b0f0da371
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.543 239942 DEBUG barbicanclient.client [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.544 239942 INFO barbicanclient.base [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/54f69cd2-2209-42ca-86c8-4b8b0f0da371
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.568 239942 DEBUG barbicanclient.client [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.569 239942 INFO barbicanclient.base [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/54f69cd2-2209-42ca-86c8-4b8b0f0da371
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.587 239942 DEBUG barbicanclient.client [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.588 239942 INFO barbicanclient.base [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/54f69cd2-2209-42ca-86c8-4b8b0f0da371
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.606 239942 DEBUG barbicanclient.client [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.607 239942 INFO barbicanclient.base [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/54f69cd2-2209-42ca-86c8-4b8b0f0da371
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.625 239942 DEBUG barbicanclient.client [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.626 239942 INFO barbicanclient.base [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/54f69cd2-2209-42ca-86c8-4b8b0f0da371
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.646 239942 DEBUG barbicanclient.client [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.647 239942 INFO barbicanclient.base [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/54f69cd2-2209-42ca-86c8-4b8b0f0da371
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.665 239942 DEBUG barbicanclient.client [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.666 239942 INFO barbicanclient.base [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/54f69cd2-2209-42ca-86c8-4b8b0f0da371
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.690 239942 DEBUG barbicanclient.client [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.691 239942 INFO barbicanclient.base [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/54f69cd2-2209-42ca-86c8-4b8b0f0da371
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.708 239942 DEBUG barbicanclient.client [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.709 239942 INFO barbicanclient.base [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/54f69cd2-2209-42ca-86c8-4b8b0f0da371
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.739 239942 DEBUG barbicanclient.client [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.740 239942 INFO barbicanclient.base [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/54f69cd2-2209-42ca-86c8-4b8b0f0da371
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.758 239942 DEBUG barbicanclient.client [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.759 239942 INFO barbicanclient.base [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/54f69cd2-2209-42ca-86c8-4b8b0f0da371
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.782 239942 DEBUG barbicanclient.client [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.783 239942 INFO barbicanclient.base [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/54f69cd2-2209-42ca-86c8-4b8b0f0da371
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.808 239942 DEBUG barbicanclient.client [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.809 239942 DEBUG nova.virt.libvirt.host [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 31 00:01:04 np0005603435 nova_compute[239938]:  <usage type="volume">
Jan 31 00:01:04 np0005603435 nova_compute[239938]:    <volume>71a9fe8a-ffcb-4c1c-8440-d74282a54e27</volume>
Jan 31 00:01:04 np0005603435 nova_compute[239938]:  </usage>
Jan 31 00:01:04 np0005603435 nova_compute[239938]: </secret>
Jan 31 00:01:04 np0005603435 nova_compute[239938]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.824 239942 DEBUG nova.objects.instance [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lazy-loading 'flavor' on Instance uuid 79e4d808-e888-48d3-8b42-c6e0d9350d37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.854 239942 DEBUG nova.virt.libvirt.driver [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Attempting to attach volume 71a9fe8a-ffcb-4c1c-8440-d74282a54e27 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 00:01:04 np0005603435 nova_compute[239938]: 2026-01-31 05:01:04.857 239942 DEBUG nova.virt.libvirt.guest [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 00:01:04 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 00:01:04 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-71a9fe8a-ffcb-4c1c-8440-d74282a54e27">
Jan 31 00:01:04 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 31 00:01:04 np0005603435 nova_compute[239938]:  </source>
Jan 31 00:01:04 np0005603435 nova_compute[239938]:  <auth username="openstack">
Jan 31 00:01:04 np0005603435 nova_compute[239938]:    <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 31 00:01:04 np0005603435 nova_compute[239938]:  </auth>
Jan 31 00:01:04 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 31 00:01:04 np0005603435 nova_compute[239938]:  <serial>71a9fe8a-ffcb-4c1c-8440-d74282a54e27</serial>
Jan 31 00:01:04 np0005603435 nova_compute[239938]:  <encryption format="luks">
Jan 31 00:01:04 np0005603435 nova_compute[239938]:    <secret type="passphrase" uuid="06ae232a-36e3-4df5-81f0-ccef27699ae9"/>
Jan 31 00:01:04 np0005603435 nova_compute[239938]:  </encryption>
Jan 31 00:01:04 np0005603435 nova_compute[239938]: </disk>
Jan 31 00:01:04 np0005603435 nova_compute[239938]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 00:01:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e471 do_prune osdmap full prune enabled
Jan 31 00:01:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e472 e472: 3 total, 3 up, 3 in
Jan 31 00:01:05 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e472: 3 total, 3 up, 3 in
Jan 31 00:01:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 150 op/s
Jan 31 00:01:05 np0005603435 nova_compute[239938]: 2026-01-31 05:01:05.742 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:01:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_05:01:06
Jan 31 00:01:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 00:01:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 31 00:01:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'images', 'default.rgw.control', '.mgr', 'volumes', 'vms', 'backups', '.rgw.root']
Jan 31 00:01:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 00:01:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:01:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:01:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:01:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:01:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:01:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:01:07 np0005603435 nova_compute[239938]: 2026-01-31 05:01:07.173 239942 DEBUG nova.virt.libvirt.driver [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 00:01:07 np0005603435 nova_compute[239938]: 2026-01-31 05:01:07.173 239942 DEBUG nova.virt.libvirt.driver [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 00:01:07 np0005603435 nova_compute[239938]: 2026-01-31 05:01:07.173 239942 DEBUG nova.virt.libvirt.driver [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 00:01:07 np0005603435 nova_compute[239938]: 2026-01-31 05:01:07.174 239942 DEBUG nova.virt.libvirt.driver [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] No VIF found with MAC fa:16:3e:85:f4:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 00:01:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e472 do_prune osdmap full prune enabled
Jan 31 00:01:07 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e473 e473: 3 total, 3 up, 3 in
Jan 31 00:01:07 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e473: 3 total, 3 up, 3 in
Jan 31 00:01:07 np0005603435 nova_compute[239938]: 2026-01-31 05:01:07.381 239942 DEBUG oslo_concurrency.lockutils [None req-07eade0d-08fe-403e-918f-8bf11f390e46 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:01:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.6 MiB/s wr, 185 op/s
Jan 31 00:01:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 00:01:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 00:01:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 00:01:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 00:01:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 00:01:08 np0005603435 podman[272383]: 2026-01-31 05:01:08.107458879 +0000 UTC m=+0.070085042 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 00:01:08 np0005603435 podman[272384]: 2026-01-31 05:01:08.180045681 +0000 UTC m=+0.136927926 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, config_id=ovn_controller, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 00:01:08 np0005603435 nova_compute[239938]: 2026-01-31 05:01:08.252 239942 DEBUG oslo_concurrency.lockutils [None req-d8795423-8c4e-4f14-8af5-78052438d4be 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "79e4d808-e888-48d3-8b42-c6e0d9350d37" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:01:08 np0005603435 nova_compute[239938]: 2026-01-31 05:01:08.252 239942 DEBUG oslo_concurrency.lockutils [None req-d8795423-8c4e-4f14-8af5-78052438d4be 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:01:08 np0005603435 nova_compute[239938]: 2026-01-31 05:01:08.275 239942 INFO nova.compute.manager [None req-d8795423-8c4e-4f14-8af5-78052438d4be 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Detaching volume 71a9fe8a-ffcb-4c1c-8440-d74282a54e27#033[00m
Jan 31 00:01:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 00:01:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 00:01:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 00:01:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 00:01:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 00:01:08 np0005603435 nova_compute[239938]: 2026-01-31 05:01:08.399 239942 INFO nova.virt.block_device [None req-d8795423-8c4e-4f14-8af5-78052438d4be 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Attempting to driver detach volume 71a9fe8a-ffcb-4c1c-8440-d74282a54e27 from mountpoint /dev/vdb#033[00m
Jan 31 00:01:08 np0005603435 nova_compute[239938]: 2026-01-31 05:01:08.448 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:08 np0005603435 nova_compute[239938]: 2026-01-31 05:01:08.510 239942 DEBUG os_brick.encryptors [None req-d8795423-8c4e-4f14-8af5-78052438d4be 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Using volume encryption metadata '{'encryption_key_id': '54f69cd2-2209-42ca-86c8-4b8b0f0da371', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-71a9fe8a-ffcb-4c1c-8440-d74282a54e27', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '71a9fe8a-ffcb-4c1c-8440-d74282a54e27', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '79e4d808-e888-48d3-8b42-c6e0d9350d37', 'attached_at': '', 'detached_at': '', 'volume_id': '71a9fe8a-ffcb-4c1c-8440-d74282a54e27', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 31 00:01:08 np0005603435 nova_compute[239938]: 2026-01-31 05:01:08.521 239942 DEBUG nova.virt.libvirt.driver [None req-d8795423-8c4e-4f14-8af5-78052438d4be 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Attempting to detach device vdb from instance 79e4d808-e888-48d3-8b42-c6e0d9350d37 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 00:01:08 np0005603435 nova_compute[239938]: 2026-01-31 05:01:08.522 239942 DEBUG nova.virt.libvirt.guest [None req-d8795423-8c4e-4f14-8af5-78052438d4be 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 00:01:08 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 00:01:08 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-71a9fe8a-ffcb-4c1c-8440-d74282a54e27">
Jan 31 00:01:08 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 31 00:01:08 np0005603435 nova_compute[239938]:  </source>
Jan 31 00:01:08 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 31 00:01:08 np0005603435 nova_compute[239938]:  <serial>71a9fe8a-ffcb-4c1c-8440-d74282a54e27</serial>
Jan 31 00:01:08 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 00:01:08 np0005603435 nova_compute[239938]:  <encryption format="luks">
Jan 31 00:01:08 np0005603435 nova_compute[239938]:    <secret type="passphrase" uuid="06ae232a-36e3-4df5-81f0-ccef27699ae9"/>
Jan 31 00:01:08 np0005603435 nova_compute[239938]:  </encryption>
Jan 31 00:01:08 np0005603435 nova_compute[239938]: </disk>
Jan 31 00:01:08 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 00:01:08 np0005603435 nova_compute[239938]: 2026-01-31 05:01:08.532 239942 INFO nova.virt.libvirt.driver [None req-d8795423-8c4e-4f14-8af5-78052438d4be 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Successfully detached device vdb from instance 79e4d808-e888-48d3-8b42-c6e0d9350d37 from the persistent domain config.#033[00m
Jan 31 00:01:08 np0005603435 nova_compute[239938]: 2026-01-31 05:01:08.533 239942 DEBUG nova.virt.libvirt.driver [None req-d8795423-8c4e-4f14-8af5-78052438d4be 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 79e4d808-e888-48d3-8b42-c6e0d9350d37 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 00:01:08 np0005603435 nova_compute[239938]: 2026-01-31 05:01:08.534 239942 DEBUG nova.virt.libvirt.guest [None req-d8795423-8c4e-4f14-8af5-78052438d4be 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 00:01:08 np0005603435 nova_compute[239938]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 00:01:08 np0005603435 nova_compute[239938]:  <source protocol="rbd" name="volumes/volume-71a9fe8a-ffcb-4c1c-8440-d74282a54e27">
Jan 31 00:01:08 np0005603435 nova_compute[239938]:    <host name="192.168.122.100" port="6789"/>
Jan 31 00:01:08 np0005603435 nova_compute[239938]:  </source>
Jan 31 00:01:08 np0005603435 nova_compute[239938]:  <target dev="vdb" bus="virtio"/>
Jan 31 00:01:08 np0005603435 nova_compute[239938]:  <serial>71a9fe8a-ffcb-4c1c-8440-d74282a54e27</serial>
Jan 31 00:01:08 np0005603435 nova_compute[239938]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 00:01:08 np0005603435 nova_compute[239938]:  <encryption format="luks">
Jan 31 00:01:08 np0005603435 nova_compute[239938]:    <secret type="passphrase" uuid="06ae232a-36e3-4df5-81f0-ccef27699ae9"/>
Jan 31 00:01:08 np0005603435 nova_compute[239938]:  </encryption>
Jan 31 00:01:08 np0005603435 nova_compute[239938]: </disk>
Jan 31 00:01:08 np0005603435 nova_compute[239938]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 00:01:08 np0005603435 nova_compute[239938]: 2026-01-31 05:01:08.652 239942 DEBUG nova.virt.libvirt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Received event <DeviceRemovedEvent: 1769835668.651752, 79e4d808-e888-48d3-8b42-c6e0d9350d37 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 00:01:08 np0005603435 nova_compute[239938]: 2026-01-31 05:01:08.654 239942 DEBUG nova.virt.libvirt.driver [None req-d8795423-8c4e-4f14-8af5-78052438d4be 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 79e4d808-e888-48d3-8b42-c6e0d9350d37 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 00:01:08 np0005603435 nova_compute[239938]: 2026-01-31 05:01:08.658 239942 INFO nova.virt.libvirt.driver [None req-d8795423-8c4e-4f14-8af5-78052438d4be 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Successfully detached device vdb from instance 79e4d808-e888-48d3-8b42-c6e0d9350d37 from the live domain config.#033[00m
Jan 31 00:01:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:01:08 np0005603435 nova_compute[239938]: 2026-01-31 05:01:08.967 239942 DEBUG nova.objects.instance [None req-d8795423-8c4e-4f14-8af5-78052438d4be 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lazy-loading 'flavor' on Instance uuid 79e4d808-e888-48d3-8b42-c6e0d9350d37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 00:01:09 np0005603435 nova_compute[239938]: 2026-01-31 05:01:09.042 239942 DEBUG oslo_concurrency.lockutils [None req-d8795423-8c4e-4f14-8af5-78052438d4be 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:01:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.4 MiB/s wr, 115 op/s
Jan 31 00:01:09 np0005603435 nova_compute[239938]: 2026-01-31 05:01:09.922 239942 DEBUG oslo_concurrency.lockutils [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "79e4d808-e888-48d3-8b42-c6e0d9350d37" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:01:09 np0005603435 nova_compute[239938]: 2026-01-31 05:01:09.922 239942 DEBUG oslo_concurrency.lockutils [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:01:09 np0005603435 nova_compute[239938]: 2026-01-31 05:01:09.923 239942 DEBUG oslo_concurrency.lockutils [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:01:09 np0005603435 nova_compute[239938]: 2026-01-31 05:01:09.924 239942 DEBUG oslo_concurrency.lockutils [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:01:09 np0005603435 nova_compute[239938]: 2026-01-31 05:01:09.924 239942 DEBUG oslo_concurrency.lockutils [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:01:09 np0005603435 nova_compute[239938]: 2026-01-31 05:01:09.927 239942 INFO nova.compute.manager [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Terminating instance#033[00m
Jan 31 00:01:09 np0005603435 nova_compute[239938]: 2026-01-31 05:01:09.929 239942 DEBUG nova.compute.manager [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 00:01:09 np0005603435 kernel: tap09b730e5-cc (unregistering): left promiscuous mode
Jan 31 00:01:09 np0005603435 NetworkManager[49097]: <info>  [1769835669.9724] device (tap09b730e5-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 00:01:09 np0005603435 ovn_controller[145670]: 2026-01-31T05:01:09Z|00264|binding|INFO|Releasing lport 09b730e5-cc74-4a8e-894c-91cd51072e1f from this chassis (sb_readonly=0)
Jan 31 00:01:09 np0005603435 ovn_controller[145670]: 2026-01-31T05:01:09Z|00265|binding|INFO|Setting lport 09b730e5-cc74-4a8e-894c-91cd51072e1f down in Southbound
Jan 31 00:01:09 np0005603435 nova_compute[239938]: 2026-01-31 05:01:09.981 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:09 np0005603435 ovn_controller[145670]: 2026-01-31T05:01:09Z|00266|binding|INFO|Removing iface tap09b730e5-cc ovn-installed in OVS
Jan 31 00:01:09 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:09.992 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:f4:2b 10.100.0.9'], port_security=['fa:16:3e:85:f4:2b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '79e4d808-e888-48d3-8b42-c6e0d9350d37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f25b83f-b794-417e-88e7-d89c680f473d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48935f8745744c4ba5400c13f80e0379', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bff456ce-01a2-4b10-8073-b174ddc2a585', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.204'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94c57d33-0e3a-4b86-87cd-ae1ca9bb064d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=09b730e5-cc74-4a8e-894c-91cd51072e1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 00:01:09 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:09.993 156017 INFO neutron.agent.ovn.metadata.agent [-] Port 09b730e5-cc74-4a8e-894c-91cd51072e1f in datapath 2f25b83f-b794-417e-88e7-d89c680f473d unbound from our chassis#033[00m
Jan 31 00:01:09 np0005603435 nova_compute[239938]: 2026-01-31 05:01:09.994 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:09 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:09.996 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f25b83f-b794-417e-88e7-d89c680f473d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 00:01:09 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:09.997 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[91027460-a198-4cc1-adf4-98192fcc37c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:09 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:09.997 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d namespace which is not needed anymore#033[00m
Jan 31 00:01:10 np0005603435 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Jan 31 00:01:10 np0005603435 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Consumed 15.596s CPU time.
Jan 31 00:01:10 np0005603435 systemd-machined[208030]: Machine qemu-27-instance-0000001b terminated.
Jan 31 00:01:10 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[272211]: [NOTICE]   (272215) : haproxy version is 2.8.14-c23fe91
Jan 31 00:01:10 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[272211]: [NOTICE]   (272215) : path to executable is /usr/sbin/haproxy
Jan 31 00:01:10 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[272211]: [WARNING]  (272215) : Exiting Master process...
Jan 31 00:01:10 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[272211]: [WARNING]  (272215) : Exiting Master process...
Jan 31 00:01:10 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[272211]: [ALERT]    (272215) : Current worker (272217) exited with code 143 (Terminated)
Jan 31 00:01:10 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[272211]: [WARNING]  (272215) : All workers exited. Exiting... (0)
Jan 31 00:01:10 np0005603435 systemd[1]: libpod-8f54ad1029be14ad2b736d16ceb2ba31d7233e41059ddec45af25fd8f0d1b870.scope: Deactivated successfully.
Jan 31 00:01:10 np0005603435 podman[272457]: 2026-01-31 05:01:10.116487496 +0000 UTC m=+0.042341017 container died 8f54ad1029be14ad2b736d16ceb2ba31d7233e41059ddec45af25fd8f0d1b870 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 00:01:10 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f54ad1029be14ad2b736d16ceb2ba31d7233e41059ddec45af25fd8f0d1b870-userdata-shm.mount: Deactivated successfully.
Jan 31 00:01:10 np0005603435 systemd[1]: var-lib-containers-storage-overlay-338b9f0aa9827daff8c59f62f4a36b319766225c3e9bb7db91a4b4eb4eceaee7-merged.mount: Deactivated successfully.
Jan 31 00:01:10 np0005603435 podman[272457]: 2026-01-31 05:01:10.155579244 +0000 UTC m=+0.081432755 container cleanup 8f54ad1029be14ad2b736d16ceb2ba31d7233e41059ddec45af25fd8f0d1b870 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 00:01:10 np0005603435 systemd[1]: libpod-conmon-8f54ad1029be14ad2b736d16ceb2ba31d7233e41059ddec45af25fd8f0d1b870.scope: Deactivated successfully.
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.163 239942 INFO nova.virt.libvirt.driver [-] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Instance destroyed successfully.#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.163 239942 DEBUG nova.objects.instance [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lazy-loading 'resources' on Instance uuid 79e4d808-e888-48d3-8b42-c6e0d9350d37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 00:01:10 np0005603435 podman[272499]: 2026-01-31 05:01:10.212694964 +0000 UTC m=+0.039325275 container remove 8f54ad1029be14ad2b736d16ceb2ba31d7233e41059ddec45af25fd8f0d1b870 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 00:01:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:10.216 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb3fabd-58eb-47eb-aff1-256524d19aad]: (4, ('Sat Jan 31 05:01:10 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d (8f54ad1029be14ad2b736d16ceb2ba31d7233e41059ddec45af25fd8f0d1b870)\n8f54ad1029be14ad2b736d16ceb2ba31d7233e41059ddec45af25fd8f0d1b870\nSat Jan 31 05:01:10 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d (8f54ad1029be14ad2b736d16ceb2ba31d7233e41059ddec45af25fd8f0d1b870)\n8f54ad1029be14ad2b736d16ceb2ba31d7233e41059ddec45af25fd8f0d1b870\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:10.217 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[9a897d2b-8f31-442f-a7f6-18cfa3db3005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:10.218 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f25b83f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.221 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:10 np0005603435 kernel: tap2f25b83f-b0: left promiscuous mode
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.229 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.229 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:10.231 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b04ab991-b427-43ee-aa4f-06257a55a91b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.241 239942 DEBUG nova.compute.manager [req-a2834993-0932-4d5d-a10f-48732f47c0c2 req-d4fac2ee-0f81-46de-bcff-fafd7785bd2f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Received event network-vif-unplugged-09b730e5-cc74-4a8e-894c-91cd51072e1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.242 239942 DEBUG oslo_concurrency.lockutils [req-a2834993-0932-4d5d-a10f-48732f47c0c2 req-d4fac2ee-0f81-46de-bcff-fafd7785bd2f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.243 239942 DEBUG oslo_concurrency.lockutils [req-a2834993-0932-4d5d-a10f-48732f47c0c2 req-d4fac2ee-0f81-46de-bcff-fafd7785bd2f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.243 239942 DEBUG oslo_concurrency.lockutils [req-a2834993-0932-4d5d-a10f-48732f47c0c2 req-d4fac2ee-0f81-46de-bcff-fafd7785bd2f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.243 239942 DEBUG nova.compute.manager [req-a2834993-0932-4d5d-a10f-48732f47c0c2 req-d4fac2ee-0f81-46de-bcff-fafd7785bd2f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] No waiting events found dispatching network-vif-unplugged-09b730e5-cc74-4a8e-894c-91cd51072e1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.244 239942 DEBUG nova.compute.manager [req-a2834993-0932-4d5d-a10f-48732f47c0c2 req-d4fac2ee-0f81-46de-bcff-fafd7785bd2f c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Received event network-vif-unplugged-09b730e5-cc74-4a8e-894c-91cd51072e1f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.247 239942 DEBUG nova.virt.libvirt.vif [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T05:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-2028458698',display_name='tempest-TestEncryptedCinderVolumes-server-2028458698',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-2028458698',id=27,image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ4WqnO4wVm4Dct+29WZNqsQJvDZ7+oMnvkdZxGKMg53aAhI8Wpy9rzJCw1uDdLfABmpfltRhDa933aDbvtyuE/HbkfaGwe1QUgyVtWz6jiDO3dH5hSEqs/4G0+tuU1raw==',key_name='tempest-keypair-1940583653',keypairs=<?>,launch_index=0,launched_at=2026-01-31T05:00:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48935f8745744c4ba5400c13f80e0379',ramdisk_id='',reservation_id='r-kffugcm4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bf004ad8-fb70-4caa-9170-9f02e22d687d',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1466370108',owner_user_name='tempest-TestEncryptedCinderVolumes-1466370108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T05:00:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6784d92c92b24526a302a1a74a813c76',uuid=79e4d808-e888-48d3-8b42-c6e0d9350d37,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "address": "fa:16:3e:85:f4:2b", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09b730e5-cc", "ovs_interfaceid": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.247 239942 DEBUG nova.network.os_vif_util [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converting VIF {"id": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "address": "fa:16:3e:85:f4:2b", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09b730e5-cc", "ovs_interfaceid": "09b730e5-cc74-4a8e-894c-91cd51072e1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.248 239942 DEBUG nova.network.os_vif_util [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:f4:2b,bridge_name='br-int',has_traffic_filtering=True,id=09b730e5-cc74-4a8e-894c-91cd51072e1f,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09b730e5-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.249 239942 DEBUG os_vif [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:f4:2b,bridge_name='br-int',has_traffic_filtering=True,id=09b730e5-cc74-4a8e-894c-91cd51072e1f,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09b730e5-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.251 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.252 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09b730e5-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.254 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:10.256 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[90eab343-552f-4790-a4ea-1aabd094cb19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.257 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 00:01:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:10.258 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[95da029c-21af-4a36-9c7a-784affb24447]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.260 239942 INFO os_vif [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:f4:2b,bridge_name='br-int',has_traffic_filtering=True,id=09b730e5-cc74-4a8e-894c-91cd51072e1f,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09b730e5-cc')#033[00m
Jan 31 00:01:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:10.272 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[cbefc2ae-b96a-44e5-acdc-a4746ae90189]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469672, 'reachable_time': 20822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272519, 'error': None, 'target': 'ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:10.275 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 00:01:10 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:10.275 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[720acbe4-ad47-4ee3-9593-2d7ba668c923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:10 np0005603435 systemd[1]: run-netns-ovnmeta\x2d2f25b83f\x2db794\x2d417e\x2d88e7\x2dd89c680f473d.mount: Deactivated successfully.
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.535 239942 INFO nova.virt.libvirt.driver [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Deleting instance files /var/lib/nova/instances/79e4d808-e888-48d3-8b42-c6e0d9350d37_del#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.536 239942 INFO nova.virt.libvirt.driver [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Deletion of /var/lib/nova/instances/79e4d808-e888-48d3-8b42-c6e0d9350d37_del complete#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.587 239942 INFO nova.compute.manager [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.588 239942 DEBUG oslo.service.loopingcall [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.589 239942 DEBUG nova.compute.manager [-] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.589 239942 DEBUG nova.network.neutron [-] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 00:01:10 np0005603435 nova_compute[239938]: 2026-01-31 05:01:10.744 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:11 np0005603435 nova_compute[239938]: 2026-01-31 05:01:11.388 239942 DEBUG nova.network.neutron [-] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 00:01:11 np0005603435 nova_compute[239938]: 2026-01-31 05:01:11.429 239942 INFO nova.compute.manager [-] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Took 0.84 seconds to deallocate network for instance.#033[00m
Jan 31 00:01:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.8 MiB/s wr, 114 op/s
Jan 31 00:01:11 np0005603435 nova_compute[239938]: 2026-01-31 05:01:11.595 239942 DEBUG nova.compute.manager [req-c7554628-712c-4457-b465-ee409762ebb8 req-de1e34c0-8753-4d6f-a4bb-07a974b641fc c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Received event network-vif-deleted-09b730e5-cc74-4a8e-894c-91cd51072e1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:01:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e473 do_prune osdmap full prune enabled
Jan 31 00:01:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e474 e474: 3 total, 3 up, 3 in
Jan 31 00:01:11 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e474: 3 total, 3 up, 3 in
Jan 31 00:01:11 np0005603435 nova_compute[239938]: 2026-01-31 05:01:11.661 239942 DEBUG oslo_concurrency.lockutils [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:01:11 np0005603435 nova_compute[239938]: 2026-01-31 05:01:11.662 239942 DEBUG oslo_concurrency.lockutils [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:01:11 np0005603435 nova_compute[239938]: 2026-01-31 05:01:11.725 239942 DEBUG oslo_concurrency.processutils [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:01:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:01:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/280282579' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:01:12 np0005603435 nova_compute[239938]: 2026-01-31 05:01:12.238 239942 DEBUG oslo_concurrency.processutils [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:01:12 np0005603435 nova_compute[239938]: 2026-01-31 05:01:12.248 239942 DEBUG nova.compute.provider_tree [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 00:01:12 np0005603435 nova_compute[239938]: 2026-01-31 05:01:12.304 239942 DEBUG nova.scheduler.client.report [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 00:01:12 np0005603435 nova_compute[239938]: 2026-01-31 05:01:12.369 239942 DEBUG nova.compute.manager [req-91310a41-f3f8-4e44-bbfc-1e444325abf7 req-1c971a2a-7a54-4db6-aa7e-982f41a493c4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Received event network-vif-plugged-09b730e5-cc74-4a8e-894c-91cd51072e1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:01:12 np0005603435 nova_compute[239938]: 2026-01-31 05:01:12.369 239942 DEBUG oslo_concurrency.lockutils [req-91310a41-f3f8-4e44-bbfc-1e444325abf7 req-1c971a2a-7a54-4db6-aa7e-982f41a493c4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:01:12 np0005603435 nova_compute[239938]: 2026-01-31 05:01:12.370 239942 DEBUG oslo_concurrency.lockutils [req-91310a41-f3f8-4e44-bbfc-1e444325abf7 req-1c971a2a-7a54-4db6-aa7e-982f41a493c4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:01:12 np0005603435 nova_compute[239938]: 2026-01-31 05:01:12.370 239942 DEBUG oslo_concurrency.lockutils [req-91310a41-f3f8-4e44-bbfc-1e444325abf7 req-1c971a2a-7a54-4db6-aa7e-982f41a493c4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:01:12 np0005603435 nova_compute[239938]: 2026-01-31 05:01:12.371 239942 DEBUG nova.compute.manager [req-91310a41-f3f8-4e44-bbfc-1e444325abf7 req-1c971a2a-7a54-4db6-aa7e-982f41a493c4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] No waiting events found dispatching network-vif-plugged-09b730e5-cc74-4a8e-894c-91cd51072e1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 00:01:12 np0005603435 nova_compute[239938]: 2026-01-31 05:01:12.372 239942 WARNING nova.compute.manager [req-91310a41-f3f8-4e44-bbfc-1e444325abf7 req-1c971a2a-7a54-4db6-aa7e-982f41a493c4 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Received unexpected event network-vif-plugged-09b730e5-cc74-4a8e-894c-91cd51072e1f for instance with vm_state deleted and task_state None.#033[00m
Jan 31 00:01:12 np0005603435 nova_compute[239938]: 2026-01-31 05:01:12.373 239942 DEBUG oslo_concurrency.lockutils [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:01:12 np0005603435 nova_compute[239938]: 2026-01-31 05:01:12.435 239942 INFO nova.scheduler.client.report [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Deleted allocations for instance 79e4d808-e888-48d3-8b42-c6e0d9350d37#033[00m
Jan 31 00:01:12 np0005603435 nova_compute[239938]: 2026-01-31 05:01:12.842 239942 DEBUG oslo_concurrency.lockutils [None req-d6fd3895-6fd0-4571-a8ef-c4da993430f5 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "79e4d808-e888-48d3-8b42-c6e0d9350d37" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:01:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 00:01:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:01:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 00:01:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:01:13 np0005603435 nova_compute[239938]: 2026-01-31 05:01:13.415 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835658.4133801, e9012993-27a3-4599-ba2e-d9f3ecf2551e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 00:01:13 np0005603435 nova_compute[239938]: 2026-01-31 05:01:13.415 239942 INFO nova.compute.manager [-] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] VM Stopped (Lifecycle Event)#033[00m
Jan 31 00:01:13 np0005603435 nova_compute[239938]: 2026-01-31 05:01:13.436 239942 DEBUG nova.compute.manager [None req-e73be478-867f-45da-88b6-ed322ebf8442 - - - - - -] [instance: e9012993-27a3-4599-ba2e-d9f3ecf2551e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 00:01:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 443 KiB/s rd, 1.7 MiB/s wr, 99 op/s
Jan 31 00:01:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:01:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/710842419' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:01:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e474 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 00:01:14 np0005603435 podman[272776]: 2026-01-31 05:01:14.590011605 +0000 UTC m=+0.033161686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:01:14 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 00:01:15 np0005603435 podman[272776]: 2026-01-31 05:01:15.134459497 +0000 UTC m=+0.577609538 container create 9924fade6ce83dc454e08faa63ff937ce347b8d97a5c45a1bc3b0ec96450b6de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_euclid, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 00:01:15 np0005603435 nova_compute[239938]: 2026-01-31 05:01:15.255 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:01:15 np0005603435 systemd[1]: Started libpod-conmon-9924fade6ce83dc454e08faa63ff937ce347b8d97a5c45a1bc3b0ec96450b6de.scope.
Jan 31 00:01:15 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:01:15 np0005603435 podman[272776]: 2026-01-31 05:01:15.567586178 +0000 UTC m=+1.010736219 container init 9924fade6ce83dc454e08faa63ff937ce347b8d97a5c45a1bc3b0ec96450b6de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_euclid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 00:01:15 np0005603435 podman[272776]: 2026-01-31 05:01:15.578411038 +0000 UTC m=+1.021561079 container start 9924fade6ce83dc454e08faa63ff937ce347b8d97a5c45a1bc3b0ec96450b6de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_euclid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 00:01:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 102 KiB/s rd, 448 KiB/s wr, 101 op/s
Jan 31 00:01:15 np0005603435 great_euclid[272793]: 167 167
Jan 31 00:01:15 np0005603435 systemd[1]: libpod-9924fade6ce83dc454e08faa63ff937ce347b8d97a5c45a1bc3b0ec96450b6de.scope: Deactivated successfully.
Jan 31 00:01:15 np0005603435 podman[272776]: 2026-01-31 05:01:15.706702536 +0000 UTC m=+1.149852567 container attach 9924fade6ce83dc454e08faa63ff937ce347b8d97a5c45a1bc3b0ec96450b6de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_euclid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 00:01:15 np0005603435 podman[272776]: 2026-01-31 05:01:15.707552326 +0000 UTC m=+1.150702327 container died 9924fade6ce83dc454e08faa63ff937ce347b8d97a5c45a1bc3b0ec96450b6de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_euclid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 00:01:15 np0005603435 nova_compute[239938]: 2026-01-31 05:01:15.745 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:01:16 np0005603435 systemd[1]: var-lib-containers-storage-overlay-1c35ba2ec0b79be5a5a819c986f0897f1517f226ee39a4acd69db334578d66b7-merged.mount: Deactivated successfully.
Jan 31 00:01:16 np0005603435 podman[272776]: 2026-01-31 05:01:16.707025783 +0000 UTC m=+2.150175824 container remove 9924fade6ce83dc454e08faa63ff937ce347b8d97a5c45a1bc3b0ec96450b6de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 00:01:16 np0005603435 systemd[1]: libpod-conmon-9924fade6ce83dc454e08faa63ff937ce347b8d97a5c45a1bc3b0ec96450b6de.scope: Deactivated successfully.
Jan 31 00:01:16 np0005603435 podman[272817]: 2026-01-31 05:01:16.851293934 +0000 UTC m=+0.032943192 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:01:17 np0005603435 podman[272817]: 2026-01-31 05:01:17.013955196 +0000 UTC m=+0.195604403 container create b74a1e2297ff853a349bb42b5cb1ef7f354557264af8ddd8b6a342fd439f00d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 00:01:17 np0005603435 systemd[1]: Started libpod-conmon-b74a1e2297ff853a349bb42b5cb1ef7f354557264af8ddd8b6a342fd439f00d0.scope.
Jan 31 00:01:17 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:01:17 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce5cdd70672541a9a36250efbcf212d22d9381239e9df3004cad7b809ddc132/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 00:01:17 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce5cdd70672541a9a36250efbcf212d22d9381239e9df3004cad7b809ddc132/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 00:01:17 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce5cdd70672541a9a36250efbcf212d22d9381239e9df3004cad7b809ddc132/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 00:01:17 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce5cdd70672541a9a36250efbcf212d22d9381239e9df3004cad7b809ddc132/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 00:01:17 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce5cdd70672541a9a36250efbcf212d22d9381239e9df3004cad7b809ddc132/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 00:01:17 np0005603435 podman[272817]: 2026-01-31 05:01:17.174313353 +0000 UTC m=+0.355962610 container init b74a1e2297ff853a349bb42b5cb1ef7f354557264af8ddd8b6a342fd439f00d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_booth, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 00:01:17 np0005603435 podman[272817]: 2026-01-31 05:01:17.180635795 +0000 UTC m=+0.362285012 container start b74a1e2297ff853a349bb42b5cb1ef7f354557264af8ddd8b6a342fd439f00d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Jan 31 00:01:17 np0005603435 podman[272817]: 2026-01-31 05:01:17.198095044 +0000 UTC m=+0.379744261 container attach b74a1e2297ff853a349bb42b5cb1ef7f354557264af8ddd8b6a342fd439f00d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.36073424641164e-06 of space, bias 1.0, pg target 0.0025082202739234918 quantized to 32 (current 32)
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.03446500442794519 of space, bias 1.0, pg target 10.339501328383555 quantized to 32 (current 32)
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.4210567038421123e-06 of space, bias 1.0, pg target 0.00041210644411421256 quantized to 32 (current 32)
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006672825875083256 of space, bias 1.0, pg target 0.19351195037741442 quantized to 32 (current 32)
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.249869684754789e-07 of space, bias 4.0, pg target 0.0009569848834315555 quantized to 16 (current 16)
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011064783160773588 quantized to 32 (current 32)
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012171261476850949 quantized to 32 (current 32)
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Jan 31 00:01:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 2.2 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 99 KiB/s rd, 7.2 MiB/s wr, 106 op/s
Jan 31 00:01:17 np0005603435 strange_booth[272834]: --> passed data devices: 0 physical, 3 LVM
Jan 31 00:01:17 np0005603435 strange_booth[272834]: --> All data devices are unavailable
Jan 31 00:01:17 np0005603435 systemd[1]: libpod-b74a1e2297ff853a349bb42b5cb1ef7f354557264af8ddd8b6a342fd439f00d0.scope: Deactivated successfully.
Jan 31 00:01:17 np0005603435 podman[272817]: 2026-01-31 05:01:17.660892106 +0000 UTC m=+0.842541363 container died b74a1e2297ff853a349bb42b5cb1ef7f354557264af8ddd8b6a342fd439f00d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_booth, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 00:01:17 np0005603435 systemd[1]: var-lib-containers-storage-overlay-5ce5cdd70672541a9a36250efbcf212d22d9381239e9df3004cad7b809ddc132-merged.mount: Deactivated successfully.
Jan 31 00:01:17 np0005603435 podman[272817]: 2026-01-31 05:01:17.925785471 +0000 UTC m=+1.107434688 container remove b74a1e2297ff853a349bb42b5cb1ef7f354557264af8ddd8b6a342fd439f00d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_booth, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 00:01:17 np0005603435 systemd[1]: libpod-conmon-b74a1e2297ff853a349bb42b5cb1ef7f354557264af8ddd8b6a342fd439f00d0.scope: Deactivated successfully.
Jan 31 00:01:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:01:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3995934196' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:01:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:01:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3995934196' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:01:18 np0005603435 podman[272929]: 2026-01-31 05:01:18.511034962 +0000 UTC m=+0.127352207 container create 4e1aa50c073994f53c658bd57e8c481ab946d8f371964b9954ba07d83b075d93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 00:01:18 np0005603435 podman[272929]: 2026-01-31 05:01:18.422043536 +0000 UTC m=+0.038360791 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:01:18 np0005603435 systemd[1]: Started libpod-conmon-4e1aa50c073994f53c658bd57e8c481ab946d8f371964b9954ba07d83b075d93.scope.
Jan 31 00:01:18 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:01:18 np0005603435 podman[272929]: 2026-01-31 05:01:18.6406361 +0000 UTC m=+0.256953395 container init 4e1aa50c073994f53c658bd57e8c481ab946d8f371964b9954ba07d83b075d93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 00:01:18 np0005603435 podman[272929]: 2026-01-31 05:01:18.647796122 +0000 UTC m=+0.264113337 container start 4e1aa50c073994f53c658bd57e8c481ab946d8f371964b9954ba07d83b075d93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 00:01:18 np0005603435 podman[272929]: 2026-01-31 05:01:18.653030988 +0000 UTC m=+0.269348233 container attach 4e1aa50c073994f53c658bd57e8c481ab946d8f371964b9954ba07d83b075d93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 00:01:18 np0005603435 upbeat_mahavira[272945]: 167 167
Jan 31 00:01:18 np0005603435 systemd[1]: libpod-4e1aa50c073994f53c658bd57e8c481ab946d8f371964b9954ba07d83b075d93.scope: Deactivated successfully.
Jan 31 00:01:18 np0005603435 podman[272929]: 2026-01-31 05:01:18.657671779 +0000 UTC m=+0.273989024 container died 4e1aa50c073994f53c658bd57e8c481ab946d8f371964b9954ba07d83b075d93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mahavira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 00:01:18 np0005603435 systemd[1]: var-lib-containers-storage-overlay-30ace3c94ecd3e7c643099206cd04a36aa8dbf67269d4a614306df869adb9e56-merged.mount: Deactivated successfully.
Jan 31 00:01:18 np0005603435 podman[272929]: 2026-01-31 05:01:18.703344005 +0000 UTC m=+0.319661260 container remove 4e1aa50c073994f53c658bd57e8c481ab946d8f371964b9954ba07d83b075d93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 00:01:18 np0005603435 systemd[1]: libpod-conmon-4e1aa50c073994f53c658bd57e8c481ab946d8f371964b9954ba07d83b075d93.scope: Deactivated successfully.
Jan 31 00:01:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e474 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:01:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e474 do_prune osdmap full prune enabled
Jan 31 00:01:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e475 e475: 3 total, 3 up, 3 in
Jan 31 00:01:18 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e475: 3 total, 3 up, 3 in
Jan 31 00:01:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:01:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/895153006' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:01:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:01:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/895153006' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:01:18 np0005603435 podman[272972]: 2026-01-31 05:01:18.927138264 +0000 UTC m=+0.105577414 container create 41f427379d755b3c2d08a75c8f5ef3b846db2ea0227d1b145f4696a039064262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 00:01:18 np0005603435 podman[272972]: 2026-01-31 05:01:18.854086431 +0000 UTC m=+0.032525601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:01:18 np0005603435 systemd[1]: Started libpod-conmon-41f427379d755b3c2d08a75c8f5ef3b846db2ea0227d1b145f4696a039064262.scope.
Jan 31 00:01:19 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:01:19 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f5389a963f17391f00e353d8e062c20ff73688ea586c926d2306315016c666/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 00:01:19 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f5389a963f17391f00e353d8e062c20ff73688ea586c926d2306315016c666/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 00:01:19 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f5389a963f17391f00e353d8e062c20ff73688ea586c926d2306315016c666/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 00:01:19 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f5389a963f17391f00e353d8e062c20ff73688ea586c926d2306315016c666/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 00:01:19 np0005603435 podman[272972]: 2026-01-31 05:01:19.098428603 +0000 UTC m=+0.276867793 container init 41f427379d755b3c2d08a75c8f5ef3b846db2ea0227d1b145f4696a039064262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_agnesi, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 00:01:19 np0005603435 podman[272972]: 2026-01-31 05:01:19.109757165 +0000 UTC m=+0.288196315 container start 41f427379d755b3c2d08a75c8f5ef3b846db2ea0227d1b145f4696a039064262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 00:01:19 np0005603435 podman[272972]: 2026-01-31 05:01:19.129008507 +0000 UTC m=+0.307447717 container attach 41f427379d755b3c2d08a75c8f5ef3b846db2ea0227d1b145f4696a039064262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]: {
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:    "0": [
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:        {
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "devices": [
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "/dev/loop3"
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            ],
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "lv_name": "ceph_lv0",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "lv_size": "21470642176",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "name": "ceph_lv0",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "tags": {
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.cephx_lockbox_secret": "",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.cluster_name": "ceph",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.crush_device_class": "",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.encrypted": "0",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.objectstore": "bluestore",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.osd_id": "0",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.type": "block",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.vdo": "0",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.with_tpm": "0"
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            },
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "type": "block",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "vg_name": "ceph_vg0"
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:        }
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:    ],
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:    "1": [
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:        {
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "devices": [
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "/dev/loop4"
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            ],
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "lv_name": "ceph_lv1",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "lv_size": "21470642176",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "name": "ceph_lv1",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "tags": {
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.cephx_lockbox_secret": "",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.cluster_name": "ceph",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.crush_device_class": "",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.encrypted": "0",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.objectstore": "bluestore",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.osd_id": "1",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.type": "block",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.vdo": "0",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.with_tpm": "0"
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            },
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "type": "block",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "vg_name": "ceph_vg1"
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:        }
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:    ],
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:    "2": [
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:        {
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "devices": [
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "/dev/loop5"
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            ],
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "lv_name": "ceph_lv2",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "lv_size": "21470642176",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "name": "ceph_lv2",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "tags": {
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.cephx_lockbox_secret": "",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.cluster_name": "ceph",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.crush_device_class": "",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.encrypted": "0",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.objectstore": "bluestore",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.osd_id": "2",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.type": "block",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.vdo": "0",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:                "ceph.with_tpm": "0"
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            },
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "type": "block",
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:            "vg_name": "ceph_vg2"
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:        }
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]:    ]
Jan 31 00:01:19 np0005603435 bold_agnesi[272989]: }
Jan 31 00:01:19 np0005603435 systemd[1]: libpod-41f427379d755b3c2d08a75c8f5ef3b846db2ea0227d1b145f4696a039064262.scope: Deactivated successfully.
Jan 31 00:01:19 np0005603435 podman[272972]: 2026-01-31 05:01:19.439809273 +0000 UTC m=+0.618248403 container died 41f427379d755b3c2d08a75c8f5ef3b846db2ea0227d1b145f4696a039064262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_agnesi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 00:01:19 np0005603435 systemd[1]: var-lib-containers-storage-overlay-84f5389a963f17391f00e353d8e062c20ff73688ea586c926d2306315016c666-merged.mount: Deactivated successfully.
Jan 31 00:01:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 2.2 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 110 KiB/s rd, 9.0 MiB/s wr, 113 op/s
Jan 31 00:01:19 np0005603435 podman[272972]: 2026-01-31 05:01:19.622139507 +0000 UTC m=+0.800578637 container remove 41f427379d755b3c2d08a75c8f5ef3b846db2ea0227d1b145f4696a039064262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 00:01:19 np0005603435 systemd[1]: libpod-conmon-41f427379d755b3c2d08a75c8f5ef3b846db2ea0227d1b145f4696a039064262.scope: Deactivated successfully.
Jan 31 00:01:20 np0005603435 podman[273074]: 2026-01-31 05:01:20.240927682 +0000 UTC m=+0.078633978 container create bb053e5fae53b1e65cfdf0a5690e877e9e168048c0653055c35025a2777a7131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 00:01:20 np0005603435 nova_compute[239938]: 2026-01-31 05:01:20.257 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:20 np0005603435 podman[273074]: 2026-01-31 05:01:20.195249396 +0000 UTC m=+0.032955672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:01:20 np0005603435 systemd[1]: Started libpod-conmon-bb053e5fae53b1e65cfdf0a5690e877e9e168048c0653055c35025a2777a7131.scope.
Jan 31 00:01:20 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:01:20 np0005603435 podman[273074]: 2026-01-31 05:01:20.427016216 +0000 UTC m=+0.264722562 container init bb053e5fae53b1e65cfdf0a5690e877e9e168048c0653055c35025a2777a7131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_shamir, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 00:01:20 np0005603435 podman[273074]: 2026-01-31 05:01:20.434143097 +0000 UTC m=+0.271849353 container start bb053e5fae53b1e65cfdf0a5690e877e9e168048c0653055c35025a2777a7131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_shamir, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 00:01:20 np0005603435 festive_shamir[273090]: 167 167
Jan 31 00:01:20 np0005603435 systemd[1]: libpod-bb053e5fae53b1e65cfdf0a5690e877e9e168048c0653055c35025a2777a7131.scope: Deactivated successfully.
Jan 31 00:01:20 np0005603435 podman[273074]: 2026-01-31 05:01:20.45766443 +0000 UTC m=+0.295370766 container attach bb053e5fae53b1e65cfdf0a5690e877e9e168048c0653055c35025a2777a7131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 00:01:20 np0005603435 podman[273074]: 2026-01-31 05:01:20.45805286 +0000 UTC m=+0.295759146 container died bb053e5fae53b1e65cfdf0a5690e877e9e168048c0653055c35025a2777a7131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 00:01:20 np0005603435 systemd[1]: var-lib-containers-storage-overlay-4a2d9bb5f18e5562e21054a55190f53d57ac086942bd0f54e25216949c5e7c54-merged.mount: Deactivated successfully.
Jan 31 00:01:20 np0005603435 nova_compute[239938]: 2026-01-31 05:01:20.747 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:20 np0005603435 podman[273074]: 2026-01-31 05:01:20.835466304 +0000 UTC m=+0.673172590 container remove bb053e5fae53b1e65cfdf0a5690e877e9e168048c0653055c35025a2777a7131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_shamir, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 00:01:20 np0005603435 systemd[1]: libpod-conmon-bb053e5fae53b1e65cfdf0a5690e877e9e168048c0653055c35025a2777a7131.scope: Deactivated successfully.
Jan 31 00:01:20 np0005603435 podman[273114]: 2026-01-31 05:01:20.979734505 +0000 UTC m=+0.046659781 container create 5837e0674f6fe489b449c55f68c6472d91275322014134a34772a80e406a3216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 00:01:21 np0005603435 systemd[1]: Started libpod-conmon-5837e0674f6fe489b449c55f68c6472d91275322014134a34772a80e406a3216.scope.
Jan 31 00:01:21 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:01:21 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84f59922a8c776100c77d37e37bee2d093c2881fb3c4ac6a0d512da4438d169/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 00:01:21 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84f59922a8c776100c77d37e37bee2d093c2881fb3c4ac6a0d512da4438d169/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 00:01:21 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84f59922a8c776100c77d37e37bee2d093c2881fb3c4ac6a0d512da4438d169/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 00:01:21 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84f59922a8c776100c77d37e37bee2d093c2881fb3c4ac6a0d512da4438d169/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 00:01:21 np0005603435 podman[273114]: 2026-01-31 05:01:21.046499996 +0000 UTC m=+0.113425292 container init 5837e0674f6fe489b449c55f68c6472d91275322014134a34772a80e406a3216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_edison, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 00:01:21 np0005603435 podman[273114]: 2026-01-31 05:01:20.951886567 +0000 UTC m=+0.018811873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:01:21 np0005603435 podman[273114]: 2026-01-31 05:01:21.056206859 +0000 UTC m=+0.123132175 container start 5837e0674f6fe489b449c55f68c6472d91275322014134a34772a80e406a3216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Jan 31 00:01:21 np0005603435 podman[273114]: 2026-01-31 05:01:21.061196189 +0000 UTC m=+0.128121505 container attach 5837e0674f6fe489b449c55f68c6472d91275322014134a34772a80e406a3216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_edison, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 00:01:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 116 KiB/s rd, 21 MiB/s wr, 176 op/s
Jan 31 00:01:21 np0005603435 lvm[273206]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 00:01:21 np0005603435 lvm[273209]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 00:01:21 np0005603435 lvm[273209]: VG ceph_vg1 finished
Jan 31 00:01:21 np0005603435 lvm[273206]: VG ceph_vg0 finished
Jan 31 00:01:21 np0005603435 lvm[273212]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 00:01:21 np0005603435 lvm[273212]: VG ceph_vg2 finished
Jan 31 00:01:21 np0005603435 elated_edison[273130]: {}
Jan 31 00:01:21 np0005603435 systemd[1]: libpod-5837e0674f6fe489b449c55f68c6472d91275322014134a34772a80e406a3216.scope: Deactivated successfully.
Jan 31 00:01:21 np0005603435 systemd[1]: libpod-5837e0674f6fe489b449c55f68c6472d91275322014134a34772a80e406a3216.scope: Consumed 1.048s CPU time.
Jan 31 00:01:21 np0005603435 podman[273114]: 2026-01-31 05:01:21.894943331 +0000 UTC m=+0.961868647 container died 5837e0674f6fe489b449c55f68c6472d91275322014134a34772a80e406a3216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_edison, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 31 00:01:22 np0005603435 systemd[1]: var-lib-containers-storage-overlay-f84f59922a8c776100c77d37e37bee2d093c2881fb3c4ac6a0d512da4438d169-merged.mount: Deactivated successfully.
Jan 31 00:01:22 np0005603435 podman[273114]: 2026-01-31 05:01:22.031282081 +0000 UTC m=+1.098207347 container remove 5837e0674f6fe489b449c55f68c6472d91275322014134a34772a80e406a3216 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:01:22 np0005603435 systemd[1]: libpod-conmon-5837e0674f6fe489b449c55f68c6472d91275322014134a34772a80e406a3216.scope: Deactivated successfully.
Jan 31 00:01:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 00:01:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:01:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 00:01:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:01:22 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:01:22 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:01:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 2.7 GiB data, 3.0 GiB used, 57 GiB / 60 GiB avail; 112 KiB/s rd, 56 MiB/s wr, 177 op/s
Jan 31 00:01:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e475 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:01:23 np0005603435 nova_compute[239938]: 2026-01-31 05:01:23.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:01:23 np0005603435 nova_compute[239938]: 2026-01-31 05:01:23.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:01:23 np0005603435 nova_compute[239938]: 2026-01-31 05:01:23.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 00:01:23 np0005603435 nova_compute[239938]: 2026-01-31 05:01:23.949 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 00:01:24 np0005603435 nova_compute[239938]: 2026-01-31 05:01:24.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:01:24 np0005603435 nova_compute[239938]: 2026-01-31 05:01:24.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:01:24 np0005603435 nova_compute[239938]: 2026-01-31 05:01:24.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 00:01:25 np0005603435 nova_compute[239938]: 2026-01-31 05:01:25.162 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835670.161008, 79e4d808-e888-48d3-8b42-c6e0d9350d37 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 00:01:25 np0005603435 nova_compute[239938]: 2026-01-31 05:01:25.162 239942 INFO nova.compute.manager [-] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] VM Stopped (Lifecycle Event)#033[00m
Jan 31 00:01:25 np0005603435 nova_compute[239938]: 2026-01-31 05:01:25.187 239942 DEBUG nova.compute.manager [None req-6cb50083-a5ff-4310-bdc8-89633bac4863 - - - - - -] [instance: 79e4d808-e888-48d3-8b42-c6e0d9350d37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 00:01:25 np0005603435 nova_compute[239938]: 2026-01-31 05:01:25.259 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 2.9 GiB data, 3.2 GiB used, 57 GiB / 60 GiB avail; 172 KiB/s rd, 74 MiB/s wr, 279 op/s
Jan 31 00:01:25 np0005603435 nova_compute[239938]: 2026-01-31 05:01:25.749 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:25 np0005603435 nova_compute[239938]: 2026-01-31 05:01:25.908 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:01:26 np0005603435 nova_compute[239938]: 2026-01-31 05:01:26.882 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:01:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 2.5 GiB data, 3.1 GiB used, 57 GiB / 60 GiB avail; 989 KiB/s rd, 95 MiB/s wr, 302 op/s
Jan 31 00:01:27 np0005603435 nova_compute[239938]: 2026-01-31 05:01:27.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:01:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e475 do_prune osdmap full prune enabled
Jan 31 00:01:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e476 e476: 3 total, 3 up, 3 in
Jan 31 00:01:27 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e476: 3 total, 3 up, 3 in
Jan 31 00:01:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e476 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:01:28 np0005603435 nova_compute[239938]: 2026-01-31 05:01:28.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:01:28 np0005603435 nova_compute[239938]: 2026-01-31 05:01:28.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 00:01:28 np0005603435 nova_compute[239938]: 2026-01-31 05:01:28.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 00:01:28 np0005603435 nova_compute[239938]: 2026-01-31 05:01:28.908 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 00:01:28 np0005603435 nova_compute[239938]: 2026-01-31 05:01:28.908 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:01:28 np0005603435 nova_compute[239938]: 2026-01-31 05:01:28.947 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:01:28 np0005603435 nova_compute[239938]: 2026-01-31 05:01:28.948 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:01:28 np0005603435 nova_compute[239938]: 2026-01-31 05:01:28.948 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:01:28 np0005603435 nova_compute[239938]: 2026-01-31 05:01:28.948 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 00:01:28 np0005603435 nova_compute[239938]: 2026-01-31 05:01:28.949 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:01:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:01:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3862982909' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:01:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:01:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3862982909' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:01:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:01:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1595977364' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:01:29 np0005603435 nova_compute[239938]: 2026-01-31 05:01:29.465 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:01:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 2.5 GiB data, 3.1 GiB used, 57 GiB / 60 GiB avail; 989 KiB/s rd, 95 MiB/s wr, 302 op/s
Jan 31 00:01:29 np0005603435 nova_compute[239938]: 2026-01-31 05:01:29.657 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 00:01:29 np0005603435 nova_compute[239938]: 2026-01-31 05:01:29.659 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4340MB free_disk=59.987779148854315GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 00:01:29 np0005603435 nova_compute[239938]: 2026-01-31 05:01:29.660 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:01:29 np0005603435 nova_compute[239938]: 2026-01-31 05:01:29.660 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:01:29 np0005603435 nova_compute[239938]: 2026-01-31 05:01:29.780 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 00:01:29 np0005603435 nova_compute[239938]: 2026-01-31 05:01:29.781 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 00:01:29 np0005603435 nova_compute[239938]: 2026-01-31 05:01:29.806 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:01:30 np0005603435 nova_compute[239938]: 2026-01-31 05:01:30.262 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:01:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2050622540' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:01:30 np0005603435 nova_compute[239938]: 2026-01-31 05:01:30.360 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:01:30 np0005603435 nova_compute[239938]: 2026-01-31 05:01:30.367 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 00:01:30 np0005603435 nova_compute[239938]: 2026-01-31 05:01:30.384 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 00:01:30 np0005603435 nova_compute[239938]: 2026-01-31 05:01:30.407 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 00:01:30 np0005603435 nova_compute[239938]: 2026-01-31 05:01:30.407 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:01:30 np0005603435 nova_compute[239938]: 2026-01-31 05:01:30.751 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:30 np0005603435 nova_compute[239938]: 2026-01-31 05:01:30.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:01:30 np0005603435 nova_compute[239938]: 2026-01-31 05:01:30.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 00:01:30 np0005603435 nova_compute[239938]: 2026-01-31 05:01:30.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:01:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e476 do_prune osdmap full prune enabled
Jan 31 00:01:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e477 e477: 3 total, 3 up, 3 in
Jan 31 00:01:31 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e477: 3 total, 3 up, 3 in
Jan 31 00:01:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 2.2 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 1.7 MiB/s rd, 58 MiB/s wr, 255 op/s
Jan 31 00:01:31 np0005603435 nova_compute[239938]: 2026-01-31 05:01:31.903 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:01:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e477 do_prune osdmap full prune enabled
Jan 31 00:01:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e478 e478: 3 total, 3 up, 3 in
Jan 31 00:01:32 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e478: 3 total, 3 up, 3 in
Jan 31 00:01:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:01:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1384646279' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:01:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:01:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1384646279' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:01:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:01:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/338042396' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:01:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:01:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/338042396' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:01:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 300 active+clean; 2.1 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.7 KiB/s wr, 90 op/s
Jan 31 00:01:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e478 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:01:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e478 do_prune osdmap full prune enabled
Jan 31 00:01:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e479 e479: 3 total, 3 up, 3 in
Jan 31 00:01:34 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e479: 3 total, 3 up, 3 in
Jan 31 00:01:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e479 do_prune osdmap full prune enabled
Jan 31 00:01:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e480 e480: 3 total, 3 up, 3 in
Jan 31 00:01:35 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e480: 3 total, 3 up, 3 in
Jan 31 00:01:35 np0005603435 nova_compute[239938]: 2026-01-31 05:01:35.264 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 5 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 294 active+clean; 1.9 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.4 KiB/s wr, 122 op/s
Jan 31 00:01:35 np0005603435 nova_compute[239938]: 2026-01-31 05:01:35.784 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:01:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1975830666' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:01:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:01:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1975830666' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:01:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e480 do_prune osdmap full prune enabled
Jan 31 00:01:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e481 e481: 3 total, 3 up, 3 in
Jan 31 00:01:36 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e481: 3 total, 3 up, 3 in
Jan 31 00:01:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:01:36 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/403524070' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:01:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:01:36 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/403524070' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:01:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:01:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:01:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:01:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:01:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:01:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:01:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 468 MiB data, 1.5 GiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 54 KiB/s wr, 227 op/s
Jan 31 00:01:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e481 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:01:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e481 do_prune osdmap full prune enabled
Jan 31 00:01:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e482 e482: 3 total, 3 up, 3 in
Jan 31 00:01:38 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e482: 3 total, 3 up, 3 in
Jan 31 00:01:39 np0005603435 podman[273296]: 2026-01-31 05:01:39.114544706 +0000 UTC m=+0.063992586 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 00:01:39 np0005603435 podman[273297]: 2026-01-31 05:01:39.143267755 +0000 UTC m=+0.092727856 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 00:01:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 468 MiB data, 1.5 GiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 53 KiB/s wr, 210 op/s
Jan 31 00:01:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:39.834 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 00:01:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:39.834 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 00:01:39 np0005603435 nova_compute[239938]: 2026-01-31 05:01:39.835 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:39 np0005603435 nova_compute[239938]: 2026-01-31 05:01:39.969 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:40 np0005603435 nova_compute[239938]: 2026-01-31 05:01:40.082 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:40 np0005603435 nova_compute[239938]: 2026-01-31 05:01:40.265 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:40 np0005603435 nova_compute[239938]: 2026-01-31 05:01:40.786 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 88 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 45 KiB/s wr, 194 op/s
Jan 31 00:01:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 88 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 36 KiB/s wr, 132 op/s
Jan 31 00:01:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e482 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:01:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e482 do_prune osdmap full prune enabled
Jan 31 00:01:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e483 e483: 3 total, 3 up, 3 in
Jan 31 00:01:43 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e483: 3 total, 3 up, 3 in
Jan 31 00:01:45 np0005603435 nova_compute[239938]: 2026-01-31 05:01:45.267 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 88 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 127 B/s wr, 16 op/s
Jan 31 00:01:45 np0005603435 nova_compute[239938]: 2026-01-31 05:01:45.787 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e483 do_prune osdmap full prune enabled
Jan 31 00:01:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e484 e484: 3 total, 3 up, 3 in
Jan 31 00:01:45 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e484: 3 total, 3 up, 3 in
Jan 31 00:01:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 88 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 767 B/s wr, 21 op/s
Jan 31 00:01:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e484 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:01:49 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 31 00:01:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 88 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 767 B/s wr, 8 op/s
Jan 31 00:01:49 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:49.838 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:01:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e484 do_prune osdmap full prune enabled
Jan 31 00:01:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e485 e485: 3 total, 3 up, 3 in
Jan 31 00:01:49 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e485: 3 total, 3 up, 3 in
Jan 31 00:01:50 np0005603435 nova_compute[239938]: 2026-01-31 05:01:50.270 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:50 np0005603435 nova_compute[239938]: 2026-01-31 05:01:50.800 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 124 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.6 MiB/s wr, 36 op/s
Jan 31 00:01:52 np0005603435 nova_compute[239938]: 2026-01-31 05:01:52.282 239942 DEBUG oslo_concurrency.lockutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "28de1d7d-8395-4b6a-b203-54bc32800fee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:01:52 np0005603435 nova_compute[239938]: 2026-01-31 05:01:52.283 239942 DEBUG oslo_concurrency.lockutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "28de1d7d-8395-4b6a-b203-54bc32800fee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:01:52 np0005603435 nova_compute[239938]: 2026-01-31 05:01:52.320 239942 DEBUG nova.compute.manager [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 00:01:52 np0005603435 nova_compute[239938]: 2026-01-31 05:01:52.415 239942 DEBUG oslo_concurrency.lockutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:01:52 np0005603435 nova_compute[239938]: 2026-01-31 05:01:52.415 239942 DEBUG oslo_concurrency.lockutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:01:52 np0005603435 nova_compute[239938]: 2026-01-31 05:01:52.424 239942 DEBUG nova.virt.hardware [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 00:01:52 np0005603435 nova_compute[239938]: 2026-01-31 05:01:52.424 239942 INFO nova.compute.claims [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 00:01:52 np0005603435 nova_compute[239938]: 2026-01-31 05:01:52.538 239942 DEBUG oslo_concurrency.processutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:01:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e485 do_prune osdmap full prune enabled
Jan 31 00:01:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e486 e486: 3 total, 3 up, 3 in
Jan 31 00:01:52 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e486: 3 total, 3 up, 3 in
Jan 31 00:01:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:01:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3244147061' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.078 239942 DEBUG oslo_concurrency.processutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.086 239942 DEBUG nova.compute.provider_tree [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.114 239942 DEBUG nova.scheduler.client.report [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.145 239942 DEBUG oslo_concurrency.lockutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.147 239942 DEBUG nova.compute.manager [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.225 239942 DEBUG nova.compute.manager [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.226 239942 DEBUG nova.network.neutron [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.256 239942 INFO nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.292 239942 DEBUG nova.compute.manager [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.459 239942 INFO nova.virt.block_device [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Booting with volume 3abc480e-e62d-4eff-b2ab-639c5d2ce2a3 at /dev/vda#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.481 239942 DEBUG nova.policy [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6784d92c92b24526a302a1a74a813c76', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48935f8745744c4ba5400c13f80e0379', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 00:01:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 202 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 14 MiB/s wr, 98 op/s
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.619 239942 DEBUG os_brick.utils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.620 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.631 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.631 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[d334b66f-1667-414a-b8a5-941349803733]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.633 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.640 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.640 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[2b499803-ffec-4a95-a927-b9c8f074af5e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.642 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.648 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.649 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[cec85a35-f346-429e-822e-07b98824298a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.650 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[9dea4bf7-dac9-4840-989a-f4603073d71f]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.651 239942 DEBUG oslo_concurrency.processutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.676 239942 DEBUG oslo_concurrency.processutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.679 239942 DEBUG os_brick.initiator.connectors.lightos [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.680 239942 DEBUG os_brick.initiator.connectors.lightos [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.680 239942 DEBUG os_brick.initiator.connectors.lightos [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.681 239942 DEBUG os_brick.utils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 00:01:53 np0005603435 nova_compute[239938]: 2026-01-31 05:01:53.682 239942 DEBUG nova.virt.block_device [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Updating existing volume attachment record: 210d8c7a-182b-49a3-a633-70da002fc40b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 00:01:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e486 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:01:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:01:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1505013818' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:01:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:01:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1505013818' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:01:54 np0005603435 nova_compute[239938]: 2026-01-31 05:01:54.213 239942 DEBUG nova.network.neutron [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Successfully created port: a39cd9f4-e464-424a-85e2-9a5c357fe652 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 00:01:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:01:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3079143473' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.272 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 202 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 14 MiB/s wr, 120 op/s
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.730 239942 DEBUG nova.compute.manager [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.732 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.733 239942 INFO nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Creating image(s)#033[00m
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.733 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.734 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Ensure instance console log exists: /var/lib/nova/instances/28de1d7d-8395-4b6a-b203-54bc32800fee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.734 239942 DEBUG oslo_concurrency.lockutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.735 239942 DEBUG oslo_concurrency.lockutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.735 239942 DEBUG oslo_concurrency.lockutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.803 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.869 239942 DEBUG nova.network.neutron [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Successfully updated port: a39cd9f4-e464-424a-85e2-9a5c357fe652 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.887 239942 DEBUG oslo_concurrency.lockutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "refresh_cache-28de1d7d-8395-4b6a-b203-54bc32800fee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.888 239942 DEBUG oslo_concurrency.lockutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquired lock "refresh_cache-28de1d7d-8395-4b6a-b203-54bc32800fee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.888 239942 DEBUG nova.network.neutron [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 00:01:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:55.925 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:01:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:55.927 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:01:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:55.927 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.990 239942 DEBUG nova.compute.manager [req-2319a47a-529f-4840-b3da-b1d9706d0c00 req-5c2457af-ed7c-48a5-a433-9a6c99a60da0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Received event network-changed-a39cd9f4-e464-424a-85e2-9a5c357fe652 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.991 239942 DEBUG nova.compute.manager [req-2319a47a-529f-4840-b3da-b1d9706d0c00 req-5c2457af-ed7c-48a5-a433-9a6c99a60da0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Refreshing instance network info cache due to event network-changed-a39cd9f4-e464-424a-85e2-9a5c357fe652. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 00:01:55 np0005603435 nova_compute[239938]: 2026-01-31 05:01:55.992 239942 DEBUG oslo_concurrency.lockutils [req-2319a47a-529f-4840-b3da-b1d9706d0c00 req-5c2457af-ed7c-48a5-a433-9a6c99a60da0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-28de1d7d-8395-4b6a-b203-54bc32800fee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 00:01:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e486 do_prune osdmap full prune enabled
Jan 31 00:01:56 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e487 e487: 3 total, 3 up, 3 in
Jan 31 00:01:56 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e487: 3 total, 3 up, 3 in
Jan 31 00:01:56 np0005603435 nova_compute[239938]: 2026-01-31 05:01:56.055 239942 DEBUG nova.network.neutron [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 00:01:56 np0005603435 nova_compute[239938]: 2026-01-31 05:01:56.983 239942 DEBUG nova.network.neutron [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Updating instance_info_cache with network_info: [{"id": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "address": "fa:16:3e:23:dc:fd", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa39cd9f4-e4", "ovs_interfaceid": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.003 239942 DEBUG oslo_concurrency.lockutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Releasing lock "refresh_cache-28de1d7d-8395-4b6a-b203-54bc32800fee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.003 239942 DEBUG nova.compute.manager [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Instance network_info: |[{"id": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "address": "fa:16:3e:23:dc:fd", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa39cd9f4-e4", "ovs_interfaceid": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.004 239942 DEBUG oslo_concurrency.lockutils [req-2319a47a-529f-4840-b3da-b1d9706d0c00 req-5c2457af-ed7c-48a5-a433-9a6c99a60da0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-28de1d7d-8395-4b6a-b203-54bc32800fee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.004 239942 DEBUG nova.network.neutron [req-2319a47a-529f-4840-b3da-b1d9706d0c00 req-5c2457af-ed7c-48a5-a433-9a6c99a60da0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Refreshing network info cache for port a39cd9f4-e464-424a-85e2-9a5c357fe652 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.010 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Start _get_guest_xml network_info=[{"id": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "address": "fa:16:3e:23:dc:fd", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa39cd9f4-e4", "ovs_interfaceid": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'attachment_id': '210d8c7a-182b-49a3-a633-70da002fc40b', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3abc480e-e62d-4eff-b2ab-639c5d2ce2a3', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3abc480e-e62d-4eff-b2ab-639c5d2ce2a3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '28de1d7d-8395-4b6a-b203-54bc32800fee', 'attached_at': '', 'detached_at': '', 'volume_id': '3abc480e-e62d-4eff-b2ab-639c5d2ce2a3', 'serial': '3abc480e-e62d-4eff-b2ab-639c5d2ce2a3'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.015 239942 WARNING nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.028 239942 DEBUG nova.virt.libvirt.host [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.028 239942 DEBUG nova.virt.libvirt.host [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.033 239942 DEBUG nova.virt.libvirt.host [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.033 239942 DEBUG nova.virt.libvirt.host [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.034 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.034 239942 DEBUG nova.virt.hardware [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.035 239942 DEBUG nova.virt.hardware [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.036 239942 DEBUG nova.virt.hardware [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.036 239942 DEBUG nova.virt.hardware [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.036 239942 DEBUG nova.virt.hardware [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.037 239942 DEBUG nova.virt.hardware [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.037 239942 DEBUG nova.virt.hardware [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.037 239942 DEBUG nova.virt.hardware [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.038 239942 DEBUG nova.virt.hardware [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.038 239942 DEBUG nova.virt.hardware [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.038 239942 DEBUG nova.virt.hardware [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.076 239942 DEBUG nova.storage.rbd_utils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] rbd image 28de1d7d-8395-4b6a-b203-54bc32800fee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.082 239942 DEBUG oslo_concurrency.processutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:01:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:01:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3220769300' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:01:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:01:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3220769300' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:01:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 202 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 14 MiB/s wr, 139 op/s
Jan 31 00:01:57 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:01:57 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2184062717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.624 239942 DEBUG oslo_concurrency.processutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.756 239942 DEBUG os_brick.encryptors [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Using volume encryption metadata '{'encryption_key_id': 'b0942cd0-024c-414f-ab07-785dcf087d6a', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3abc480e-e62d-4eff-b2ab-639c5d2ce2a3', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3abc480e-e62d-4eff-b2ab-639c5d2ce2a3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '28de1d7d-8395-4b6a-b203-54bc32800fee', 'attached_at': '', 'detached_at': '', 'volume_id': '3abc480e-e62d-4eff-b2ab-639c5d2ce2a3', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.759 239942 DEBUG barbicanclient.client [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.781 239942 DEBUG barbicanclient.v1.secrets [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/b0942cd0-024c-414f-ab07-785dcf087d6a get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.782 239942 INFO barbicanclient.base [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/b0942cd0-024c-414f-ab07-785dcf087d6a#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.816 239942 DEBUG barbicanclient.client [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.816 239942 INFO barbicanclient.base [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/b0942cd0-024c-414f-ab07-785dcf087d6a#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.844 239942 DEBUG barbicanclient.client [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.844 239942 INFO barbicanclient.base [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/b0942cd0-024c-414f-ab07-785dcf087d6a#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.864 239942 DEBUG barbicanclient.client [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.864 239942 INFO barbicanclient.base [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/b0942cd0-024c-414f-ab07-785dcf087d6a#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.906 239942 DEBUG barbicanclient.client [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.907 239942 INFO barbicanclient.base [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/b0942cd0-024c-414f-ab07-785dcf087d6a#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.939 239942 DEBUG barbicanclient.client [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.939 239942 INFO barbicanclient.base [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/b0942cd0-024c-414f-ab07-785dcf087d6a#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.969 239942 DEBUG barbicanclient.client [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:01:57 np0005603435 nova_compute[239938]: 2026-01-31 05:01:57.969 239942 INFO barbicanclient.base [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/b0942cd0-024c-414f-ab07-785dcf087d6a#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.069 239942 DEBUG barbicanclient.client [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.070 239942 INFO barbicanclient.base [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/b0942cd0-024c-414f-ab07-785dcf087d6a#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.089 239942 DEBUG barbicanclient.client [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.089 239942 INFO barbicanclient.base [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/b0942cd0-024c-414f-ab07-785dcf087d6a#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.111 239942 DEBUG barbicanclient.client [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.112 239942 INFO barbicanclient.base [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/b0942cd0-024c-414f-ab07-785dcf087d6a#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.139 239942 DEBUG barbicanclient.client [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.140 239942 INFO barbicanclient.base [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/b0942cd0-024c-414f-ab07-785dcf087d6a#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.171 239942 DEBUG barbicanclient.client [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.172 239942 INFO barbicanclient.base [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/b0942cd0-024c-414f-ab07-785dcf087d6a#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.196 239942 DEBUG barbicanclient.client [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.197 239942 INFO barbicanclient.base [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/b0942cd0-024c-414f-ab07-785dcf087d6a#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.218 239942 DEBUG barbicanclient.client [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.219 239942 INFO barbicanclient.base [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/b0942cd0-024c-414f-ab07-785dcf087d6a#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.249 239942 DEBUG barbicanclient.client [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.250 239942 INFO barbicanclient.base [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/b0942cd0-024c-414f-ab07-785dcf087d6a#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.280 239942 DEBUG barbicanclient.client [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.280 239942 DEBUG nova.virt.libvirt.host [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  <usage type="volume">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <volume>3abc480e-e62d-4eff-b2ab-639c5d2ce2a3</volume>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  </usage>
Jan 31 00:01:58 np0005603435 nova_compute[239938]: </secret>
Jan 31 00:01:58 np0005603435 nova_compute[239938]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.316 239942 DEBUG nova.virt.libvirt.vif [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T05:01:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-692053603',display_name='tempest-TestEncryptedCinderVolumes-server-692053603',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-692053603',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAmr0MUFNJjz18mvNHr0kofSqXL+MOUCKmtJGcrQVuZqzDEVyxUUFebchvjqsqS9tyThgYSCkXKWLzTW0ED0WOyTQNQBDzi5dd8NYQAYU+nK8F6As1qr5NixmuIDexDl8Q==',key_name='tempest-TestEncryptedCinderVolumes-1017268198',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48935f8745744c4ba5400c13f80e0379',ramdisk_id='',reservation_id='r-gi4xc2ud',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1466370108',owner_user_name='tempest-TestEncryptedCinderVolumes-1466370108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T05:01:53Z,user_data=None,user_id='6784d92c92b24526a302a1a74a813c76',uuid=28de1d7d-8395-4b6a-b203-54bc32800fee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "address": "fa:16:3e:23:dc:fd", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa39cd9f4-e4", "ovs_interfaceid": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.317 239942 DEBUG nova.network.os_vif_util [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converting VIF {"id": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "address": "fa:16:3e:23:dc:fd", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa39cd9f4-e4", "ovs_interfaceid": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.318 239942 DEBUG nova.network.os_vif_util [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:dc:fd,bridge_name='br-int',has_traffic_filtering=True,id=a39cd9f4-e464-424a-85e2-9a5c357fe652,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa39cd9f4-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.321 239942 DEBUG nova.objects.instance [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lazy-loading 'pci_devices' on Instance uuid 28de1d7d-8395-4b6a-b203-54bc32800fee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.342 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] End _get_guest_xml xml=<domain type="kvm">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  <uuid>28de1d7d-8395-4b6a-b203-54bc32800fee</uuid>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  <name>instance-0000001c</name>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  <metadata>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-692053603</nova:name>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 05:01:57</nova:creationTime>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:        <nova:user uuid="6784d92c92b24526a302a1a74a813c76">tempest-TestEncryptedCinderVolumes-1466370108-project-member</nova:user>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:        <nova:project uuid="48935f8745744c4ba5400c13f80e0379">tempest-TestEncryptedCinderVolumes-1466370108</nova:project>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:        <nova:port uuid="a39cd9f4-e464-424a-85e2-9a5c357fe652">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:        </nova:port>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  </metadata>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <system>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <entry name="serial">28de1d7d-8395-4b6a-b203-54bc32800fee</entry>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <entry name="uuid">28de1d7d-8395-4b6a-b203-54bc32800fee</entry>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    </system>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  <os>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  </os>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  <features>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <acpi/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <apic/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  </features>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  </clock>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  </cpu>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  <devices>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/28de1d7d-8395-4b6a-b203-54bc32800fee_disk.config">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      </source>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      </auth>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    </disk>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="volumes/volume-3abc480e-e62d-4eff-b2ab-639c5d2ce2a3">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      </source>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      </auth>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <serial>3abc480e-e62d-4eff-b2ab-639c5d2ce2a3</serial>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <encryption format="luks">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:        <secret type="passphrase" uuid="3c2c9334-47fe-4f90-b59a-6c320ad3e35f"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      </encryption>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    </disk>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:23:dc:fd"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <target dev="tapa39cd9f4-e4"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    </interface>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/28de1d7d-8395-4b6a-b203-54bc32800fee/console.log" append="off"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    </serial>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <video>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    </video>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    </rng>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 31 00:01:58 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:    </memballoon>
Jan 31 00:01:58 np0005603435 nova_compute[239938]:  </devices>
Jan 31 00:01:58 np0005603435 nova_compute[239938]: </domain>
Jan 31 00:01:58 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.343 239942 DEBUG nova.compute.manager [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Preparing to wait for external event network-vif-plugged-a39cd9f4-e464-424a-85e2-9a5c357fe652 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.343 239942 DEBUG oslo_concurrency.lockutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.343 239942 DEBUG oslo_concurrency.lockutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.344 239942 DEBUG oslo_concurrency.lockutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.345 239942 DEBUG nova.virt.libvirt.vif [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T05:01:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-692053603',display_name='tempest-TestEncryptedCinderVolumes-server-692053603',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-692053603',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAmr0MUFNJjz18mvNHr0kofSqXL+MOUCKmtJGcrQVuZqzDEVyxUUFebchvjqsqS9tyThgYSCkXKWLzTW0ED0WOyTQNQBDzi5dd8NYQAYU+nK8F6As1qr5NixmuIDexDl8Q==',key_name='tempest-TestEncryptedCinderVolumes-1017268198',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48935f8745744c4ba5400c13f80e0379',ramdisk_id='',reservation_id='r-gi4xc2ud',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1466370108',owner_user_name='tempest-TestEncryptedCinderVolumes-1466370108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T05:01:53Z,user_data=None,user_id='6784d92c92b24526a302a1a74a813c76',uuid=28de1d7d-8395-4b6a-b203-54bc32800fee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "address": "fa:16:3e:23:dc:fd", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa39cd9f4-e4", "ovs_interfaceid": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.345 239942 DEBUG nova.network.os_vif_util [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converting VIF {"id": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "address": "fa:16:3e:23:dc:fd", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa39cd9f4-e4", "ovs_interfaceid": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.346 239942 DEBUG nova.network.os_vif_util [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:dc:fd,bridge_name='br-int',has_traffic_filtering=True,id=a39cd9f4-e464-424a-85e2-9a5c357fe652,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa39cd9f4-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.347 239942 DEBUG os_vif [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:dc:fd,bridge_name='br-int',has_traffic_filtering=True,id=a39cd9f4-e464-424a-85e2-9a5c357fe652,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa39cd9f4-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.348 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.349 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.349 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.352 239942 DEBUG nova.network.neutron [req-2319a47a-529f-4840-b3da-b1d9706d0c00 req-5c2457af-ed7c-48a5-a433-9a6c99a60da0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Updated VIF entry in instance network info cache for port a39cd9f4-e464-424a-85e2-9a5c357fe652. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.352 239942 DEBUG nova.network.neutron [req-2319a47a-529f-4840-b3da-b1d9706d0c00 req-5c2457af-ed7c-48a5-a433-9a6c99a60da0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Updating instance_info_cache with network_info: [{"id": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "address": "fa:16:3e:23:dc:fd", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa39cd9f4-e4", "ovs_interfaceid": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.356 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.356 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa39cd9f4-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.357 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa39cd9f4-e4, col_values=(('external_ids', {'iface-id': 'a39cd9f4-e464-424a-85e2-9a5c357fe652', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:dc:fd', 'vm-uuid': '28de1d7d-8395-4b6a-b203-54bc32800fee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.359 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:58 np0005603435 NetworkManager[49097]: <info>  [1769835718.3608] manager: (tapa39cd9f4-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.362 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.367 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.369 239942 INFO os_vif [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:dc:fd,bridge_name='br-int',has_traffic_filtering=True,id=a39cd9f4-e464-424a-85e2-9a5c357fe652,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa39cd9f4-e4')#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.372 239942 DEBUG oslo_concurrency.lockutils [req-2319a47a-529f-4840-b3da-b1d9706d0c00 req-5c2457af-ed7c-48a5-a433-9a6c99a60da0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-28de1d7d-8395-4b6a-b203-54bc32800fee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.426 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.426 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.427 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] No VIF found with MAC fa:16:3e:23:dc:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.428 239942 INFO nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Using config drive#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.461 239942 DEBUG nova.storage.rbd_utils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] rbd image 28de1d7d-8395-4b6a-b203-54bc32800fee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e487 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e487 do_prune osdmap full prune enabled
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e488 e488: 3 total, 3 up, 3 in
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e488: 3 total, 3 up, 3 in
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:01:58.836000) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835718836036, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1205, "num_deletes": 259, "total_data_size": 1642658, "memory_usage": 1669760, "flush_reason": "Manual Compaction"}
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835718844429, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1612907, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35085, "largest_seqno": 36289, "table_properties": {"data_size": 1606865, "index_size": 3309, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12968, "raw_average_key_size": 20, "raw_value_size": 1594727, "raw_average_value_size": 2487, "num_data_blocks": 146, "num_entries": 641, "num_filter_entries": 641, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769835634, "oldest_key_time": 1769835634, "file_creation_time": 1769835718, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 8479 microseconds, and 4404 cpu microseconds.
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:01:58.844479) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1612907 bytes OK
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:01:58.844499) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:01:58.846426) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:01:58.846448) EVENT_LOG_v1 {"time_micros": 1769835718846441, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:01:58.846471) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1637000, prev total WAL file size 1637000, number of live WAL files 2.
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:01:58.847090) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303139' seq:72057594037927935, type:22 .. '6C6F676D0031323730' seq:0, type:0; will stop at (end)
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1575KB)], [71(9815KB)]
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835718847132, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 11664427, "oldest_snapshot_seqno": -1}
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.865 239942 INFO nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Creating config drive at /var/lib/nova/instances/28de1d7d-8395-4b6a-b203-54bc32800fee/disk.config#033[00m
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.871 239942 DEBUG oslo_concurrency.processutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/28de1d7d-8395-4b6a-b203-54bc32800fee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzexkoir3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6839 keys, 11511755 bytes, temperature: kUnknown
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835718915636, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 11511755, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11459008, "index_size": 34523, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 172140, "raw_average_key_size": 25, "raw_value_size": 11329288, "raw_average_value_size": 1656, "num_data_blocks": 1386, "num_entries": 6839, "num_filter_entries": 6839, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769835718, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:01:58.915977) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 11511755 bytes
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:01:58.917741) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.0 rd, 167.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.6 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(14.4) write-amplify(7.1) OK, records in: 7372, records dropped: 533 output_compression: NoCompression
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:01:58.917763) EVENT_LOG_v1 {"time_micros": 1769835718917751, "job": 40, "event": "compaction_finished", "compaction_time_micros": 68604, "compaction_time_cpu_micros": 32314, "output_level": 6, "num_output_files": 1, "total_output_size": 11511755, "num_input_records": 7372, "num_output_records": 6839, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835718918263, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835718920007, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:01:58.846981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:01:58.920135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:01:58.920142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:01:58.920144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:01:58.920147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:01:58 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:01:58.920149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:01:58 np0005603435 nova_compute[239938]: 2026-01-31 05:01:58.997 239942 DEBUG oslo_concurrency.processutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/28de1d7d-8395-4b6a-b203-54bc32800fee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzexkoir3" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.074 239942 DEBUG nova.storage.rbd_utils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] rbd image 28de1d7d-8395-4b6a-b203-54bc32800fee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.081 239942 DEBUG oslo_concurrency.processutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/28de1d7d-8395-4b6a-b203-54bc32800fee/disk.config 28de1d7d-8395-4b6a-b203-54bc32800fee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.215 239942 DEBUG oslo_concurrency.processutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/28de1d7d-8395-4b6a-b203-54bc32800fee/disk.config 28de1d7d-8395-4b6a-b203-54bc32800fee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.217 239942 INFO nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Deleting local config drive /var/lib/nova/instances/28de1d7d-8395-4b6a-b203-54bc32800fee/disk.config because it was imported into RBD.#033[00m
Jan 31 00:01:59 np0005603435 kernel: tapa39cd9f4-e4: entered promiscuous mode
Jan 31 00:01:59 np0005603435 NetworkManager[49097]: <info>  [1769835719.2838] manager: (tapa39cd9f4-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/136)
Jan 31 00:01:59 np0005603435 ovn_controller[145670]: 2026-01-31T05:01:59Z|00267|binding|INFO|Claiming lport a39cd9f4-e464-424a-85e2-9a5c357fe652 for this chassis.
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.305 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:59 np0005603435 ovn_controller[145670]: 2026-01-31T05:01:59Z|00268|binding|INFO|a39cd9f4-e464-424a-85e2-9a5c357fe652: Claiming fa:16:3e:23:dc:fd 10.100.0.10
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.312 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.319 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:dc:fd 10.100.0.10'], port_security=['fa:16:3e:23:dc:fd 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '28de1d7d-8395-4b6a-b203-54bc32800fee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f25b83f-b794-417e-88e7-d89c680f473d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48935f8745744c4ba5400c13f80e0379', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7b59f016-9fba-4b72-aa35-0db4493e20dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94c57d33-0e3a-4b86-87cd-ae1ca9bb064d, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=a39cd9f4-e464-424a-85e2-9a5c357fe652) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.321 156017 INFO neutron.agent.ovn.metadata.agent [-] Port a39cd9f4-e464-424a-85e2-9a5c357fe652 in datapath 2f25b83f-b794-417e-88e7-d89c680f473d bound to our chassis#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.323 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f25b83f-b794-417e-88e7-d89c680f473d#033[00m
Jan 31 00:01:59 np0005603435 systemd-udevd[273483]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.335 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[bd03fcd7-60bc-4796-b1a1-20c7784a9833]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.336 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2f25b83f-b1 in ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.338 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2f25b83f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.338 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[cb294250-ad20-4672-868d-021943da5ebf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:59 np0005603435 systemd-machined[208030]: New machine qemu-28-instance-0000001c.
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.341 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a4bb0b-e361-4bcf-b3e7-531b80a50d23]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:59 np0005603435 NetworkManager[49097]: <info>  [1769835719.3488] device (tapa39cd9f4-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 00:01:59 np0005603435 NetworkManager[49097]: <info>  [1769835719.3497] device (tapa39cd9f4-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 00:01:59 np0005603435 systemd[1]: Started Virtual Machine qemu-28-instance-0000001c.
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.356 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[83baaf45-89d8-4bf7-b922-0591dfefea4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:59 np0005603435 ovn_controller[145670]: 2026-01-31T05:01:59Z|00269|binding|INFO|Setting lport a39cd9f4-e464-424a-85e2-9a5c357fe652 ovn-installed in OVS
Jan 31 00:01:59 np0005603435 ovn_controller[145670]: 2026-01-31T05:01:59Z|00270|binding|INFO|Setting lport a39cd9f4-e464-424a-85e2-9a5c357fe652 up in Southbound
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.363 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.379 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[1020550e-09a9-4e8f-a8e6-6874d1adbbd5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.402 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[4b21d0e3-823d-4d6a-9a9e-5c36d4c3bee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.409 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4fd466-b312-4e89-87b4-0df30527ba99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:59 np0005603435 NetworkManager[49097]: <info>  [1769835719.4101] manager: (tap2f25b83f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/137)
Jan 31 00:01:59 np0005603435 systemd-udevd[273488]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.434 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[a61c46de-87b4-44a4-b6b5-25517257531e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.437 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[a522aeb5-bbf9-413c-aafe-f4aa7d174a97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:59 np0005603435 NetworkManager[49097]: <info>  [1769835719.4533] device (tap2f25b83f-b0): carrier: link connected
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.455 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[581b9b02-3700-48a4-a9b9-c92bd3114e27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.468 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd81c7e-b098-4559-a62e-8dafed70c276]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f25b83f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:19:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477385, 'reachable_time': 40755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273517, 'error': None, 'target': 'ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.483 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[fb75642e-4984-4e55-baae-abc05916fae9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:1905'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477385, 'tstamp': 477385}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273518, 'error': None, 'target': 'ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.497 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[1bfb5230-490d-47ca-b44f-0e1c90183624]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f25b83f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:19:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477385, 'reachable_time': 40755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273519, 'error': None, 'target': 'ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.529 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2a86df4c-e6b9-49cb-8ef4-6c67fd4961bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.585 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[3bae73f0-6fbd-4adf-8b8e-65f40e4402a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.587 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f25b83f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.588 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.588 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f25b83f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:01:59 np0005603435 kernel: tap2f25b83f-b0: entered promiscuous mode
Jan 31 00:01:59 np0005603435 NetworkManager[49097]: <info>  [1769835719.5932] manager: (tap2f25b83f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.592 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.598 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 202 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.5 KiB/s wr, 53 op/s
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.600 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f25b83f-b0, col_values=(('external_ids', {'iface-id': '9bf21700-cf87-40d9-96a1-5af6970f25f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.601 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:59 np0005603435 ovn_controller[145670]: 2026-01-31T05:01:59Z|00271|binding|INFO|Releasing lport 9bf21700-cf87-40d9-96a1-5af6970f25f7 from this chassis (sb_readonly=0)
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.610 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.611 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.613 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2f25b83f-b794-417e-88e7-d89c680f473d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2f25b83f-b794-417e-88e7-d89c680f473d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.614 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e1dff9-18c9-4d71-baca-3eb8676e8d24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.616 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: global
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-2f25b83f-b794-417e-88e7-d89c680f473d
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/2f25b83f-b794-417e-88e7-d89c680f473d.pid.haproxy
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 2f25b83f-b794-417e-88e7-d89c680f473d
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 00:01:59 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:01:59.617 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d', 'env', 'PROCESS_TAG=haproxy-2f25b83f-b794-417e-88e7-d89c680f473d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2f25b83f-b794-417e-88e7-d89c680f473d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.716 239942 DEBUG nova.compute.manager [req-b926afe3-1f2d-45bc-ad71-bc50d1c8a13b req-eb324384-4de9-4ec0-9f09-063e7c8e4d42 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Received event network-vif-plugged-a39cd9f4-e464-424a-85e2-9a5c357fe652 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.717 239942 DEBUG oslo_concurrency.lockutils [req-b926afe3-1f2d-45bc-ad71-bc50d1c8a13b req-eb324384-4de9-4ec0-9f09-063e7c8e4d42 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.717 239942 DEBUG oslo_concurrency.lockutils [req-b926afe3-1f2d-45bc-ad71-bc50d1c8a13b req-eb324384-4de9-4ec0-9f09-063e7c8e4d42 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.717 239942 DEBUG oslo_concurrency.lockutils [req-b926afe3-1f2d-45bc-ad71-bc50d1c8a13b req-eb324384-4de9-4ec0-9f09-063e7c8e4d42 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:01:59 np0005603435 nova_compute[239938]: 2026-01-31 05:01:59.718 239942 DEBUG nova.compute.manager [req-b926afe3-1f2d-45bc-ad71-bc50d1c8a13b req-eb324384-4de9-4ec0-9f09-063e7c8e4d42 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Processing event network-vif-plugged-a39cd9f4-e464-424a-85e2-9a5c357fe652 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 00:01:59 np0005603435 podman[273587]: 2026-01-31 05:01:59.954307043 +0000 UTC m=+0.047665404 container create c5f96299245dc987b1a9f81a0605bdd0c0f24075f4f65c3ca30077623a7f6e60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 00:02:00 np0005603435 systemd[1]: Started libpod-conmon-c5f96299245dc987b1a9f81a0605bdd0c0f24075f4f65c3ca30077623a7f6e60.scope.
Jan 31 00:02:00 np0005603435 podman[273587]: 2026-01-31 05:01:59.92955819 +0000 UTC m=+0.022916521 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 00:02:00 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:02:00 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f70cf35fffb7dd749e697f4e85477a0cc65b0a4c8b3b0c3322176c54b00d12a9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 00:02:00 np0005603435 podman[273587]: 2026-01-31 05:02:00.083773799 +0000 UTC m=+0.177132220 container init c5f96299245dc987b1a9f81a0605bdd0c0f24075f4f65c3ca30077623a7f6e60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:02:00 np0005603435 podman[273587]: 2026-01-31 05:02:00.087821947 +0000 UTC m=+0.181180318 container start c5f96299245dc987b1a9f81a0605bdd0c0f24075f4f65c3ca30077623a7f6e60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 00:02:00 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[273603]: [NOTICE]   (273607) : New worker (273609) forked
Jan 31 00:02:00 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[273603]: [NOTICE]   (273607) : Loading success.
Jan 31 00:02:00 np0005603435 nova_compute[239938]: 2026-01-31 05:02:00.806 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 202 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 20 KiB/s wr, 83 op/s
Jan 31 00:02:01 np0005603435 nova_compute[239938]: 2026-01-31 05:02:01.850 239942 DEBUG nova.compute.manager [req-aab6dd3b-ebf0-41b6-bbac-50d006341ca2 req-8c9fa9ec-ce56-4608-bbe2-95131138b84c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Received event network-vif-plugged-a39cd9f4-e464-424a-85e2-9a5c357fe652 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:02:01 np0005603435 nova_compute[239938]: 2026-01-31 05:02:01.851 239942 DEBUG oslo_concurrency.lockutils [req-aab6dd3b-ebf0-41b6-bbac-50d006341ca2 req-8c9fa9ec-ce56-4608-bbe2-95131138b84c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:02:01 np0005603435 nova_compute[239938]: 2026-01-31 05:02:01.851 239942 DEBUG oslo_concurrency.lockutils [req-aab6dd3b-ebf0-41b6-bbac-50d006341ca2 req-8c9fa9ec-ce56-4608-bbe2-95131138b84c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:02:01 np0005603435 nova_compute[239938]: 2026-01-31 05:02:01.852 239942 DEBUG oslo_concurrency.lockutils [req-aab6dd3b-ebf0-41b6-bbac-50d006341ca2 req-8c9fa9ec-ce56-4608-bbe2-95131138b84c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:02:01 np0005603435 nova_compute[239938]: 2026-01-31 05:02:01.852 239942 DEBUG nova.compute.manager [req-aab6dd3b-ebf0-41b6-bbac-50d006341ca2 req-8c9fa9ec-ce56-4608-bbe2-95131138b84c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] No waiting events found dispatching network-vif-plugged-a39cd9f4-e464-424a-85e2-9a5c357fe652 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 00:02:01 np0005603435 nova_compute[239938]: 2026-01-31 05:02:01.852 239942 WARNING nova.compute.manager [req-aab6dd3b-ebf0-41b6-bbac-50d006341ca2 req-8c9fa9ec-ce56-4608-bbe2-95131138b84c c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Received unexpected event network-vif-plugged-a39cd9f4-e464-424a-85e2-9a5c357fe652 for instance with vm_state building and task_state spawning.#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.220 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835722.219978, 28de1d7d-8395-4b6a-b203-54bc32800fee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.221 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] VM Started (Lifecycle Event)#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.224 239942 DEBUG nova.compute.manager [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.229 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.234 239942 INFO nova.virt.libvirt.driver [-] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Instance spawned successfully.#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.234 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.239 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.243 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.259 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.259 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.260 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.261 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.261 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.262 239942 DEBUG nova.virt.libvirt.driver [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.268 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.268 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835722.2213862, 28de1d7d-8395-4b6a-b203-54bc32800fee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.269 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] VM Paused (Lifecycle Event)#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.293 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.302 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835722.2283497, 28de1d7d-8395-4b6a-b203-54bc32800fee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.303 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] VM Resumed (Lifecycle Event)#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.322 239942 INFO nova.compute.manager [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Took 6.59 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.323 239942 DEBUG nova.compute.manager [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.324 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.331 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.369 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.393 239942 INFO nova.compute.manager [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Took 10.01 seconds to build instance.#033[00m
Jan 31 00:02:02 np0005603435 nova_compute[239938]: 2026-01-31 05:02:02.408 239942 DEBUG oslo_concurrency.lockutils [None req-1db2fd3e-7e4b-40bc-a864-1549acf78408 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "28de1d7d-8395-4b6a-b203-54bc32800fee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:02:03 np0005603435 nova_compute[239938]: 2026-01-31 05:02:03.359 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 202 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 21 KiB/s wr, 57 op/s
Jan 31 00:02:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e488 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:02:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e488 do_prune osdmap full prune enabled
Jan 31 00:02:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e489 e489: 3 total, 3 up, 3 in
Jan 31 00:02:03 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e489: 3 total, 3 up, 3 in
Jan 31 00:02:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:02:04 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/402547558' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:02:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e489 do_prune osdmap full prune enabled
Jan 31 00:02:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e490 e490: 3 total, 3 up, 3 in
Jan 31 00:02:04 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e490: 3 total, 3 up, 3 in
Jan 31 00:02:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 202 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 25 KiB/s wr, 101 op/s
Jan 31 00:02:05 np0005603435 nova_compute[239938]: 2026-01-31 05:02:05.808 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e490 do_prune osdmap full prune enabled
Jan 31 00:02:05 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e491 e491: 3 total, 3 up, 3 in
Jan 31 00:02:05 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e491: 3 total, 3 up, 3 in
Jan 31 00:02:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_05:02:06
Jan 31 00:02:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 00:02:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 31 00:02:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['.mgr', 'vms', 'volumes', 'backups', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 31 00:02:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 00:02:06 np0005603435 nova_compute[239938]: 2026-01-31 05:02:06.848 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:06 np0005603435 NetworkManager[49097]: <info>  [1769835726.8538] manager: (patch-br-int-to-provnet-60fd0649-1231-4daa-859b-756d523d6d78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Jan 31 00:02:06 np0005603435 NetworkManager[49097]: <info>  [1769835726.8557] manager: (patch-provnet-60fd0649-1231-4daa-859b-756d523d6d78-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Jan 31 00:02:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e491 do_prune osdmap full prune enabled
Jan 31 00:02:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e492 e492: 3 total, 3 up, 3 in
Jan 31 00:02:06 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e492: 3 total, 3 up, 3 in
Jan 31 00:02:06 np0005603435 nova_compute[239938]: 2026-01-31 05:02:06.897 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:06 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:06Z|00272|binding|INFO|Releasing lport 9bf21700-cf87-40d9-96a1-5af6970f25f7 from this chassis (sb_readonly=0)
Jan 31 00:02:06 np0005603435 nova_compute[239938]: 2026-01-31 05:02:06.909 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:02:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:02:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:02:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:02:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:02:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:02:07 np0005603435 nova_compute[239938]: 2026-01-31 05:02:07.110 239942 DEBUG nova.compute.manager [req-fc72f81d-a86c-4032-86e4-14d2b406f121 req-2ba3e8ba-06dd-4a62-8774-35441e8f9631 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Received event network-changed-a39cd9f4-e464-424a-85e2-9a5c357fe652 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:02:07 np0005603435 nova_compute[239938]: 2026-01-31 05:02:07.110 239942 DEBUG nova.compute.manager [req-fc72f81d-a86c-4032-86e4-14d2b406f121 req-2ba3e8ba-06dd-4a62-8774-35441e8f9631 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Refreshing instance network info cache due to event network-changed-a39cd9f4-e464-424a-85e2-9a5c357fe652. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 00:02:07 np0005603435 nova_compute[239938]: 2026-01-31 05:02:07.111 239942 DEBUG oslo_concurrency.lockutils [req-fc72f81d-a86c-4032-86e4-14d2b406f121 req-2ba3e8ba-06dd-4a62-8774-35441e8f9631 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-28de1d7d-8395-4b6a-b203-54bc32800fee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 00:02:07 np0005603435 nova_compute[239938]: 2026-01-31 05:02:07.111 239942 DEBUG oslo_concurrency.lockutils [req-fc72f81d-a86c-4032-86e4-14d2b406f121 req-2ba3e8ba-06dd-4a62-8774-35441e8f9631 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-28de1d7d-8395-4b6a-b203-54bc32800fee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 00:02:07 np0005603435 nova_compute[239938]: 2026-01-31 05:02:07.111 239942 DEBUG nova.network.neutron [req-fc72f81d-a86c-4032-86e4-14d2b406f121 req-2ba3e8ba-06dd-4a62-8774-35441e8f9631 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Refreshing network info cache for port a39cd9f4-e464-424a-85e2-9a5c357fe652 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 00:02:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 202 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 3.2 KiB/s wr, 244 op/s
Jan 31 00:02:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 00:02:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 00:02:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 00:02:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 00:02:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 00:02:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 00:02:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 00:02:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 00:02:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 00:02:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 00:02:08 np0005603435 nova_compute[239938]: 2026-01-31 05:02:08.362 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:08 np0005603435 nova_compute[239938]: 2026-01-31 05:02:08.506 239942 DEBUG nova.network.neutron [req-fc72f81d-a86c-4032-86e4-14d2b406f121 req-2ba3e8ba-06dd-4a62-8774-35441e8f9631 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Updated VIF entry in instance network info cache for port a39cd9f4-e464-424a-85e2-9a5c357fe652. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 00:02:08 np0005603435 nova_compute[239938]: 2026-01-31 05:02:08.507 239942 DEBUG nova.network.neutron [req-fc72f81d-a86c-4032-86e4-14d2b406f121 req-2ba3e8ba-06dd-4a62-8774-35441e8f9631 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Updating instance_info_cache with network_info: [{"id": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "address": "fa:16:3e:23:dc:fd", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa39cd9f4-e4", "ovs_interfaceid": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 00:02:08 np0005603435 nova_compute[239938]: 2026-01-31 05:02:08.649 239942 DEBUG oslo_concurrency.lockutils [req-fc72f81d-a86c-4032-86e4-14d2b406f121 req-2ba3e8ba-06dd-4a62-8774-35441e8f9631 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-28de1d7d-8395-4b6a-b203-54bc32800fee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 00:02:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e492 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:02:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e492 do_prune osdmap full prune enabled
Jan 31 00:02:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e493 e493: 3 total, 3 up, 3 in
Jan 31 00:02:08 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e493: 3 total, 3 up, 3 in
Jan 31 00:02:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:02:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3929073677' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:02:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:02:09 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3929073677' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:02:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 202 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.7 KiB/s wr, 206 op/s
Jan 31 00:02:10 np0005603435 podman[273626]: 2026-01-31 05:02:10.118713026 +0000 UTC m=+0.085585134 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 00:02:10 np0005603435 podman[273625]: 2026-01-31 05:02:10.12931497 +0000 UTC m=+0.094870567 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:02:10 np0005603435 nova_compute[239938]: 2026-01-31 05:02:10.811 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 202 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 KiB/s wr, 122 op/s
Jan 31 00:02:12 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:12Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:dc:fd 10.100.0.10
Jan 31 00:02:12 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:12Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:dc:fd 10.100.0.10
Jan 31 00:02:13 np0005603435 nova_compute[239938]: 2026-01-31 05:02:13.364 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 202 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 152 op/s
Jan 31 00:02:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:02:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 225 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1020 KiB/s rd, 3.0 MiB/s wr, 121 op/s
Jan 31 00:02:15 np0005603435 nova_compute[239938]: 2026-01-31 05:02:15.814 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:02:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4042198712' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.0957411741025825e-05 of space, bias 1.0, pg target 0.0032872235223077475 quantized to 32 (current 32)
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0023496656311655165 of space, bias 1.0, pg target 0.704899689349655 quantized to 32 (current 32)
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.1834454286484279e-06 of space, bias 1.0, pg target 0.00035503362859452836 quantized to 32 (current 32)
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669376979638351 of space, bias 1.0, pg target 0.20008130938915053 quantized to 32 (current 32)
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.268499774936587e-07 of space, bias 4.0, pg target 0.0009922199729923906 quantized to 16 (current 16)
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 00:02:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 638 KiB/s rd, 7.0 MiB/s wr, 133 op/s
Jan 31 00:02:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e493 do_prune osdmap full prune enabled
Jan 31 00:02:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e494 e494: 3 total, 3 up, 3 in
Jan 31 00:02:17 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e494: 3 total, 3 up, 3 in
Jan 31 00:02:18 np0005603435 nova_compute[239938]: 2026-01-31 05:02:18.416 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:02:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e494 do_prune osdmap full prune enabled
Jan 31 00:02:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e495 e495: 3 total, 3 up, 3 in
Jan 31 00:02:18 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e495: 3 total, 3 up, 3 in
Jan 31 00:02:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 792 KiB/s rd, 8.7 MiB/s wr, 155 op/s
Jan 31 00:02:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e495 do_prune osdmap full prune enabled
Jan 31 00:02:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e496 e496: 3 total, 3 up, 3 in
Jan 31 00:02:19 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e496: 3 total, 3 up, 3 in
Jan 31 00:02:20 np0005603435 nova_compute[239938]: 2026-01-31 05:02:20.867 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e496 do_prune osdmap full prune enabled
Jan 31 00:02:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e497 e497: 3 total, 3 up, 3 in
Jan 31 00:02:20 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e497: 3 total, 3 up, 3 in
Jan 31 00:02:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:02:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/81586148' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:02:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:02:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/81586148' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:02:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 4.5 KiB/s rd, 13 KiB/s wr, 6 op/s
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:02:22 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:02:23.009858) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835743009887, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 583, "num_deletes": 254, "total_data_size": 516231, "memory_usage": 526984, "flush_reason": "Manual Compaction"}
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835743014439, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 509807, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36290, "largest_seqno": 36872, "table_properties": {"data_size": 506562, "index_size": 1153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7891, "raw_average_key_size": 19, "raw_value_size": 499904, "raw_average_value_size": 1259, "num_data_blocks": 51, "num_entries": 397, "num_filter_entries": 397, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769835719, "oldest_key_time": 1769835719, "file_creation_time": 1769835743, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 4618 microseconds, and 1788 cpu microseconds.
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:02:23.014476) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 509807 bytes OK
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:02:23.014493) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:02:23.015912) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:02:23.015924) EVENT_LOG_v1 {"time_micros": 1769835743015920, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:02:23.015938) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 512940, prev total WAL file size 512940, number of live WAL files 2.
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:02:23.016381) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(497KB)], [74(10MB)]
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835743016428, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12021562, "oldest_snapshot_seqno": -1}
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6714 keys, 10362329 bytes, temperature: kUnknown
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835743078253, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10362329, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10311662, "index_size": 32763, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 170322, "raw_average_key_size": 25, "raw_value_size": 10185299, "raw_average_value_size": 1517, "num_data_blocks": 1302, "num_entries": 6714, "num_filter_entries": 6714, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769833065, "oldest_key_time": 0, "file_creation_time": 1769835743, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d0537d2c-2cd6-4bba-ac0c-1207e35f0dbd", "db_session_id": "NJWQW6YWV3BHT45TVIYK", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:02:23.078702) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10362329 bytes
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:02:23.080460) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.7 rd, 166.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.0 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(43.9) write-amplify(20.3) OK, records in: 7236, records dropped: 522 output_compression: NoCompression
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:02:23.080488) EVENT_LOG_v1 {"time_micros": 1769835743080476, "job": 42, "event": "compaction_finished", "compaction_time_micros": 62077, "compaction_time_cpu_micros": 16427, "output_level": 6, "num_output_files": 1, "total_output_size": 10362329, "num_input_records": 7236, "num_output_records": 6714, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835743080676, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769835743082380, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:02:23.016322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:02:23.082449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:02:23.082453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:02:23.082455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:02:23.082457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: rocksdb: (Original Log Time 2026/01/31-05:02:23.082460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.158 239942 DEBUG oslo_concurrency.lockutils [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "28de1d7d-8395-4b6a-b203-54bc32800fee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.159 239942 DEBUG oslo_concurrency.lockutils [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "28de1d7d-8395-4b6a-b203-54bc32800fee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.160 239942 DEBUG oslo_concurrency.lockutils [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.160 239942 DEBUG oslo_concurrency.lockutils [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.160 239942 DEBUG oslo_concurrency.lockutils [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.162 239942 INFO nova.compute.manager [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Terminating instance#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.164 239942 DEBUG nova.compute.manager [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 00:02:23 np0005603435 kernel: tapa39cd9f4-e4 (unregistering): left promiscuous mode
Jan 31 00:02:23 np0005603435 NetworkManager[49097]: <info>  [1769835743.2175] device (tapa39cd9f4-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 00:02:23 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:23Z|00273|binding|INFO|Releasing lport a39cd9f4-e464-424a-85e2-9a5c357fe652 from this chassis (sb_readonly=0)
Jan 31 00:02:23 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:23Z|00274|binding|INFO|Setting lport a39cd9f4-e464-424a-85e2-9a5c357fe652 down in Southbound
Jan 31 00:02:23 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:23Z|00275|binding|INFO|Removing iface tapa39cd9f4-e4 ovn-installed in OVS
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.280 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.286 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:23.289 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:dc:fd 10.100.0.10'], port_security=['fa:16:3e:23:dc:fd 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '28de1d7d-8395-4b6a-b203-54bc32800fee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f25b83f-b794-417e-88e7-d89c680f473d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48935f8745744c4ba5400c13f80e0379', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7b59f016-9fba-4b72-aa35-0db4493e20dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94c57d33-0e3a-4b86-87cd-ae1ca9bb064d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=a39cd9f4-e464-424a-85e2-9a5c357fe652) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 00:02:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:23.291 156017 INFO neutron.agent.ovn.metadata.agent [-] Port a39cd9f4-e464-424a-85e2-9a5c357fe652 in datapath 2f25b83f-b794-417e-88e7-d89c680f473d unbound from our chassis#033[00m
Jan 31 00:02:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:23.293 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f25b83f-b794-417e-88e7-d89c680f473d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 00:02:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:23.294 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[19ff27e5-a08a-4fec-8ec1-1fd933e7f58a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:23.296 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d namespace which is not needed anymore#033[00m
Jan 31 00:02:23 np0005603435 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Jan 31 00:02:23 np0005603435 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Consumed 14.580s CPU time.
Jan 31 00:02:23 np0005603435 systemd-machined[208030]: Machine qemu-28-instance-0000001c terminated.
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.403 239942 INFO nova.virt.libvirt.driver [-] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Instance destroyed successfully.#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.405 239942 DEBUG nova.objects.instance [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lazy-loading 'resources' on Instance uuid 28de1d7d-8395-4b6a-b203-54bc32800fee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 00:02:23 np0005603435 podman[273831]: 2026-01-31 05:02:23.416840784 +0000 UTC m=+0.053579976 container create 1f698ced4e9dae7d5396d1aa5129385f5950632dff236bd62d9cf25f494e0ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ardinghelli, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.416 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.421 239942 DEBUG nova.virt.libvirt.vif [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T05:01:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-692053603',display_name='tempest-TestEncryptedCinderVolumes-server-692053603',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-692053603',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAmr0MUFNJjz18mvNHr0kofSqXL+MOUCKmtJGcrQVuZqzDEVyxUUFebchvjqsqS9tyThgYSCkXKWLzTW0ED0WOyTQNQBDzi5dd8NYQAYU+nK8F6As1qr5NixmuIDexDl8Q==',key_name='tempest-TestEncryptedCinderVolumes-1017268198',keypairs=<?>,launch_index=0,launched_at=2026-01-31T05:02:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48935f8745744c4ba5400c13f80e0379',ramdisk_id='',reservation_id='r-gi4xc2ud',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-1466370108',owner_user_name='tempest-TestEncryptedCinderVolumes-1466370108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T05:02:02Z,user_data=None,user_id='6784d92c92b24526a302a1a74a813c76',uuid=28de1d7d-8395-4b6a-b203-54bc32800fee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "address": "fa:16:3e:23:dc:fd", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa39cd9f4-e4", "ovs_interfaceid": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.422 239942 DEBUG nova.network.os_vif_util [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converting VIF {"id": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "address": "fa:16:3e:23:dc:fd", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa39cd9f4-e4", "ovs_interfaceid": "a39cd9f4-e464-424a-85e2-9a5c357fe652", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.424 239942 DEBUG nova.network.os_vif_util [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:dc:fd,bridge_name='br-int',has_traffic_filtering=True,id=a39cd9f4-e464-424a-85e2-9a5c357fe652,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa39cd9f4-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.425 239942 DEBUG os_vif [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:dc:fd,bridge_name='br-int',has_traffic_filtering=True,id=a39cd9f4-e464-424a-85e2-9a5c357fe652,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa39cd9f4-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.428 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.428 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa39cd9f4-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.431 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.433 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.436 239942 INFO os_vif [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:dc:fd,bridge_name='br-int',has_traffic_filtering=True,id=a39cd9f4-e464-424a-85e2-9a5c357fe652,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa39cd9f4-e4')#033[00m
Jan 31 00:02:23 np0005603435 systemd[1]: Started libpod-conmon-1f698ced4e9dae7d5396d1aa5129385f5950632dff236bd62d9cf25f494e0ca9.scope.
Jan 31 00:02:23 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[273603]: [NOTICE]   (273607) : haproxy version is 2.8.14-c23fe91
Jan 31 00:02:23 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[273603]: [NOTICE]   (273607) : path to executable is /usr/sbin/haproxy
Jan 31 00:02:23 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[273603]: [WARNING]  (273607) : Exiting Master process...
Jan 31 00:02:23 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[273603]: [WARNING]  (273607) : Exiting Master process...
Jan 31 00:02:23 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[273603]: [ALERT]    (273607) : Current worker (273609) exited with code 143 (Terminated)
Jan 31 00:02:23 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[273603]: [WARNING]  (273607) : All workers exited. Exiting... (0)
Jan 31 00:02:23 np0005603435 systemd[1]: libpod-c5f96299245dc987b1a9f81a0605bdd0c0f24075f4f65c3ca30077623a7f6e60.scope: Deactivated successfully.
Jan 31 00:02:23 np0005603435 conmon[273603]: conmon c5f96299245dc987b1a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5f96299245dc987b1a9f81a0605bdd0c0f24075f4f65c3ca30077623a7f6e60.scope/container/memory.events
Jan 31 00:02:23 np0005603435 podman[273862]: 2026-01-31 05:02:23.468533654 +0000 UTC m=+0.052208613 container died c5f96299245dc987b1a9f81a0605bdd0c0f24075f4f65c3ca30077623a7f6e60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 00:02:23 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:02:23 np0005603435 podman[273831]: 2026-01-31 05:02:23.395278197 +0000 UTC m=+0.032017399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:02:23 np0005603435 podman[273831]: 2026-01-31 05:02:23.491539036 +0000 UTC m=+0.128278228 container init 1f698ced4e9dae7d5396d1aa5129385f5950632dff236bd62d9cf25f494e0ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ardinghelli, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 00:02:23 np0005603435 podman[273831]: 2026-01-31 05:02:23.498070623 +0000 UTC m=+0.134809815 container start 1f698ced4e9dae7d5396d1aa5129385f5950632dff236bd62d9cf25f494e0ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 00:02:23 np0005603435 amazing_ardinghelli[273895]: 167 167
Jan 31 00:02:23 np0005603435 systemd[1]: libpod-1f698ced4e9dae7d5396d1aa5129385f5950632dff236bd62d9cf25f494e0ca9.scope: Deactivated successfully.
Jan 31 00:02:23 np0005603435 conmon[273895]: conmon 1f698ced4e9dae7d5396 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f698ced4e9dae7d5396d1aa5129385f5950632dff236bd62d9cf25f494e0ca9.scope/container/memory.events
Jan 31 00:02:23 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c5f96299245dc987b1a9f81a0605bdd0c0f24075f4f65c3ca30077623a7f6e60-userdata-shm.mount: Deactivated successfully.
Jan 31 00:02:23 np0005603435 systemd[1]: var-lib-containers-storage-overlay-f70cf35fffb7dd749e697f4e85477a0cc65b0a4c8b3b0c3322176c54b00d12a9-merged.mount: Deactivated successfully.
Jan 31 00:02:23 np0005603435 podman[273831]: 2026-01-31 05:02:23.515253135 +0000 UTC m=+0.151992317 container attach 1f698ced4e9dae7d5396d1aa5129385f5950632dff236bd62d9cf25f494e0ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ardinghelli, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 00:02:23 np0005603435 podman[273862]: 2026-01-31 05:02:23.528892622 +0000 UTC m=+0.112567621 container cleanup c5f96299245dc987b1a9f81a0605bdd0c0f24075f4f65c3ca30077623a7f6e60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 00:02:23 np0005603435 podman[273831]: 2026-01-31 05:02:23.535389688 +0000 UTC m=+0.172128880 container died 1f698ced4e9dae7d5396d1aa5129385f5950632dff236bd62d9cf25f494e0ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 00:02:23 np0005603435 systemd[1]: libpod-conmon-c5f96299245dc987b1a9f81a0605bdd0c0f24075f4f65c3ca30077623a7f6e60.scope: Deactivated successfully.
Jan 31 00:02:23 np0005603435 systemd[1]: var-lib-containers-storage-overlay-95a4538188600058f53fa34a167c554374756ebe58a98013b678dc37e4f5a393-merged.mount: Deactivated successfully.
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.575 239942 DEBUG nova.compute.manager [req-a253e98e-1715-4741-90fe-7139c8cb40e2 req-c7580552-2cdc-41d3-8a6b-b64c6bb319a0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Received event network-vif-unplugged-a39cd9f4-e464-424a-85e2-9a5c357fe652 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.576 239942 DEBUG oslo_concurrency.lockutils [req-a253e98e-1715-4741-90fe-7139c8cb40e2 req-c7580552-2cdc-41d3-8a6b-b64c6bb319a0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.576 239942 DEBUG oslo_concurrency.lockutils [req-a253e98e-1715-4741-90fe-7139c8cb40e2 req-c7580552-2cdc-41d3-8a6b-b64c6bb319a0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.576 239942 DEBUG oslo_concurrency.lockutils [req-a253e98e-1715-4741-90fe-7139c8cb40e2 req-c7580552-2cdc-41d3-8a6b-b64c6bb319a0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.577 239942 DEBUG nova.compute.manager [req-a253e98e-1715-4741-90fe-7139c8cb40e2 req-c7580552-2cdc-41d3-8a6b-b64c6bb319a0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] No waiting events found dispatching network-vif-unplugged-a39cd9f4-e464-424a-85e2-9a5c357fe652 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.577 239942 DEBUG nova.compute.manager [req-a253e98e-1715-4741-90fe-7139c8cb40e2 req-c7580552-2cdc-41d3-8a6b-b64c6bb319a0 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Received event network-vif-unplugged-a39cd9f4-e464-424a-85e2-9a5c357fe652 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 00:02:23 np0005603435 podman[273916]: 2026-01-31 05:02:23.593136894 +0000 UTC m=+0.076233480 container remove 1f698ced4e9dae7d5396d1aa5129385f5950632dff236bd62d9cf25f494e0ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 00:02:23 np0005603435 systemd[1]: libpod-conmon-1f698ced4e9dae7d5396d1aa5129385f5950632dff236bd62d9cf25f494e0ca9.scope: Deactivated successfully.
Jan 31 00:02:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 14 KiB/s wr, 103 op/s
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.616 239942 INFO nova.virt.libvirt.driver [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Deleting instance files /var/lib/nova/instances/28de1d7d-8395-4b6a-b203-54bc32800fee_del#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.616 239942 INFO nova.virt.libvirt.driver [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Deletion of /var/lib/nova/instances/28de1d7d-8395-4b6a-b203-54bc32800fee_del complete#033[00m
Jan 31 00:02:23 np0005603435 podman[273928]: 2026-01-31 05:02:23.622722333 +0000 UTC m=+0.074591170 container remove c5f96299245dc987b1a9f81a0605bdd0c0f24075f4f65c3ca30077623a7f6e60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 00:02:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:23.627 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf0ca7c-5f68-4713-9a4f-24095843c090]: (4, ('Sat Jan 31 05:02:23 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d (c5f96299245dc987b1a9f81a0605bdd0c0f24075f4f65c3ca30077623a7f6e60)\nc5f96299245dc987b1a9f81a0605bdd0c0f24075f4f65c3ca30077623a7f6e60\nSat Jan 31 05:02:23 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d (c5f96299245dc987b1a9f81a0605bdd0c0f24075f4f65c3ca30077623a7f6e60)\nc5f96299245dc987b1a9f81a0605bdd0c0f24075f4f65c3ca30077623a7f6e60\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:23.629 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[06e404e7-8a55-4562-8080-b22094070382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:23.629 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f25b83f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.631 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:23 np0005603435 kernel: tap2f25b83f-b0: left promiscuous mode
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.638 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:23.641 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d454d0-e32f-428e-9e19-988921c5c6d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:23.655 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[af68f389-7438-4078-b38a-48285f25b725]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:23.657 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[75a3f293-9f17-49bc-b064-11d176961d49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.669 239942 INFO nova.compute.manager [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Took 0.51 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.670 239942 DEBUG oslo.service.loopingcall [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.670 239942 DEBUG nova.compute.manager [-] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.670 239942 DEBUG nova.network.neutron [-] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 00:02:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:23.670 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[a12ce38c-5760-4cf8-b58e-6b25b849b14c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477380, 'reachable_time': 24503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273952, 'error': None, 'target': 'ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:23.672 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 00:02:23 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:23.672 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[e960bbc2-3b6e-442a-a33a-01016aa2e140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:23 np0005603435 systemd[1]: run-netns-ovnmeta\x2d2f25b83f\x2db794\x2d417e\x2d88e7\x2dd89c680f473d.mount: Deactivated successfully.
Jan 31 00:02:23 np0005603435 podman[273958]: 2026-01-31 05:02:23.73303308 +0000 UTC m=+0.032831579 container create c5b4b3ed3dd246d8e2a5714cfacf630ee21ee9c5831bc240d65855072dfa30d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 00:02:23 np0005603435 systemd[1]: Started libpod-conmon-c5b4b3ed3dd246d8e2a5714cfacf630ee21ee9c5831bc240d65855072dfa30d2.scope.
Jan 31 00:02:23 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:02:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b486a32fa00df15ccd76dff2714f7852c9df3cd6f41108df358bd68f3ae561ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 00:02:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b486a32fa00df15ccd76dff2714f7852c9df3cd6f41108df358bd68f3ae561ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 00:02:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b486a32fa00df15ccd76dff2714f7852c9df3cd6f41108df358bd68f3ae561ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 00:02:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b486a32fa00df15ccd76dff2714f7852c9df3cd6f41108df358bd68f3ae561ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 00:02:23 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b486a32fa00df15ccd76dff2714f7852c9df3cd6f41108df358bd68f3ae561ae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 00:02:23 np0005603435 podman[273958]: 2026-01-31 05:02:23.719952986 +0000 UTC m=+0.019751505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:02:23 np0005603435 podman[273958]: 2026-01-31 05:02:23.819090064 +0000 UTC m=+0.118888643 container init c5b4b3ed3dd246d8e2a5714cfacf630ee21ee9c5831bc240d65855072dfa30d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 00:02:23 np0005603435 podman[273958]: 2026-01-31 05:02:23.826049881 +0000 UTC m=+0.125848410 container start c5b4b3ed3dd246d8e2a5714cfacf630ee21ee9c5831bc240d65855072dfa30d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 00:02:23 np0005603435 podman[273958]: 2026-01-31 05:02:23.833135081 +0000 UTC m=+0.132933640 container attach c5b4b3ed3dd246d8e2a5714cfacf630ee21ee9c5831bc240d65855072dfa30d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lichterman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 00:02:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e497 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:02:23 np0005603435 nova_compute[239938]: 2026-01-31 05:02:23.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:02:24 np0005603435 suspicious_lichterman[273974]: --> passed data devices: 0 physical, 3 LVM
Jan 31 00:02:24 np0005603435 suspicious_lichterman[273974]: --> All data devices are unavailable
Jan 31 00:02:24 np0005603435 systemd[1]: libpod-c5b4b3ed3dd246d8e2a5714cfacf630ee21ee9c5831bc240d65855072dfa30d2.scope: Deactivated successfully.
Jan 31 00:02:24 np0005603435 conmon[273974]: conmon c5b4b3ed3dd246d8e2a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5b4b3ed3dd246d8e2a5714cfacf630ee21ee9c5831bc240d65855072dfa30d2.scope/container/memory.events
Jan 31 00:02:24 np0005603435 podman[273958]: 2026-01-31 05:02:24.270578956 +0000 UTC m=+0.570377515 container died c5b4b3ed3dd246d8e2a5714cfacf630ee21ee9c5831bc240d65855072dfa30d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lichterman, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 00:02:24 np0005603435 podman[273958]: 2026-01-31 05:02:24.311744003 +0000 UTC m=+0.611542542 container remove c5b4b3ed3dd246d8e2a5714cfacf630ee21ee9c5831bc240d65855072dfa30d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 00:02:24 np0005603435 systemd[1]: libpod-conmon-c5b4b3ed3dd246d8e2a5714cfacf630ee21ee9c5831bc240d65855072dfa30d2.scope: Deactivated successfully.
Jan 31 00:02:24 np0005603435 podman[274069]: 2026-01-31 05:02:24.787857805 +0000 UTC m=+0.055220615 container create e21f5d2868bcc1976dd9c4dc9742da636af42ee00df1d767cef8e1c14a26e6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_rhodes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 00:02:24 np0005603435 systemd[1]: Started libpod-conmon-e21f5d2868bcc1976dd9c4dc9742da636af42ee00df1d767cef8e1c14a26e6e6.scope.
Jan 31 00:02:24 np0005603435 podman[274069]: 2026-01-31 05:02:24.761269627 +0000 UTC m=+0.028632487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:02:24 np0005603435 nova_compute[239938]: 2026-01-31 05:02:24.895 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:02:24 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:02:24 np0005603435 podman[274069]: 2026-01-31 05:02:24.912670379 +0000 UTC m=+0.180033239 container init e21f5d2868bcc1976dd9c4dc9742da636af42ee00df1d767cef8e1c14a26e6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_rhodes, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 00:02:24 np0005603435 nova_compute[239938]: 2026-01-31 05:02:24.914 239942 DEBUG nova.network.neutron [-] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 00:02:24 np0005603435 podman[274069]: 2026-01-31 05:02:24.921516531 +0000 UTC m=+0.188879341 container start e21f5d2868bcc1976dd9c4dc9742da636af42ee00df1d767cef8e1c14a26e6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_rhodes, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 00:02:24 np0005603435 podman[274069]: 2026-01-31 05:02:24.925160518 +0000 UTC m=+0.192523378 container attach e21f5d2868bcc1976dd9c4dc9742da636af42ee00df1d767cef8e1c14a26e6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 00:02:24 np0005603435 gifted_rhodes[274085]: 167 167
Jan 31 00:02:24 np0005603435 systemd[1]: libpod-e21f5d2868bcc1976dd9c4dc9742da636af42ee00df1d767cef8e1c14a26e6e6.scope: Deactivated successfully.
Jan 31 00:02:24 np0005603435 podman[274069]: 2026-01-31 05:02:24.92691646 +0000 UTC m=+0.194279260 container died e21f5d2868bcc1976dd9c4dc9742da636af42ee00df1d767cef8e1c14a26e6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_rhodes, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:02:24 np0005603435 nova_compute[239938]: 2026-01-31 05:02:24.937 239942 INFO nova.compute.manager [-] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Took 1.27 seconds to deallocate network for instance.#033[00m
Jan 31 00:02:24 np0005603435 systemd[1]: var-lib-containers-storage-overlay-52a9ef5509f12947d0686966fb94e32eeafda4aa9a4a3db00ae81d69a6fcbb38-merged.mount: Deactivated successfully.
Jan 31 00:02:24 np0005603435 podman[274069]: 2026-01-31 05:02:24.973663582 +0000 UTC m=+0.241026362 container remove e21f5d2868bcc1976dd9c4dc9742da636af42ee00df1d767cef8e1c14a26e6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 00:02:24 np0005603435 systemd[1]: libpod-conmon-e21f5d2868bcc1976dd9c4dc9742da636af42ee00df1d767cef8e1c14a26e6e6.scope: Deactivated successfully.
Jan 31 00:02:25 np0005603435 nova_compute[239938]: 2026-01-31 05:02:25.082 239942 DEBUG nova.compute.manager [req-5ffe2025-48d4-4ad9-9f79-c6bab5634224 req-92860452-e1ea-4c24-b468-d15742e93172 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Received event network-vif-deleted-a39cd9f4-e464-424a-85e2-9a5c357fe652 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:02:25 np0005603435 podman[274109]: 2026-01-31 05:02:25.157704257 +0000 UTC m=+0.056847095 container create 8dbc68d21c5c437c64e78ea8d33076c126073266c60c51dea04b38beea853681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_turing, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 00:02:25 np0005603435 systemd[1]: Started libpod-conmon-8dbc68d21c5c437c64e78ea8d33076c126073266c60c51dea04b38beea853681.scope.
Jan 31 00:02:25 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:02:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a31c390015e0cf3cb3bb6c749dd99c1fc5dfe82fb53b2c207e7f9b76bb2e79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 00:02:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a31c390015e0cf3cb3bb6c749dd99c1fc5dfe82fb53b2c207e7f9b76bb2e79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 00:02:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a31c390015e0cf3cb3bb6c749dd99c1fc5dfe82fb53b2c207e7f9b76bb2e79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 00:02:25 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a31c390015e0cf3cb3bb6c749dd99c1fc5dfe82fb53b2c207e7f9b76bb2e79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 00:02:25 np0005603435 podman[274109]: 2026-01-31 05:02:25.137533393 +0000 UTC m=+0.036676321 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:02:25 np0005603435 nova_compute[239938]: 2026-01-31 05:02:25.230 239942 INFO nova.compute.manager [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Took 0.29 seconds to detach 1 volumes for instance.#033[00m
Jan 31 00:02:25 np0005603435 podman[274109]: 2026-01-31 05:02:25.259446138 +0000 UTC m=+0.158588996 container init 8dbc68d21c5c437c64e78ea8d33076c126073266c60c51dea04b38beea853681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 00:02:25 np0005603435 podman[274109]: 2026-01-31 05:02:25.271877306 +0000 UTC m=+0.171020194 container start 8dbc68d21c5c437c64e78ea8d33076c126073266c60c51dea04b38beea853681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 00:02:25 np0005603435 nova_compute[239938]: 2026-01-31 05:02:25.273 239942 DEBUG oslo_concurrency.lockutils [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 00:02:25 np0005603435 nova_compute[239938]: 2026-01-31 05:02:25.274 239942 DEBUG oslo_concurrency.lockutils [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 00:02:25 np0005603435 podman[274109]: 2026-01-31 05:02:25.27703145 +0000 UTC m=+0.176174358 container attach 8dbc68d21c5c437c64e78ea8d33076c126073266c60c51dea04b38beea853681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 00:02:25 np0005603435 nova_compute[239938]: 2026-01-31 05:02:25.386 239942 DEBUG oslo_concurrency.processutils [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]: {
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:    "0": [
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:        {
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "devices": [
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "/dev/loop3"
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            ],
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "lv_name": "ceph_lv0",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "lv_size": "21470642176",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "name": "ceph_lv0",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "tags": {
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.cephx_lockbox_secret": "",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.cluster_name": "ceph",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.crush_device_class": "",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.encrypted": "0",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.objectstore": "bluestore",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.osd_id": "0",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.type": "block",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.vdo": "0",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.with_tpm": "0"
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            },
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "type": "block",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "vg_name": "ceph_vg0"
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:        }
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:    ],
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:    "1": [
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:        {
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "devices": [
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "/dev/loop4"
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            ],
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "lv_name": "ceph_lv1",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "lv_size": "21470642176",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "name": "ceph_lv1",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "tags": {
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.cephx_lockbox_secret": "",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.cluster_name": "ceph",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.crush_device_class": "",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.encrypted": "0",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.objectstore": "bluestore",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.osd_id": "1",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.type": "block",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.vdo": "0",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.with_tpm": "0"
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            },
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "type": "block",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "vg_name": "ceph_vg1"
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:        }
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:    ],
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:    "2": [
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:        {
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "devices": [
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "/dev/loop5"
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            ],
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "lv_name": "ceph_lv2",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "lv_size": "21470642176",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "name": "ceph_lv2",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "tags": {
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.cephx_lockbox_secret": "",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.cluster_name": "ceph",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.crush_device_class": "",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.encrypted": "0",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.objectstore": "bluestore",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.osd_id": "2",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.type": "block",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.vdo": "0",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:                "ceph.with_tpm": "0"
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            },
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "type": "block",
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:            "vg_name": "ceph_vg2"
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:        }
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]:    ]
Jan 31 00:02:25 np0005603435 mystifying_turing[274127]: }
Jan 31 00:02:25 np0005603435 systemd[1]: libpod-8dbc68d21c5c437c64e78ea8d33076c126073266c60c51dea04b38beea853681.scope: Deactivated successfully.
Jan 31 00:02:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 23 KiB/s wr, 107 op/s
Jan 31 00:02:25 np0005603435 podman[274156]: 2026-01-31 05:02:25.631893063 +0000 UTC m=+0.027475330 container died 8dbc68d21c5c437c64e78ea8d33076c126073266c60c51dea04b38beea853681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_turing, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 00:02:25 np0005603435 systemd[1]: var-lib-containers-storage-overlay-38a31c390015e0cf3cb3bb6c749dd99c1fc5dfe82fb53b2c207e7f9b76bb2e79-merged.mount: Deactivated successfully.
Jan 31 00:02:25 np0005603435 podman[274156]: 2026-01-31 05:02:25.68802541 +0000 UTC m=+0.083607667 container remove 8dbc68d21c5c437c64e78ea8d33076c126073266c60c51dea04b38beea853681 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 00:02:25 np0005603435 systemd[1]: libpod-conmon-8dbc68d21c5c437c64e78ea8d33076c126073266c60c51dea04b38beea853681.scope: Deactivated successfully.
Jan 31 00:02:25 np0005603435 nova_compute[239938]: 2026-01-31 05:02:25.869 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:02:25 np0005603435 nova_compute[239938]: 2026-01-31 05:02:25.885 239942 DEBUG nova.compute.manager [req-74305108-8605-4daa-9f17-361ad022a771 req-c40b9e3d-1c36-425f-8788-5d507271a465 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Received event network-vif-plugged-a39cd9f4-e464-424a-85e2-9a5c357fe652 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 00:02:25 np0005603435 nova_compute[239938]: 2026-01-31 05:02:25.886 239942 DEBUG oslo_concurrency.lockutils [req-74305108-8605-4daa-9f17-361ad022a771 req-c40b9e3d-1c36-425f-8788-5d507271a465 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 00:02:25 np0005603435 nova_compute[239938]: 2026-01-31 05:02:25.886 239942 DEBUG oslo_concurrency.lockutils [req-74305108-8605-4daa-9f17-361ad022a771 req-c40b9e3d-1c36-425f-8788-5d507271a465 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 00:02:25 np0005603435 nova_compute[239938]: 2026-01-31 05:02:25.886 239942 DEBUG oslo_concurrency.lockutils [req-74305108-8605-4daa-9f17-361ad022a771 req-c40b9e3d-1c36-425f-8788-5d507271a465 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "28de1d7d-8395-4b6a-b203-54bc32800fee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 00:02:25 np0005603435 nova_compute[239938]: 2026-01-31 05:02:25.886 239942 DEBUG nova.compute.manager [req-74305108-8605-4daa-9f17-361ad022a771 req-c40b9e3d-1c36-425f-8788-5d507271a465 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] No waiting events found dispatching network-vif-plugged-a39cd9f4-e464-424a-85e2-9a5c357fe652 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 00:02:25 np0005603435 nova_compute[239938]: 2026-01-31 05:02:25.886 239942 WARNING nova.compute.manager [req-74305108-8605-4daa-9f17-361ad022a771 req-c40b9e3d-1c36-425f-8788-5d507271a465 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Received unexpected event network-vif-plugged-a39cd9f4-e464-424a-85e2-9a5c357fe652 for instance with vm_state deleted and task_state None.
Jan 31 00:02:25 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:02:25 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2602561270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:02:25 np0005603435 nova_compute[239938]: 2026-01-31 05:02:25.957 239942 DEBUG oslo_concurrency.processutils [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 00:02:25 np0005603435 nova_compute[239938]: 2026-01-31 05:02:25.964 239942 DEBUG nova.compute.provider_tree [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 00:02:25 np0005603435 nova_compute[239938]: 2026-01-31 05:02:25.982 239942 DEBUG nova.scheduler.client.report [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 00:02:26 np0005603435 nova_compute[239938]: 2026-01-31 05:02:26.002 239942 DEBUG oslo_concurrency.lockutils [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 00:02:26 np0005603435 podman[274235]: 2026-01-31 05:02:26.108299772 +0000 UTC m=+0.052819038 container create b8b7454887f3f9d88ec297269bbbdd968811e10b8e84bf482f7ff9e5b148c5e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 00:02:26 np0005603435 systemd[1]: Started libpod-conmon-b8b7454887f3f9d88ec297269bbbdd968811e10b8e84bf482f7ff9e5b148c5e2.scope.
Jan 31 00:02:26 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:02:26 np0005603435 podman[274235]: 2026-01-31 05:02:26.089391468 +0000 UTC m=+0.033910764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:02:26 np0005603435 nova_compute[239938]: 2026-01-31 05:02:26.188 239942 INFO nova.scheduler.client.report [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Deleted allocations for instance 28de1d7d-8395-4b6a-b203-54bc32800fee
Jan 31 00:02:26 np0005603435 podman[274235]: 2026-01-31 05:02:26.197514992 +0000 UTC m=+0.142034338 container init b8b7454887f3f9d88ec297269bbbdd968811e10b8e84bf482f7ff9e5b148c5e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hertz, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 00:02:26 np0005603435 podman[274235]: 2026-01-31 05:02:26.206025786 +0000 UTC m=+0.150545072 container start b8b7454887f3f9d88ec297269bbbdd968811e10b8e84bf482f7ff9e5b148c5e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hertz, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 00:02:26 np0005603435 podman[274235]: 2026-01-31 05:02:26.210162556 +0000 UTC m=+0.154681892 container attach b8b7454887f3f9d88ec297269bbbdd968811e10b8e84bf482f7ff9e5b148c5e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 00:02:26 np0005603435 serene_hertz[274252]: 167 167
Jan 31 00:02:26 np0005603435 systemd[1]: libpod-b8b7454887f3f9d88ec297269bbbdd968811e10b8e84bf482f7ff9e5b148c5e2.scope: Deactivated successfully.
Jan 31 00:02:26 np0005603435 podman[274235]: 2026-01-31 05:02:26.212674586 +0000 UTC m=+0.157193942 container died b8b7454887f3f9d88ec297269bbbdd968811e10b8e84bf482f7ff9e5b148c5e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 00:02:26 np0005603435 systemd[1]: var-lib-containers-storage-overlay-19d1f542e20a5dc4ef98e637a2bd3b5d2e0a34e0b274fac6f3c75114d2061be8-merged.mount: Deactivated successfully.
Jan 31 00:02:26 np0005603435 podman[274235]: 2026-01-31 05:02:26.259768756 +0000 UTC m=+0.204288052 container remove b8b7454887f3f9d88ec297269bbbdd968811e10b8e84bf482f7ff9e5b148c5e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hertz, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 00:02:26 np0005603435 systemd[1]: libpod-conmon-b8b7454887f3f9d88ec297269bbbdd968811e10b8e84bf482f7ff9e5b148c5e2.scope: Deactivated successfully.
Jan 31 00:02:26 np0005603435 nova_compute[239938]: 2026-01-31 05:02:26.310 239942 DEBUG oslo_concurrency.lockutils [None req-1eee6242-49ef-4562-bc63-beb1f7fbcbb2 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "28de1d7d-8395-4b6a-b203-54bc32800fee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 00:02:26 np0005603435 podman[274275]: 2026-01-31 05:02:26.410774648 +0000 UTC m=+0.033425272 container create 5dc51eb4ac305709dcd18c458741e6690ff29f03201592893afb04e7d481e316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_austin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 00:02:26 np0005603435 systemd[1]: Started libpod-conmon-5dc51eb4ac305709dcd18c458741e6690ff29f03201592893afb04e7d481e316.scope.
Jan 31 00:02:26 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:02:26 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/691dd714f195cc7bbd0a8c278a623aa19b170f1767d80b7bb201cdeef7fa714d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 00:02:26 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/691dd714f195cc7bbd0a8c278a623aa19b170f1767d80b7bb201cdeef7fa714d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 00:02:26 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/691dd714f195cc7bbd0a8c278a623aa19b170f1767d80b7bb201cdeef7fa714d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 00:02:26 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/691dd714f195cc7bbd0a8c278a623aa19b170f1767d80b7bb201cdeef7fa714d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 00:02:26 np0005603435 podman[274275]: 2026-01-31 05:02:26.493116994 +0000 UTC m=+0.115767668 container init 5dc51eb4ac305709dcd18c458741e6690ff29f03201592893afb04e7d481e316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 00:02:26 np0005603435 podman[274275]: 2026-01-31 05:02:26.397146932 +0000 UTC m=+0.019797576 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:02:26 np0005603435 podman[274275]: 2026-01-31 05:02:26.500751287 +0000 UTC m=+0.123401961 container start 5dc51eb4ac305709dcd18c458741e6690ff29f03201592893afb04e7d481e316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 00:02:26 np0005603435 podman[274275]: 2026-01-31 05:02:26.505388998 +0000 UTC m=+0.128039662 container attach 5dc51eb4ac305709dcd18c458741e6690ff29f03201592893afb04e7d481e316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_austin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 00:02:26 np0005603435 nova_compute[239938]: 2026-01-31 05:02:26.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:02:27 np0005603435 lvm[274370]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 00:02:27 np0005603435 lvm[274373]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 00:02:27 np0005603435 lvm[274372]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 00:02:27 np0005603435 lvm[274373]: VG ceph_vg2 finished
Jan 31 00:02:27 np0005603435 lvm[274372]: VG ceph_vg1 finished
Jan 31 00:02:27 np0005603435 lvm[274370]: VG ceph_vg0 finished
Jan 31 00:02:27 np0005603435 kind_austin[274292]: {}
Jan 31 00:02:27 np0005603435 systemd[1]: libpod-5dc51eb4ac305709dcd18c458741e6690ff29f03201592893afb04e7d481e316.scope: Deactivated successfully.
Jan 31 00:02:27 np0005603435 podman[274275]: 2026-01-31 05:02:27.238130417 +0000 UTC m=+0.860781081 container died 5dc51eb4ac305709dcd18c458741e6690ff29f03201592893afb04e7d481e316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_austin, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 00:02:27 np0005603435 systemd[1]: libpod-5dc51eb4ac305709dcd18c458741e6690ff29f03201592893afb04e7d481e316.scope: Consumed 1.066s CPU time.
Jan 31 00:02:27 np0005603435 systemd[1]: var-lib-containers-storage-overlay-691dd714f195cc7bbd0a8c278a623aa19b170f1767d80b7bb201cdeef7fa714d-merged.mount: Deactivated successfully.
Jan 31 00:02:27 np0005603435 podman[274275]: 2026-01-31 05:02:27.286346584 +0000 UTC m=+0.908997238 container remove 5dc51eb4ac305709dcd18c458741e6690ff29f03201592893afb04e7d481e316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 00:02:27 np0005603435 systemd[1]: libpod-conmon-5dc51eb4ac305709dcd18c458741e6690ff29f03201592893afb04e7d481e316.scope: Deactivated successfully.
Jan 31 00:02:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 00:02:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:02:27 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 00:02:27 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:02:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 30 KiB/s wr, 97 op/s
Jan 31 00:02:27 np0005603435 nova_compute[239938]: 2026-01-31 05:02:27.882 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:02:27 np0005603435 nova_compute[239938]: 2026-01-31 05:02:27.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:02:28 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:02:28 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:02:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e497 do_prune osdmap full prune enabled
Jan 31 00:02:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e498 e498: 3 total, 3 up, 3 in
Jan 31 00:02:28 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e498: 3 total, 3 up, 3 in
Jan 31 00:02:28 np0005603435 nova_compute[239938]: 2026-01-31 05:02:28.463 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e498 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:02:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e498 do_prune osdmap full prune enabled
Jan 31 00:02:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e499 e499: 3 total, 3 up, 3 in
Jan 31 00:02:28 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e499: 3 total, 3 up, 3 in
Jan 31 00:02:28 np0005603435 nova_compute[239938]: 2026-01-31 05:02:28.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:02:28 np0005603435 nova_compute[239938]: 2026-01-31 05:02:28.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 00:02:28 np0005603435 nova_compute[239938]: 2026-01-31 05:02:28.888 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 00:02:28 np0005603435 nova_compute[239938]: 2026-01-31 05:02:28.905 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 00:02:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:02:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2026844914' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:02:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:02:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2867999218' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:02:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 23 KiB/s wr, 94 op/s
Jan 31 00:02:29 np0005603435 nova_compute[239938]: 2026-01-31 05:02:29.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:02:29 np0005603435 nova_compute[239938]: 2026-01-31 05:02:29.910 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:02:29 np0005603435 nova_compute[239938]: 2026-01-31 05:02:29.911 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:02:29 np0005603435 nova_compute[239938]: 2026-01-31 05:02:29.911 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:02:29 np0005603435 nova_compute[239938]: 2026-01-31 05:02:29.911 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 00:02:29 np0005603435 nova_compute[239938]: 2026-01-31 05:02:29.912 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:02:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e499 do_prune osdmap full prune enabled
Jan 31 00:02:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e500 e500: 3 total, 3 up, 3 in
Jan 31 00:02:30 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e500: 3 total, 3 up, 3 in
Jan 31 00:02:30 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:02:30 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2674091787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:02:30 np0005603435 nova_compute[239938]: 2026-01-31 05:02:30.478 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:02:30 np0005603435 nova_compute[239938]: 2026-01-31 05:02:30.667 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 00:02:30 np0005603435 nova_compute[239938]: 2026-01-31 05:02:30.670 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4257MB free_disk=59.9877619529143GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 00:02:30 np0005603435 nova_compute[239938]: 2026-01-31 05:02:30.672 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:02:30 np0005603435 nova_compute[239938]: 2026-01-31 05:02:30.672 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:02:30 np0005603435 nova_compute[239938]: 2026-01-31 05:02:30.913 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:30 np0005603435 nova_compute[239938]: 2026-01-31 05:02:30.995 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 00:02:30 np0005603435 nova_compute[239938]: 2026-01-31 05:02:30.996 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 00:02:31 np0005603435 nova_compute[239938]: 2026-01-31 05:02:31.099 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:02:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e500 do_prune osdmap full prune enabled
Jan 31 00:02:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e501 e501: 3 total, 3 up, 3 in
Jan 31 00:02:31 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e501: 3 total, 3 up, 3 in
Jan 31 00:02:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.0 KiB/s wr, 48 op/s
Jan 31 00:02:31 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:02:31 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/634960634' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:02:31 np0005603435 nova_compute[239938]: 2026-01-31 05:02:31.653 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:02:31 np0005603435 nova_compute[239938]: 2026-01-31 05:02:31.660 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 00:02:31 np0005603435 nova_compute[239938]: 2026-01-31 05:02:31.729 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 00:02:31 np0005603435 nova_compute[239938]: 2026-01-31 05:02:31.765 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 00:02:31 np0005603435 nova_compute[239938]: 2026-01-31 05:02:31.766 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.087 239942 DEBUG oslo_concurrency.lockutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.088 239942 DEBUG oslo_concurrency.lockutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:02:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:02:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2349189257' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:02:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:02:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2349189257' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.111 239942 DEBUG nova.compute.manager [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.199 239942 DEBUG oslo_concurrency.lockutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.200 239942 DEBUG oslo_concurrency.lockutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.209 239942 DEBUG nova.virt.hardware [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.209 239942 INFO nova.compute.claims [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.318 239942 DEBUG oslo_concurrency.processutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.523 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 4.6 KiB/s wr, 81 op/s
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.767 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.768 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.768 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 00:02:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:02:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2584932132' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:02:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e501 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.855 239942 DEBUG oslo_concurrency.processutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.860 239942 DEBUG nova.compute.provider_tree [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.885 239942 DEBUG nova.scheduler.client.report [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.908 239942 DEBUG oslo_concurrency.lockutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.909 239942 DEBUG nova.compute.manager [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.969 239942 DEBUG nova.compute.manager [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.969 239942 DEBUG nova.network.neutron [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 00:02:33 np0005603435 nova_compute[239938]: 2026-01-31 05:02:33.993 239942 INFO nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.011 239942 DEBUG nova.compute.manager [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.073 239942 INFO nova.virt.block_device [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Booting with volume acd7acf2-5d31-4e31-ad3b-c02a7d50a7ab at /dev/vda#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.205 239942 DEBUG os_brick.utils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.207 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.220 252212 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.220 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[9df1fab6-4bed-4a99-a504-6fb4ef3102a5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.222 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.231 252212 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.231 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee81b16-38e2-470e-91aa-a0508281b4c6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:dac440bcec9b', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.233 252212 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.241 252212 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.242 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8422ca-8da3-48a2-8929-f2f5c5bdc315]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.244 252212 DEBUG oslo.privsep.daemon [-] privsep: reply[1f8c023a-e37a-4429-af2e-e9ed0fb68ece]: (4, 'e56e1981-badb-4c56-a12d-c458e4e6bca8') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.244 239942 DEBUG oslo_concurrency.processutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.266 239942 DEBUG oslo_concurrency.processutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.269 239942 DEBUG os_brick.initiator.connectors.lightos [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.270 239942 DEBUG os_brick.initiator.connectors.lightos [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.270 239942 DEBUG os_brick.initiator.connectors.lightos [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.271 239942 DEBUG os_brick.utils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] <== get_connector_properties: return (64ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:dac440bcec9b', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': 'e56e1981-badb-4c56-a12d-c458e4e6bca8', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.271 239942 DEBUG nova.virt.block_device [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Updating existing volume attachment record: 324641eb-f90d-41e2-9ab4-aac1bbad2c4b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 00:02:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:02:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/324822453' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.535 239942 DEBUG nova.policy [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6784d92c92b24526a302a1a74a813c76', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48935f8745744c4ba5400c13f80e0379', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 00:02:34 np0005603435 nova_compute[239938]: 2026-01-31 05:02:34.883 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:02:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:02:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/808955336' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:02:35 np0005603435 nova_compute[239938]: 2026-01-31 05:02:35.292 239942 DEBUG nova.network.neutron [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Successfully created port: ba882408-c3f6-4623-97a6-4d87a99fe278 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 00:02:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e501 do_prune osdmap full prune enabled
Jan 31 00:02:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e502 e502: 3 total, 3 up, 3 in
Jan 31 00:02:35 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e502: 3 total, 3 up, 3 in
Jan 31 00:02:35 np0005603435 nova_compute[239938]: 2026-01-31 05:02:35.480 239942 DEBUG nova.compute.manager [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 00:02:35 np0005603435 nova_compute[239938]: 2026-01-31 05:02:35.483 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 00:02:35 np0005603435 nova_compute[239938]: 2026-01-31 05:02:35.483 239942 INFO nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Creating image(s)#033[00m
Jan 31 00:02:35 np0005603435 nova_compute[239938]: 2026-01-31 05:02:35.484 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 00:02:35 np0005603435 nova_compute[239938]: 2026-01-31 05:02:35.485 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Ensure instance console log exists: /var/lib/nova/instances/bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 00:02:35 np0005603435 nova_compute[239938]: 2026-01-31 05:02:35.485 239942 DEBUG oslo_concurrency.lockutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:02:35 np0005603435 nova_compute[239938]: 2026-01-31 05:02:35.486 239942 DEBUG oslo_concurrency.lockutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:02:35 np0005603435 nova_compute[239938]: 2026-01-31 05:02:35.486 239942 DEBUG oslo_concurrency.lockutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:02:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 5.0 KiB/s wr, 79 op/s
Jan 31 00:02:35 np0005603435 nova_compute[239938]: 2026-01-31 05:02:35.916 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e502 do_prune osdmap full prune enabled
Jan 31 00:02:36 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e503 e503: 3 total, 3 up, 3 in
Jan 31 00:02:36 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e503: 3 total, 3 up, 3 in
Jan 31 00:02:36 np0005603435 nova_compute[239938]: 2026-01-31 05:02:36.520 239942 DEBUG nova.network.neutron [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Successfully updated port: ba882408-c3f6-4623-97a6-4d87a99fe278 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 00:02:36 np0005603435 nova_compute[239938]: 2026-01-31 05:02:36.540 239942 DEBUG oslo_concurrency.lockutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "refresh_cache-bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 00:02:36 np0005603435 nova_compute[239938]: 2026-01-31 05:02:36.540 239942 DEBUG oslo_concurrency.lockutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquired lock "refresh_cache-bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 00:02:36 np0005603435 nova_compute[239938]: 2026-01-31 05:02:36.540 239942 DEBUG nova.network.neutron [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 00:02:36 np0005603435 nova_compute[239938]: 2026-01-31 05:02:36.600 239942 DEBUG nova.compute.manager [req-9c4fb996-9c26-4b20-b239-204c8c456bf1 req-7789ea8b-37f0-434e-9281-e1f9c941f149 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Received event network-changed-ba882408-c3f6-4623-97a6-4d87a99fe278 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:02:36 np0005603435 nova_compute[239938]: 2026-01-31 05:02:36.600 239942 DEBUG nova.compute.manager [req-9c4fb996-9c26-4b20-b239-204c8c456bf1 req-7789ea8b-37f0-434e-9281-e1f9c941f149 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Refreshing instance network info cache due to event network-changed-ba882408-c3f6-4623-97a6-4d87a99fe278. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 00:02:36 np0005603435 nova_compute[239938]: 2026-01-31 05:02:36.601 239942 DEBUG oslo_concurrency.lockutils [req-9c4fb996-9c26-4b20-b239-204c8c456bf1 req-7789ea8b-37f0-434e-9281-e1f9c941f149 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 00:02:36 np0005603435 nova_compute[239938]: 2026-01-31 05:02:36.668 239942 DEBUG nova.network.neutron [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 00:02:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:02:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:02:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:02:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:02:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:02:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.431 239942 DEBUG nova.network.neutron [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Updating instance_info_cache with network_info: [{"id": "ba882408-c3f6-4623-97a6-4d87a99fe278", "address": "fa:16:3e:54:0c:6d", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba882408-c3", "ovs_interfaceid": "ba882408-c3f6-4623-97a6-4d87a99fe278", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.451 239942 DEBUG oslo_concurrency.lockutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Releasing lock "refresh_cache-bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.452 239942 DEBUG nova.compute.manager [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Instance network_info: |[{"id": "ba882408-c3f6-4623-97a6-4d87a99fe278", "address": "fa:16:3e:54:0c:6d", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba882408-c3", "ovs_interfaceid": "ba882408-c3f6-4623-97a6-4d87a99fe278", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.452 239942 DEBUG oslo_concurrency.lockutils [req-9c4fb996-9c26-4b20-b239-204c8c456bf1 req-7789ea8b-37f0-434e-9281-e1f9c941f149 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.452 239942 DEBUG nova.network.neutron [req-9c4fb996-9c26-4b20-b239-204c8c456bf1 req-7789ea8b-37f0-434e-9281-e1f9c941f149 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Refreshing network info cache for port ba882408-c3f6-4623-97a6-4d87a99fe278 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.456 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Start _get_guest_xml network_info=[{"id": "ba882408-c3f6-4623-97a6-4d87a99fe278", "address": "fa:16:3e:54:0c:6d", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba882408-c3", "ovs_interfaceid": "ba882408-c3f6-4623-97a6-4d87a99fe278", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'attachment_id': '324641eb-f90d-41e2-9ab4-aac1bbad2c4b', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-acd7acf2-5d31-4e31-ad3b-c02a7d50a7ab', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'acd7acf2-5d31-4e31-ad3b-c02a7d50a7ab', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e', 'attached_at': '', 'detached_at': '', 'volume_id': 'acd7acf2-5d31-4e31-ad3b-c02a7d50a7ab', 'serial': 'acd7acf2-5d31-4e31-ad3b-c02a7d50a7ab'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.461 239942 WARNING nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.466 239942 DEBUG nova.virt.libvirt.host [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.467 239942 DEBUG nova.virt.libvirt.host [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.477 239942 DEBUG nova.virt.libvirt.host [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.478 239942 DEBUG nova.virt.libvirt.host [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.479 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.479 239942 DEBUG nova.virt.hardware [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T04:42:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8edf2138-3e99-457c-aed0-6651b812b359',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.479 239942 DEBUG nova.virt.hardware [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.479 239942 DEBUG nova.virt.hardware [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.480 239942 DEBUG nova.virt.hardware [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.480 239942 DEBUG nova.virt.hardware [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.480 239942 DEBUG nova.virt.hardware [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.480 239942 DEBUG nova.virt.hardware [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.480 239942 DEBUG nova.virt.hardware [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.481 239942 DEBUG nova.virt.hardware [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.481 239942 DEBUG nova.virt.hardware [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.481 239942 DEBUG nova.virt.hardware [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.511 239942 DEBUG nova.storage.rbd_utils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] rbd image bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 00:02:37 np0005603435 nova_compute[239938]: 2026-01-31 05:02:37.516 239942 DEBUG oslo_concurrency.processutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:02:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 5.1 KiB/s wr, 73 op/s
Jan 31 00:02:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:02:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2869920696' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.069 239942 DEBUG oslo_concurrency.processutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.195 239942 DEBUG os_brick.encryptors [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Using volume encryption metadata '{'encryption_key_id': '0927b934-aadc-4790-9165-9db9cfb0b0d8', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-acd7acf2-5d31-4e31-ad3b-c02a7d50a7ab', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'acd7acf2-5d31-4e31-ad3b-c02a7d50a7ab', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e', 'attached_at': '', 'detached_at': '', 'volume_id': 'acd7acf2-5d31-4e31-ad3b-c02a7d50a7ab', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.197 239942 DEBUG barbicanclient.client [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.213 239942 DEBUG barbicanclient.v1.secrets [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/0927b934-aadc-4790-9165-9db9cfb0b0d8 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.213 239942 INFO barbicanclient.base [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/0927b934-aadc-4790-9165-9db9cfb0b0d8#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.241 239942 DEBUG barbicanclient.client [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.242 239942 INFO barbicanclient.base [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/0927b934-aadc-4790-9165-9db9cfb0b0d8#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.262 239942 DEBUG barbicanclient.client [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.263 239942 INFO barbicanclient.base [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/0927b934-aadc-4790-9165-9db9cfb0b0d8#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.281 239942 DEBUG barbicanclient.client [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.282 239942 INFO barbicanclient.base [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/0927b934-aadc-4790-9165-9db9cfb0b0d8#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.305 239942 DEBUG barbicanclient.client [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.306 239942 INFO barbicanclient.base [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/0927b934-aadc-4790-9165-9db9cfb0b0d8#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.327 239942 DEBUG barbicanclient.client [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.327 239942 INFO barbicanclient.base [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/0927b934-aadc-4790-9165-9db9cfb0b0d8#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.347 239942 DEBUG barbicanclient.client [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.348 239942 INFO barbicanclient.base [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/0927b934-aadc-4790-9165-9db9cfb0b0d8#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.368 239942 DEBUG barbicanclient.client [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.369 239942 INFO barbicanclient.base [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/0927b934-aadc-4790-9165-9db9cfb0b0d8#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.390 239942 DEBUG barbicanclient.client [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.390 239942 INFO barbicanclient.base [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/0927b934-aadc-4790-9165-9db9cfb0b0d8#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.401 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835743.4004326, 28de1d7d-8395-4b6a-b203-54bc32800fee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.402 239942 INFO nova.compute.manager [-] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] VM Stopped (Lifecycle Event)#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.411 239942 DEBUG barbicanclient.client [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.411 239942 INFO barbicanclient.base [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/0927b934-aadc-4790-9165-9db9cfb0b0d8#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.424 239942 DEBUG nova.compute.manager [None req-c9f618d7-edb2-40d2-90db-9692ef768865 - - - - - -] [instance: 28de1d7d-8395-4b6a-b203-54bc32800fee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.432 239942 DEBUG barbicanclient.client [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.433 239942 INFO barbicanclient.base [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/0927b934-aadc-4790-9165-9db9cfb0b0d8#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.457 239942 DEBUG barbicanclient.client [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.458 239942 INFO barbicanclient.base [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/0927b934-aadc-4790-9165-9db9cfb0b0d8#033[00m
Jan 31 00:02:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e503 do_prune osdmap full prune enabled
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.481 239942 DEBUG barbicanclient.client [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.482 239942 INFO barbicanclient.base [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/0927b934-aadc-4790-9165-9db9cfb0b0d8#033[00m
Jan 31 00:02:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e504 e504: 3 total, 3 up, 3 in
Jan 31 00:02:38 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e504: 3 total, 3 up, 3 in
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.505 239942 DEBUG barbicanclient.client [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.506 239942 INFO barbicanclient.base [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/0927b934-aadc-4790-9165-9db9cfb0b0d8#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.526 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.559 239942 DEBUG barbicanclient.client [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.560 239942 INFO barbicanclient.base [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Calculated Secrets uuid ref: secrets/0927b934-aadc-4790-9165-9db9cfb0b0d8#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.591 239942 DEBUG barbicanclient.client [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.592 239942 DEBUG nova.virt.libvirt.host [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  <usage type="volume">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <volume>acd7acf2-5d31-4e31-ad3b-c02a7d50a7ab</volume>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  </usage>
Jan 31 00:02:38 np0005603435 nova_compute[239938]: </secret>
Jan 31 00:02:38 np0005603435 nova_compute[239938]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.625 239942 DEBUG nova.virt.libvirt.vif [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T05:02:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-2059295489',display_name='tempest-TestEncryptedCinderVolumes-server-2059295489',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-2059295489',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAmr0MUFNJjz18mvNHr0kofSqXL+MOUCKmtJGcrQVuZqzDEVyxUUFebchvjqsqS9tyThgYSCkXKWLzTW0ED0WOyTQNQBDzi5dd8NYQAYU+nK8F6As1qr5NixmuIDexDl8Q==',key_name='tempest-TestEncryptedCinderVolumes-1017268198',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48935f8745744c4ba5400c13f80e0379',ramdisk_id='',reservation_id='r-cjyzieeq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1466370108',owner_user_name='tempest-TestEncryptedCinderVolumes-1466370108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T05:02:34Z,user_data=None,user_id='6784d92c92b24526a302a1a74a813c76',uuid=bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba882408-c3f6-4623-97a6-4d87a99fe278", "address": "fa:16:3e:54:0c:6d", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba882408-c3", "ovs_interfaceid": "ba882408-c3f6-4623-97a6-4d87a99fe278", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.626 239942 DEBUG nova.network.os_vif_util [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converting VIF {"id": "ba882408-c3f6-4623-97a6-4d87a99fe278", "address": "fa:16:3e:54:0c:6d", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba882408-c3", "ovs_interfaceid": "ba882408-c3f6-4623-97a6-4d87a99fe278", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.627 239942 DEBUG nova.network.os_vif_util [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:0c:6d,bridge_name='br-int',has_traffic_filtering=True,id=ba882408-c3f6-4623-97a6-4d87a99fe278,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba882408-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.629 239942 DEBUG nova.objects.instance [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lazy-loading 'pci_devices' on Instance uuid bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.645 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] End _get_guest_xml xml=<domain type="kvm">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  <uuid>bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e</uuid>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  <name>instance-0000001d</name>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  <memory>131072</memory>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  <vcpu>1</vcpu>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  <metadata>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-2059295489</nova:name>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <nova:creationTime>2026-01-31 05:02:37</nova:creationTime>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <nova:flavor name="m1.nano">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:        <nova:memory>128</nova:memory>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:        <nova:disk>1</nova:disk>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:        <nova:swap>0</nova:swap>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:        <nova:vcpus>1</nova:vcpus>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      </nova:flavor>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <nova:owner>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:        <nova:user uuid="6784d92c92b24526a302a1a74a813c76">tempest-TestEncryptedCinderVolumes-1466370108-project-member</nova:user>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:        <nova:project uuid="48935f8745744c4ba5400c13f80e0379">tempest-TestEncryptedCinderVolumes-1466370108</nova:project>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      </nova:owner>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <nova:ports>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:        <nova:port uuid="ba882408-c3f6-4623-97a6-4d87a99fe278">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:        </nova:port>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      </nova:ports>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    </nova:instance>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  </metadata>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  <sysinfo type="smbios">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <system>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <entry name="manufacturer">RDO</entry>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <entry name="product">OpenStack Compute</entry>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <entry name="serial">bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e</entry>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <entry name="uuid">bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e</entry>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <entry name="family">Virtual Machine</entry>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    </system>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  </sysinfo>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  <os>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <boot dev="hd"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <smbios mode="sysinfo"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  </os>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  <features>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <acpi/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <apic/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <vmcoreinfo/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  </features>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  <clock offset="utc">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <timer name="hpet" present="no"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  </clock>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  <cpu mode="host-model" match="exact">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  </cpu>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  <devices>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <disk type="network" device="cdrom">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <driver type="raw" cache="none"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="vms/bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e_disk.config">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      </source>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      </auth>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <target dev="sda" bus="sata"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    </disk>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <disk type="network" device="disk">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <source protocol="rbd" name="volumes/volume-acd7acf2-5d31-4e31-ad3b-c02a7d50a7ab">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:        <host name="192.168.122.100" port="6789"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      </source>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <auth username="openstack">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:        <secret type="ceph" uuid="95d2f419-0dd0-56f2-a094-353f8c7597ed"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      </auth>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <target dev="vda" bus="virtio"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <serial>acd7acf2-5d31-4e31-ad3b-c02a7d50a7ab</serial>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <encryption format="luks">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:        <secret type="passphrase" uuid="1fba71a7-44b6-4e73-830a-171427e5bbcc"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      </encryption>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    </disk>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <interface type="ethernet">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <mac address="fa:16:3e:54:0c:6d"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <mtu size="1442"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <target dev="tapba882408-c3"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    </interface>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <serial type="pty">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <log file="/var/lib/nova/instances/bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e/console.log" append="off"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    </serial>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <video>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <model type="virtio"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    </video>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <input type="tablet" bus="usb"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <rng model="virtio">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <backend model="random">/dev/urandom</backend>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    </rng>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <controller type="usb" index="0"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    <memballoon model="virtio">
Jan 31 00:02:38 np0005603435 nova_compute[239938]:      <stats period="10"/>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:    </memballoon>
Jan 31 00:02:38 np0005603435 nova_compute[239938]:  </devices>
Jan 31 00:02:38 np0005603435 nova_compute[239938]: </domain>
Jan 31 00:02:38 np0005603435 nova_compute[239938]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.647 239942 DEBUG nova.compute.manager [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Preparing to wait for external event network-vif-plugged-ba882408-c3f6-4623-97a6-4d87a99fe278 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.648 239942 DEBUG oslo_concurrency.lockutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.648 239942 DEBUG oslo_concurrency.lockutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.649 239942 DEBUG oslo_concurrency.lockutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.650 239942 DEBUG nova.virt.libvirt.vif [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T05:02:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-2059295489',display_name='tempest-TestEncryptedCinderVolumes-server-2059295489',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-2059295489',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAmr0MUFNJjz18mvNHr0kofSqXL+MOUCKmtJGcrQVuZqzDEVyxUUFebchvjqsqS9tyThgYSCkXKWLzTW0ED0WOyTQNQBDzi5dd8NYQAYU+nK8F6As1qr5NixmuIDexDl8Q==',key_name='tempest-TestEncryptedCinderVolumes-1017268198',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48935f8745744c4ba5400c13f80e0379',ramdisk_id='',reservation_id='r-cjyzieeq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1466370108',owner_user_name='tempest-TestEncryptedCinderVolumes-1466370108-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T05:02:34Z,user_data=None,user_id='6784d92c92b24526a302a1a74a813c76',uuid=bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba882408-c3f6-4623-97a6-4d87a99fe278", "address": "fa:16:3e:54:0c:6d", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba882408-c3", "ovs_interfaceid": "ba882408-c3f6-4623-97a6-4d87a99fe278", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.651 239942 DEBUG nova.network.os_vif_util [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converting VIF {"id": "ba882408-c3f6-4623-97a6-4d87a99fe278", "address": "fa:16:3e:54:0c:6d", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba882408-c3", "ovs_interfaceid": "ba882408-c3f6-4623-97a6-4d87a99fe278", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.652 239942 DEBUG nova.network.os_vif_util [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:0c:6d,bridge_name='br-int',has_traffic_filtering=True,id=ba882408-c3f6-4623-97a6-4d87a99fe278,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba882408-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.653 239942 DEBUG os_vif [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:0c:6d,bridge_name='br-int',has_traffic_filtering=True,id=ba882408-c3f6-4623-97a6-4d87a99fe278,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba882408-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.654 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.654 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.655 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.660 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.661 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba882408-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.661 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapba882408-c3, col_values=(('external_ids', {'iface-id': 'ba882408-c3f6-4623-97a6-4d87a99fe278', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:0c:6d', 'vm-uuid': 'bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:02:38 np0005603435 NetworkManager[49097]: <info>  [1769835758.6644] manager: (tapba882408-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.663 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.665 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.672 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.674 239942 INFO os_vif [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:0c:6d,bridge_name='br-int',has_traffic_filtering=True,id=ba882408-c3f6-4623-97a6-4d87a99fe278,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba882408-c3')#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.718 239942 DEBUG nova.network.neutron [req-9c4fb996-9c26-4b20-b239-204c8c456bf1 req-7789ea8b-37f0-434e-9281-e1f9c941f149 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Updated VIF entry in instance network info cache for port ba882408-c3f6-4623-97a6-4d87a99fe278. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.719 239942 DEBUG nova.network.neutron [req-9c4fb996-9c26-4b20-b239-204c8c456bf1 req-7789ea8b-37f0-434e-9281-e1f9c941f149 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Updating instance_info_cache with network_info: [{"id": "ba882408-c3f6-4623-97a6-4d87a99fe278", "address": "fa:16:3e:54:0c:6d", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba882408-c3", "ovs_interfaceid": "ba882408-c3f6-4623-97a6-4d87a99fe278", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.726 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.726 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.726 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] No VIF found with MAC fa:16:3e:54:0c:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.727 239942 INFO nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Using config drive#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.752 239942 DEBUG nova.storage.rbd_utils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] rbd image bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 00:02:38 np0005603435 nova_compute[239938]: 2026-01-31 05:02:38.758 239942 DEBUG oslo_concurrency.lockutils [req-9c4fb996-9c26-4b20-b239-204c8c456bf1 req-7789ea8b-37f0-434e-9281-e1f9c941f149 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 00:02:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e504 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:02:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:02:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2475443506' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:02:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:02:38 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2475443506' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.068 239942 INFO nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Creating config drive at /var/lib/nova/instances/bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e/disk.config#033[00m
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.074 239942 DEBUG oslo_concurrency.processutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp90czymkt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.206 239942 DEBUG oslo_concurrency.processutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp90czymkt" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.241 239942 DEBUG nova.storage.rbd_utils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] rbd image bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.246 239942 DEBUG oslo_concurrency.processutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e/disk.config bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.372 239942 DEBUG oslo_concurrency.processutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e/disk.config bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.373 239942 INFO nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Deleting local config drive /var/lib/nova/instances/bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e/disk.config because it was imported into RBD.#033[00m
Jan 31 00:02:39 np0005603435 kernel: tapba882408-c3: entered promiscuous mode
Jan 31 00:02:39 np0005603435 NetworkManager[49097]: <info>  [1769835759.4283] manager: (tapba882408-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Jan 31 00:02:39 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:39Z|00276|binding|INFO|Claiming lport ba882408-c3f6-4623-97a6-4d87a99fe278 for this chassis.
Jan 31 00:02:39 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:39Z|00277|binding|INFO|ba882408-c3f6-4623-97a6-4d87a99fe278: Claiming fa:16:3e:54:0c:6d 10.100.0.12
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.429 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.438 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:0c:6d 10.100.0.12'], port_security=['fa:16:3e:54:0c:6d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f25b83f-b794-417e-88e7-d89c680f473d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48935f8745744c4ba5400c13f80e0379', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7b59f016-9fba-4b72-aa35-0db4493e20dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94c57d33-0e3a-4b86-87cd-ae1ca9bb064d, chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=ba882408-c3f6-4623-97a6-4d87a99fe278) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.439 156017 INFO neutron.agent.ovn.metadata.agent [-] Port ba882408-c3f6-4623-97a6-4d87a99fe278 in datapath 2f25b83f-b794-417e-88e7-d89c680f473d bound to our chassis#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.440 156017 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f25b83f-b794-417e-88e7-d89c680f473d#033[00m
Jan 31 00:02:39 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:39Z|00278|binding|INFO|Setting lport ba882408-c3f6-4623-97a6-4d87a99fe278 ovn-installed in OVS
Jan 31 00:02:39 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:39Z|00279|binding|INFO|Setting lport ba882408-c3f6-4623-97a6-4d87a99fe278 up in Southbound
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.444 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.448 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.452 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[48201bc7-d6a3-42d7-870d-d4f53652f956]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.454 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2f25b83f-b1 in ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 00:02:39 np0005603435 systemd-machined[208030]: New machine qemu-29-instance-0000001d.
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.457 247621 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2f25b83f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.457 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ff8b8a-380a-4551-8eb7-e0aca483f528]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.458 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[f77ca1e2-8f5c-4114-b94a-f98bc2cf4bf8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:39 np0005603435 systemd[1]: Started Virtual Machine qemu-29-instance-0000001d.
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.471 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[f7951224-9108-4d17-bb83-7801e243f2a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.496 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[bf619e86-f6d8-4537-bc71-41a66d27aea9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:39 np0005603435 systemd-udevd[274601]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 00:02:39 np0005603435 NetworkManager[49097]: <info>  [1769835759.5101] device (tapba882408-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 00:02:39 np0005603435 NetworkManager[49097]: <info>  [1769835759.5114] device (tapba882408-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.523 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[8c542115-8315-45fd-84ab-27df41b281e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.527 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6a1fd89c-6f3c-4be5-b038-af6567ea699b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:39 np0005603435 NetworkManager[49097]: <info>  [1769835759.5299] manager: (tap2f25b83f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/143)
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.553 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[8f0b68c8-8bf1-4e84-a72b-f76c26bb8528]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.558 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[9934eea6-bfd3-48d2-ad70-f6a9025214b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:39 np0005603435 NetworkManager[49097]: <info>  [1769835759.5809] device (tap2f25b83f-b0): carrier: link connected
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.586 247914 DEBUG oslo.privsep.daemon [-] privsep: reply[828ee25d-08df-4337-a145-20fe32273851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.603 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6b19cb89-3762-4531-ad7e-ac4f9940cc3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f25b83f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:19:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481398, 'reachable_time': 26980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274631, 'error': None, 'target': 'ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 3.3 KiB/s wr, 37 op/s
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.619 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[2e84209b-4451-459b-bcd2-9b213c18bc0d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:1905'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 481398, 'tstamp': 481398}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274632, 'error': None, 'target': 'ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:39 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.626 239942 DEBUG nova.compute.manager [req-d4f0a55f-fbbe-4195-a896-1a705766428c req-59b68806-0021-4743-b469-eaa115790061 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Received event network-vif-plugged-ba882408-c3f6-4623-97a6-4d87a99fe278 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:02:39 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.627 239942 DEBUG oslo_concurrency.lockutils [req-d4f0a55f-fbbe-4195-a896-1a705766428c req-59b68806-0021-4743-b469-eaa115790061 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.627 239942 DEBUG oslo_concurrency.lockutils [req-d4f0a55f-fbbe-4195-a896-1a705766428c req-59b68806-0021-4743-b469-eaa115790061 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.628 239942 DEBUG oslo_concurrency.lockutils [req-d4f0a55f-fbbe-4195-a896-1a705766428c req-59b68806-0021-4743-b469-eaa115790061 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.628 239942 DEBUG nova.compute.manager [req-d4f0a55f-fbbe-4195-a896-1a705766428c req-59b68806-0021-4743-b469-eaa115790061 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Processing event network-vif-plugged-ba882408-c3f6-4623-97a6-4d87a99fe278 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.638 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c45ef342-7b89-4f5f-8767-3f146672721e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f25b83f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:19:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481398, 'reachable_time': 26980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274633, 'error': None, 'target': 'ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.670 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[ad028f16-13e0-4b23-a7df-58e1ede3d3f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.726 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8cad94-23dc-4023-834e-74f0a9cf3ddf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.727 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f25b83f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.727 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.728 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f25b83f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:02:39 np0005603435 kernel: tap2f25b83f-b0: entered promiscuous mode
Jan 31 00:02:39 np0005603435 NetworkManager[49097]: <info>  [1769835759.7315] manager: (tap2f25b83f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.730 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.736 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f25b83f-b0, col_values=(('external_ids', {'iface-id': '9bf21700-cf87-40d9-96a1-5af6970f25f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:02:39 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:39Z|00280|binding|INFO|Releasing lport 9bf21700-cf87-40d9-96a1-5af6970f25f7 from this chassis (sb_readonly=0)
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.740 156017 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2f25b83f-b794-417e-88e7-d89c680f473d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2f25b83f-b794-417e-88e7-d89c680f473d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 00:02:39 np0005603435 nova_compute[239938]: 2026-01-31 05:02:39.737 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.743 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[41cf5c92-8991-4a50-8e9f-14f5c0693bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.743 156017 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: global
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    log         /dev/log local0 debug
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    log-tag     haproxy-metadata-proxy-2f25b83f-b794-417e-88e7-d89c680f473d
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    user        root
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    group       root
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    maxconn     1024
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    pidfile     /var/lib/neutron/external/pids/2f25b83f-b794-417e-88e7-d89c680f473d.pid.haproxy
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    daemon
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: defaults
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    log global
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    mode http
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    option httplog
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    option dontlognull
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    option http-server-close
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    option forwardfor
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    retries                 3
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    timeout http-request    30s
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    timeout connect         30s
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    timeout client          32s
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    timeout server          32s
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    timeout http-keep-alive 30s
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: listen listener
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    bind 169.254.169.254:80
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]:    http-request add-header X-OVN-Network-ID 2f25b83f-b794-417e-88e7-d89c680f473d
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 00:02:39 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:39.744 156017 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d', 'env', 'PROCESS_TAG=haproxy-2f25b83f-b794-417e-88e7-d89c680f473d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2f25b83f-b794-417e-88e7-d89c680f473d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 00:02:40 np0005603435 podman[274666]: 2026-01-31 05:02:40.171771974 +0000 UTC m=+0.053132686 container create 46c7a7c04f9885df9cb53975c02279d5088e13a12e0572e684d74f634315410d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 00:02:40 np0005603435 systemd[1]: Started libpod-conmon-46c7a7c04f9885df9cb53975c02279d5088e13a12e0572e684d74f634315410d.scope.
Jan 31 00:02:40 np0005603435 podman[274666]: 2026-01-31 05:02:40.141962289 +0000 UTC m=+0.023323081 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 00:02:40 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:02:40 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d281b54190b7168309665bd250ce6a744aab962b8d246cc822230c4ed11ca6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 00:02:40 np0005603435 podman[274666]: 2026-01-31 05:02:40.266287191 +0000 UTC m=+0.147647903 container init 46c7a7c04f9885df9cb53975c02279d5088e13a12e0572e684d74f634315410d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 00:02:40 np0005603435 podman[274666]: 2026-01-31 05:02:40.274270723 +0000 UTC m=+0.155631435 container start 46c7a7c04f9885df9cb53975c02279d5088e13a12e0572e684d74f634315410d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 00:02:40 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[274684]: [NOTICE]   (274713) : New worker (274721) forked
Jan 31 00:02:40 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[274684]: [NOTICE]   (274713) : Loading success.
Jan 31 00:02:40 np0005603435 podman[274680]: 2026-01-31 05:02:40.31875162 +0000 UTC m=+0.093799251 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:02:40 np0005603435 podman[274683]: 2026-01-31 05:02:40.348519924 +0000 UTC m=+0.122234363 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 00:02:40 np0005603435 nova_compute[239938]: 2026-01-31 05:02:40.919 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:02:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.4 KiB/s wr, 39 op/s
Jan 31 00:02:41 np0005603435 nova_compute[239938]: 2026-01-31 05:02:41.709 239942 DEBUG nova.compute.manager [req-4445117a-60ed-4b79-9e20-3b3d3bacdb4d req-128381be-b214-4c06-81c0-4fcd2cf38560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Received event network-vif-plugged-ba882408-c3f6-4623-97a6-4d87a99fe278 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:02:41 np0005603435 nova_compute[239938]: 2026-01-31 05:02:41.709 239942 DEBUG oslo_concurrency.lockutils [req-4445117a-60ed-4b79-9e20-3b3d3bacdb4d req-128381be-b214-4c06-81c0-4fcd2cf38560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:02:41 np0005603435 nova_compute[239938]: 2026-01-31 05:02:41.709 239942 DEBUG oslo_concurrency.lockutils [req-4445117a-60ed-4b79-9e20-3b3d3bacdb4d req-128381be-b214-4c06-81c0-4fcd2cf38560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:02:41 np0005603435 nova_compute[239938]: 2026-01-31 05:02:41.710 239942 DEBUG oslo_concurrency.lockutils [req-4445117a-60ed-4b79-9e20-3b3d3bacdb4d req-128381be-b214-4c06-81c0-4fcd2cf38560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:02:41 np0005603435 nova_compute[239938]: 2026-01-31 05:02:41.710 239942 DEBUG nova.compute.manager [req-4445117a-60ed-4b79-9e20-3b3d3bacdb4d req-128381be-b214-4c06-81c0-4fcd2cf38560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] No waiting events found dispatching network-vif-plugged-ba882408-c3f6-4623-97a6-4d87a99fe278 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 00:02:41 np0005603435 nova_compute[239938]: 2026-01-31 05:02:41.710 239942 WARNING nova.compute.manager [req-4445117a-60ed-4b79-9e20-3b3d3bacdb4d req-128381be-b214-4c06-81c0-4fcd2cf38560 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Received unexpected event network-vif-plugged-ba882408-c3f6-4623-97a6-4d87a99fe278 for instance with vm_state building and task_state spawning.#033[00m
Jan 31 00:02:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 00:02:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2241978391' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.758 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835762.7581153, bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.759 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] VM Started (Lifecycle Event)
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.761 239942 DEBUG nova.compute.manager [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.765 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.769 239942 INFO nova.virt.libvirt.driver [-] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Instance spawned successfully.
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.769 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.792 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.800 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.807 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.808 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.808 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.809 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.810 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.811 239942 DEBUG nova.virt.libvirt.driver [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.843 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.844 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835762.7608802, bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.844 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] VM Paused (Lifecycle Event)
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.873 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.879 239942 DEBUG nova.virt.driver [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] Emitting event <LifecycleEvent: 1769835762.7642581, bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.880 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] VM Resumed (Lifecycle Event)
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.911 239942 INFO nova.compute.manager [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Took 7.43 seconds to spawn the instance on the hypervisor.
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.911 239942 DEBUG nova.compute.manager [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.914 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.924 239942 DEBUG nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.976 239942 INFO nova.compute.manager [None req-9ba74639-dffa-44b1-92a8-ffc8972a91ad - - - - - -] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 00:02:42 np0005603435 nova_compute[239938]: 2026-01-31 05:02:42.993 239942 INFO nova.compute.manager [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Took 9.82 seconds to build instance.
Jan 31 00:02:43 np0005603435 nova_compute[239938]: 2026-01-31 05:02:43.020 239942 DEBUG oslo_concurrency.lockutils [None req-d69057cf-95e2-42e8-8460-4d5eec655035 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 00:02:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e504 do_prune osdmap full prune enabled
Jan 31 00:02:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e505 e505: 3 total, 3 up, 3 in
Jan 31 00:02:43 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e505: 3 total, 3 up, 3 in
Jan 31 00:02:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 25 KiB/s wr, 74 op/s
Jan 31 00:02:43 np0005603435 nova_compute[239938]: 2026-01-31 05:02:43.664 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:02:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e505 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:02:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e505 do_prune osdmap full prune enabled
Jan 31 00:02:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e506 e506: 3 total, 3 up, 3 in
Jan 31 00:02:44 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e506: 3 total, 3 up, 3 in
Jan 31 00:02:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e506 do_prune osdmap full prune enabled
Jan 31 00:02:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e507 e507: 3 total, 3 up, 3 in
Jan 31 00:02:45 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e507: 3 total, 3 up, 3 in
Jan 31 00:02:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:02:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3143130994' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:02:45 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:02:45 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3143130994' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:02:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 27 KiB/s wr, 118 op/s
Jan 31 00:02:45 np0005603435 nova_compute[239938]: 2026-01-31 05:02:45.922 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:02:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:46.896 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ee:eb:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:65:ac:42:c9:7f'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 00:02:46 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:46.898 156017 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 00:02:46 np0005603435 nova_compute[239938]: 2026-01-31 05:02:46.924 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:02:46 np0005603435 nova_compute[239938]: 2026-01-31 05:02:46.966 239942 DEBUG nova.compute.manager [req-388309b0-bcbf-4254-98b6-9d3dd83dfaa3 req-6d1ba376-fc8f-40d0-9995-4f2b246a6f6a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Received event network-changed-ba882408-c3f6-4623-97a6-4d87a99fe278 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 00:02:46 np0005603435 nova_compute[239938]: 2026-01-31 05:02:46.967 239942 DEBUG nova.compute.manager [req-388309b0-bcbf-4254-98b6-9d3dd83dfaa3 req-6d1ba376-fc8f-40d0-9995-4f2b246a6f6a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Refreshing instance network info cache due to event network-changed-ba882408-c3f6-4623-97a6-4d87a99fe278. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 00:02:46 np0005603435 nova_compute[239938]: 2026-01-31 05:02:46.967 239942 DEBUG oslo_concurrency.lockutils [req-388309b0-bcbf-4254-98b6-9d3dd83dfaa3 req-6d1ba376-fc8f-40d0-9995-4f2b246a6f6a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "refresh_cache-bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 00:02:46 np0005603435 nova_compute[239938]: 2026-01-31 05:02:46.968 239942 DEBUG oslo_concurrency.lockutils [req-388309b0-bcbf-4254-98b6-9d3dd83dfaa3 req-6d1ba376-fc8f-40d0-9995-4f2b246a6f6a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquired lock "refresh_cache-bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 00:02:46 np0005603435 nova_compute[239938]: 2026-01-31 05:02:46.968 239942 DEBUG nova.network.neutron [req-388309b0-bcbf-4254-98b6-9d3dd83dfaa3 req-6d1ba376-fc8f-40d0-9995-4f2b246a6f6a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Refreshing network info cache for port ba882408-c3f6-4623-97a6-4d87a99fe278 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 00:02:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e507 do_prune osdmap full prune enabled
Jan 31 00:02:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e508 e508: 3 total, 3 up, 3 in
Jan 31 00:02:47 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e508: 3 total, 3 up, 3 in
Jan 31 00:02:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.8 KiB/s wr, 300 op/s
Jan 31 00:02:48 np0005603435 nova_compute[239938]: 2026-01-31 05:02:48.064 239942 DEBUG nova.network.neutron [req-388309b0-bcbf-4254-98b6-9d3dd83dfaa3 req-6d1ba376-fc8f-40d0-9995-4f2b246a6f6a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Updated VIF entry in instance network info cache for port ba882408-c3f6-4623-97a6-4d87a99fe278. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 00:02:48 np0005603435 nova_compute[239938]: 2026-01-31 05:02:48.065 239942 DEBUG nova.network.neutron [req-388309b0-bcbf-4254-98b6-9d3dd83dfaa3 req-6d1ba376-fc8f-40d0-9995-4f2b246a6f6a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Updating instance_info_cache with network_info: [{"id": "ba882408-c3f6-4623-97a6-4d87a99fe278", "address": "fa:16:3e:54:0c:6d", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba882408-c3", "ovs_interfaceid": "ba882408-c3f6-4623-97a6-4d87a99fe278", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 00:02:48 np0005603435 nova_compute[239938]: 2026-01-31 05:02:48.092 239942 DEBUG oslo_concurrency.lockutils [req-388309b0-bcbf-4254-98b6-9d3dd83dfaa3 req-6d1ba376-fc8f-40d0-9995-4f2b246a6f6a c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Releasing lock "refresh_cache-bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 00:02:48 np0005603435 nova_compute[239938]: 2026-01-31 05:02:48.666 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:02:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:02:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e508 do_prune osdmap full prune enabled
Jan 31 00:02:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e509 e509: 3 total, 3 up, 3 in
Jan 31 00:02:48 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e509: 3 total, 3 up, 3 in
Jan 31 00:02:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.0 KiB/s wr, 247 op/s
Jan 31 00:02:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e509 do_prune osdmap full prune enabled
Jan 31 00:02:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e510 e510: 3 total, 3 up, 3 in
Jan 31 00:02:50 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e510: 3 total, 3 up, 3 in
Jan 31 00:02:50 np0005603435 nova_compute[239938]: 2026-01-31 05:02:50.924 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:02:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.7 KiB/s wr, 183 op/s
Jan 31 00:02:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e510 do_prune osdmap full prune enabled
Jan 31 00:02:52 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:52.900 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8e8c9464-4b9f-4423-88e0-e5889c10f4ca, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 00:02:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e511 e511: 3 total, 3 up, 3 in
Jan 31 00:02:52 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e511: 3 total, 3 up, 3 in
Jan 31 00:02:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:02:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/113054228' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:02:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:02:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/113054228' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:02:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.2 KiB/s wr, 69 op/s
Jan 31 00:02:53 np0005603435 nova_compute[239938]: 2026-01-31 05:02:53.670 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:02:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e511 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:02:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e511 do_prune osdmap full prune enabled
Jan 31 00:02:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e512 e512: 3 total, 3 up, 3 in
Jan 31 00:02:53 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e512: 3 total, 3 up, 3 in
Jan 31 00:02:54 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:54Z|00071|pinctrl(ovn_pinctrl0)|WARN|Dropped 1 log messages in last 286 seconds (most recently, 286 seconds ago) due to excessive rate
Jan 31 00:02:54 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:54Z|00072|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.12
Jan 31 00:02:54 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:54Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:54:0c:6d 10.100.0.12
Jan 31 00:02:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 275 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 686 KiB/s wr, 116 op/s
Jan 31 00:02:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:55.926 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 00:02:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:55.926 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 00:02:55 np0005603435 nova_compute[239938]: 2026-01-31 05:02:55.926 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:02:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:02:55.927 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 00:02:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 283 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 178 op/s
Jan 31 00:02:58 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:58Z|00074|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.12
Jan 31 00:02:58 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:58Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:54:0c:6d 10.100.0.12
Jan 31 00:02:58 np0005603435 nova_compute[239938]: 2026-01-31 05:02:58.671 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:02:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e512 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:02:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e512 do_prune osdmap full prune enabled
Jan 31 00:02:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 e513: 3 total, 3 up, 3 in
Jan 31 00:02:58 np0005603435 ceph-mon[75307]: log_channel(cluster) log [DBG] : osdmap e513: 3 total, 3 up, 3 in
Jan 31 00:02:59 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:59Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:0c:6d 10.100.0.12
Jan 31 00:02:59 np0005603435 ovn_controller[145670]: 2026-01-31T05:02:59Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:0c:6d 10.100.0.12
Jan 31 00:02:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 283 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Jan 31 00:03:00 np0005603435 nova_compute[239938]: 2026-01-31 05:03:00.929 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:03:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 287 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.0 MiB/s wr, 110 op/s
Jan 31 00:03:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 287 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.7 MiB/s wr, 92 op/s
Jan 31 00:03:03 np0005603435 nova_compute[239938]: 2026-01-31 05:03:03.719 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:03:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:03:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 287 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 62 op/s
Jan 31 00:03:05 np0005603435 nova_compute[239938]: 2026-01-31 05:03:05.932 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:03:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_05:03:06
Jan 31 00:03:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 00:03:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 31 00:03:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'images', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', '.mgr']
Jan 31 00:03:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 00:03:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:03:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:03:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:03:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:03:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:03:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:03:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 287 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 430 KiB/s wr, 3 op/s
Jan 31 00:03:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 00:03:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 00:03:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 00:03:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 00:03:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 00:03:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 00:03:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 00:03:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 00:03:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 00:03:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 00:03:08 np0005603435 nova_compute[239938]: 2026-01-31 05:03:08.721 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:03:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:03:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 287 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 380 KiB/s rd, 399 KiB/s wr, 3 op/s
Jan 31 00:03:10 np0005603435 nova_compute[239938]: 2026-01-31 05:03:10.988 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:03:11 np0005603435 podman[274783]: 2026-01-31 05:03:11.110630185 +0000 UTC m=+0.067025109 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 00:03:11 np0005603435 podman[274784]: 2026-01-31 05:03:11.154652281 +0000 UTC m=+0.111377133 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 00:03:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 295 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 683 KiB/s rd, 789 KiB/s wr, 4 op/s
Jan 31 00:03:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 295 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 771 KiB/s rd, 449 KiB/s wr, 6 op/s
Jan 31 00:03:13 np0005603435 nova_compute[239938]: 2026-01-31 05:03:13.724 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:03:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 295 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 439 KiB/s wr, 4 op/s
Jan 31 00:03:15 np0005603435 nova_compute[239938]: 2026-01-31 05:03:15.994 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:16 np0005603435 ovn_controller[145670]: 2026-01-31T05:03:16Z|00281|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.1241520616298247e-05 of space, bias 1.0, pg target 0.003372456184889474 quantized to 32 (current 32)
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003251596502224713 of space, bias 1.0, pg target 0.9754789506674139 quantized to 32 (current 32)
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.2193083522483894e-06 of space, bias 1.0, pg target 0.00036579250567451684 quantized to 32 (current 32)
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000666942308911155 of space, bias 1.0, pg target 0.2000826926733465 quantized to 32 (current 32)
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.287129865118386e-07 of space, bias 4.0, pg target 0.0009944555838142064 quantized to 16 (current 16)
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 00:03:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 295 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 440 KiB/s wr, 5 op/s
Jan 31 00:03:18 np0005603435 nova_compute[239938]: 2026-01-31 05:03:18.727 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.211 239942 DEBUG oslo_concurrency.lockutils [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.212 239942 DEBUG oslo_concurrency.lockutils [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.212 239942 DEBUG oslo_concurrency.lockutils [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.213 239942 DEBUG oslo_concurrency.lockutils [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.213 239942 DEBUG oslo_concurrency.lockutils [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.215 239942 INFO nova.compute.manager [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Terminating instance#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.217 239942 DEBUG nova.compute.manager [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 00:03:19 np0005603435 kernel: tapba882408-c3 (unregistering): left promiscuous mode
Jan 31 00:03:19 np0005603435 NetworkManager[49097]: <info>  [1769835799.4967] device (tapba882408-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.534 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.536 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:19 np0005603435 ovn_controller[145670]: 2026-01-31T05:03:19Z|00282|binding|INFO|Releasing lport ba882408-c3f6-4623-97a6-4d87a99fe278 from this chassis (sb_readonly=0)
Jan 31 00:03:19 np0005603435 ovn_controller[145670]: 2026-01-31T05:03:19Z|00283|binding|INFO|Setting lport ba882408-c3f6-4623-97a6-4d87a99fe278 down in Southbound
Jan 31 00:03:19 np0005603435 ovn_controller[145670]: 2026-01-31T05:03:19Z|00284|binding|INFO|Removing iface tapba882408-c3 ovn-installed in OVS
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.543 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:19.549 156017 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:0c:6d 10.100.0.12'], port_security=['fa:16:3e:54:0c:6d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f25b83f-b794-417e-88e7-d89c680f473d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48935f8745744c4ba5400c13f80e0379', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7b59f016-9fba-4b72-aa35-0db4493e20dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94c57d33-0e3a-4b86-87cd-ae1ca9bb064d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>], logical_port=ba882408-c3f6-4623-97a6-4d87a99fe278) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f67bc2aab80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.550 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:19.551 156017 INFO neutron.agent.ovn.metadata.agent [-] Port ba882408-c3f6-4623-97a6-4d87a99fe278 in datapath 2f25b83f-b794-417e-88e7-d89c680f473d unbound from our chassis#033[00m
Jan 31 00:03:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:19.554 156017 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f25b83f-b794-417e-88e7-d89c680f473d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 00:03:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:19.555 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c42b1ded-0b97-4629-ae4c-34e4d0a33d45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:03:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:19.556 156017 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d namespace which is not needed anymore#033[00m
Jan 31 00:03:19 np0005603435 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Jan 31 00:03:19 np0005603435 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Consumed 15.736s CPU time.
Jan 31 00:03:19 np0005603435 systemd-machined[208030]: Machine qemu-29-instance-0000001d terminated.
Jan 31 00:03:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 295 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 436 KiB/s wr, 4 op/s
Jan 31 00:03:19 np0005603435 kernel: tapba882408-c3: entered promiscuous mode
Jan 31 00:03:19 np0005603435 kernel: tapba882408-c3 (unregistering): left promiscuous mode
Jan 31 00:03:19 np0005603435 NetworkManager[49097]: <info>  [1769835799.6516] manager: (tapba882408-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.656 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.670 239942 INFO nova.virt.libvirt.driver [-] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Instance destroyed successfully.#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.670 239942 DEBUG nova.objects.instance [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lazy-loading 'resources' on Instance uuid bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.681 239942 DEBUG nova.virt.libvirt.vif [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T05:02:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-2059295489',display_name='tempest-TestEncryptedCinderVolumes-server-2059295489',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-2059295489',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAmr0MUFNJjz18mvNHr0kofSqXL+MOUCKmtJGcrQVuZqzDEVyxUUFebchvjqsqS9tyThgYSCkXKWLzTW0ED0WOyTQNQBDzi5dd8NYQAYU+nK8F6As1qr5NixmuIDexDl8Q==',key_name='tempest-TestEncryptedCinderVolumes-1017268198',keypairs=<?>,launch_index=0,launched_at=2026-01-31T05:02:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48935f8745744c4ba5400c13f80e0379',ramdisk_id='',reservation_id='r-cjyzieeq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-1466370108',owner_user_name='tempest-TestEncryptedCinderVolumes-1466370108-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T05:02:42Z,user_data=None,user_id='6784d92c92b24526a302a1a74a813c76',uuid=bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ba882408-c3f6-4623-97a6-4d87a99fe278", "address": "fa:16:3e:54:0c:6d", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba882408-c3", "ovs_interfaceid": "ba882408-c3f6-4623-97a6-4d87a99fe278", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.682 239942 DEBUG nova.network.os_vif_util [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converting VIF {"id": "ba882408-c3f6-4623-97a6-4d87a99fe278", "address": "fa:16:3e:54:0c:6d", "network": {"id": "2f25b83f-b794-417e-88e7-d89c680f473d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-685192512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48935f8745744c4ba5400c13f80e0379", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba882408-c3", "ovs_interfaceid": "ba882408-c3f6-4623-97a6-4d87a99fe278", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.682 239942 DEBUG nova.network.os_vif_util [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:0c:6d,bridge_name='br-int',has_traffic_filtering=True,id=ba882408-c3f6-4623-97a6-4d87a99fe278,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba882408-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.682 239942 DEBUG os_vif [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:0c:6d,bridge_name='br-int',has_traffic_filtering=True,id=ba882408-c3f6-4623-97a6-4d87a99fe278,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba882408-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.685 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.686 239942 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba882408-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.690 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.693 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.696 239942 INFO os_vif [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:0c:6d,bridge_name='br-int',has_traffic_filtering=True,id=ba882408-c3f6-4623-97a6-4d87a99fe278,network=Network(2f25b83f-b794-417e-88e7-d89c680f473d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba882408-c3')#033[00m
Jan 31 00:03:19 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[274684]: [NOTICE]   (274713) : haproxy version is 2.8.14-c23fe91
Jan 31 00:03:19 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[274684]: [NOTICE]   (274713) : path to executable is /usr/sbin/haproxy
Jan 31 00:03:19 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[274684]: [WARNING]  (274713) : Exiting Master process...
Jan 31 00:03:19 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[274684]: [WARNING]  (274713) : Exiting Master process...
Jan 31 00:03:19 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[274684]: [ALERT]    (274713) : Current worker (274721) exited with code 143 (Terminated)
Jan 31 00:03:19 np0005603435 neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d[274684]: [WARNING]  (274713) : All workers exited. Exiting... (0)
Jan 31 00:03:19 np0005603435 systemd[1]: libpod-46c7a7c04f9885df9cb53975c02279d5088e13a12e0572e684d74f634315410d.scope: Deactivated successfully.
Jan 31 00:03:19 np0005603435 podman[274854]: 2026-01-31 05:03:19.744371439 +0000 UTC m=+0.089740487 container died 46c7a7c04f9885df9cb53975c02279d5088e13a12e0572e684d74f634315410d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 00:03:19 np0005603435 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46c7a7c04f9885df9cb53975c02279d5088e13a12e0572e684d74f634315410d-userdata-shm.mount: Deactivated successfully.
Jan 31 00:03:19 np0005603435 systemd[1]: var-lib-containers-storage-overlay-f8d281b54190b7168309665bd250ce6a744aab962b8d246cc822230c4ed11ca6-merged.mount: Deactivated successfully.
Jan 31 00:03:19 np0005603435 podman[274854]: 2026-01-31 05:03:19.781930629 +0000 UTC m=+0.127299707 container cleanup 46c7a7c04f9885df9cb53975c02279d5088e13a12e0572e684d74f634315410d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.795 239942 DEBUG nova.compute.manager [req-bfc2437a-1a7f-4787-a386-0ffe2236c257 req-13f42237-3885-4d94-9c6b-33fa656accb2 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Received event network-vif-unplugged-ba882408-c3f6-4623-97a6-4d87a99fe278 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.795 239942 DEBUG oslo_concurrency.lockutils [req-bfc2437a-1a7f-4787-a386-0ffe2236c257 req-13f42237-3885-4d94-9c6b-33fa656accb2 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.796 239942 DEBUG oslo_concurrency.lockutils [req-bfc2437a-1a7f-4787-a386-0ffe2236c257 req-13f42237-3885-4d94-9c6b-33fa656accb2 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.797 239942 DEBUG oslo_concurrency.lockutils [req-bfc2437a-1a7f-4787-a386-0ffe2236c257 req-13f42237-3885-4d94-9c6b-33fa656accb2 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.797 239942 DEBUG nova.compute.manager [req-bfc2437a-1a7f-4787-a386-0ffe2236c257 req-13f42237-3885-4d94-9c6b-33fa656accb2 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] No waiting events found dispatching network-vif-unplugged-ba882408-c3f6-4623-97a6-4d87a99fe278 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.797 239942 DEBUG nova.compute.manager [req-bfc2437a-1a7f-4787-a386-0ffe2236c257 req-13f42237-3885-4d94-9c6b-33fa656accb2 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Received event network-vif-unplugged-ba882408-c3f6-4623-97a6-4d87a99fe278 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 00:03:19 np0005603435 systemd[1]: libpod-conmon-46c7a7c04f9885df9cb53975c02279d5088e13a12e0572e684d74f634315410d.scope: Deactivated successfully.
Jan 31 00:03:19 np0005603435 podman[274905]: 2026-01-31 05:03:19.855771329 +0000 UTC m=+0.052602312 container remove 46c7a7c04f9885df9cb53975c02279d5088e13a12e0572e684d74f634315410d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 00:03:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:19.863 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a3f122-59cd-4c34-a4e8-cf201656acb8]: (4, ('Sat Jan 31 05:03:19 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d (46c7a7c04f9885df9cb53975c02279d5088e13a12e0572e684d74f634315410d)\n46c7a7c04f9885df9cb53975c02279d5088e13a12e0572e684d74f634315410d\nSat Jan 31 05:03:19 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d (46c7a7c04f9885df9cb53975c02279d5088e13a12e0572e684d74f634315410d)\n46c7a7c04f9885df9cb53975c02279d5088e13a12e0572e684d74f634315410d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:03:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:19.865 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[12fddc70-8842-45ea-aeca-e409987f719e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:03:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:19.866 156017 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f25b83f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 00:03:19 np0005603435 kernel: tap2f25b83f-b0: left promiscuous mode
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.868 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:19.871 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9fa09a-23a4-4be4-997b-25902762c2e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.873 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:19.883 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[3644e505-4d2d-40e7-9862-88ed9891d474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:03:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:19.884 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3de730-3543-4541-8184-fac34ff5acb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:03:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:19.899 247621 DEBUG oslo.privsep.daemon [-] privsep: reply[b23ddfda-a720-42db-96e3-69c076dd67a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481392, 'reachable_time': 44849, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274921, 'error': None, 'target': 'ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:03:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:19.902 156620 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2f25b83f-b794-417e-88e7-d89c680f473d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 00:03:19 np0005603435 systemd[1]: run-netns-ovnmeta\x2d2f25b83f\x2db794\x2d417e\x2d88e7\x2dd89c680f473d.mount: Deactivated successfully.
Jan 31 00:03:19 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:19.902 156620 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee2ee42-e15d-4587-b60f-25533f04b1dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.912 239942 INFO nova.virt.libvirt.driver [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Deleting instance files /var/lib/nova/instances/bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e_del#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.913 239942 INFO nova.virt.libvirt.driver [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Deletion of /var/lib/nova/instances/bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e_del complete#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.968 239942 INFO nova.compute.manager [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.969 239942 DEBUG oslo.service.loopingcall [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.969 239942 DEBUG nova.compute.manager [-] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 00:03:19 np0005603435 nova_compute[239938]: 2026-01-31 05:03:19.970 239942 DEBUG nova.network.neutron [-] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 00:03:20 np0005603435 nova_compute[239938]: 2026-01-31 05:03:20.687 239942 DEBUG nova.network.neutron [-] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 00:03:20 np0005603435 nova_compute[239938]: 2026-01-31 05:03:20.717 239942 INFO nova.compute.manager [-] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Took 0.75 seconds to deallocate network for instance.#033[00m
Jan 31 00:03:20 np0005603435 nova_compute[239938]: 2026-01-31 05:03:20.996 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.024 239942 INFO nova.compute.manager [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Took 0.31 seconds to detach 1 volumes for instance.#033[00m
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.088 239942 DEBUG oslo_concurrency.lockutils [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.089 239942 DEBUG oslo_concurrency.lockutils [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.165 239942 DEBUG oslo_concurrency.processutils [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:03:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 295 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 532 KiB/s rd, 436 KiB/s wr, 14 op/s
Jan 31 00:03:21 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:03:21 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/880901499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.657 239942 DEBUG oslo_concurrency.processutils [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.662 239942 DEBUG nova.compute.provider_tree [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.678 239942 DEBUG nova.scheduler.client.report [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.700 239942 DEBUG oslo_concurrency.lockutils [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.800 239942 INFO nova.scheduler.client.report [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Deleted allocations for instance bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e#033[00m
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.861 239942 DEBUG oslo_concurrency.lockutils [None req-6ac2a965-48d0-4b3b-8033-d4ee66dec931 6784d92c92b24526a302a1a74a813c76 48935f8745744c4ba5400c13f80e0379 - - default default] Lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.868 239942 DEBUG nova.compute.manager [req-78995f75-84d7-4d43-9a96-2aa784336232 req-c550d9a6-bc8a-45da-b4ac-a2ad65e57c81 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Received event network-vif-plugged-ba882408-c3f6-4623-97a6-4d87a99fe278 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.869 239942 DEBUG oslo_concurrency.lockutils [req-78995f75-84d7-4d43-9a96-2aa784336232 req-c550d9a6-bc8a-45da-b4ac-a2ad65e57c81 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Acquiring lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.869 239942 DEBUG oslo_concurrency.lockutils [req-78995f75-84d7-4d43-9a96-2aa784336232 req-c550d9a6-bc8a-45da-b4ac-a2ad65e57c81 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.870 239942 DEBUG oslo_concurrency.lockutils [req-78995f75-84d7-4d43-9a96-2aa784336232 req-c550d9a6-bc8a-45da-b4ac-a2ad65e57c81 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] Lock "bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.870 239942 DEBUG nova.compute.manager [req-78995f75-84d7-4d43-9a96-2aa784336232 req-c550d9a6-bc8a-45da-b4ac-a2ad65e57c81 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] No waiting events found dispatching network-vif-plugged-ba882408-c3f6-4623-97a6-4d87a99fe278 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.870 239942 WARNING nova.compute.manager [req-78995f75-84d7-4d43-9a96-2aa784336232 req-c550d9a6-bc8a-45da-b4ac-a2ad65e57c81 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Received unexpected event network-vif-plugged-ba882408-c3f6-4623-97a6-4d87a99fe278 for instance with vm_state deleted and task_state None.#033[00m
Jan 31 00:03:21 np0005603435 nova_compute[239938]: 2026-01-31 05:03:21.871 239942 DEBUG nova.compute.manager [req-78995f75-84d7-4d43-9a96-2aa784336232 req-c550d9a6-bc8a-45da-b4ac-a2ad65e57c81 c06dc2de56324e84a1d293655210c652 1a54c91c41ff45b9b61e8109519370fd - - default default] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Received event network-vif-deleted-ba882408-c3f6-4623-97a6-4d87a99fe278 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 00:03:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:03:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3510288378' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:03:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:03:23 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3510288378' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:03:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 295 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 5.6 KiB/s wr, 21 op/s
Jan 31 00:03:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:03:24 np0005603435 nova_compute[239938]: 2026-01-31 05:03:24.690 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:24 np0005603435 nova_compute[239938]: 2026-01-31 05:03:24.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:03:24 np0005603435 nova_compute[239938]: 2026-01-31 05:03:24.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:03:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 279 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 1.7 KiB/s wr, 23 op/s
Jan 31 00:03:25 np0005603435 nova_compute[239938]: 2026-01-31 05:03:25.997 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:03:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2850814082' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:03:26 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:03:26 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2850814082' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:03:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 2.4 KiB/s wr, 39 op/s
Jan 31 00:03:27 np0005603435 nova_compute[239938]: 2026-01-31 05:03:27.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:03:28 np0005603435 podman[275037]: 2026-01-31 05:03:28.009542477 +0000 UTC m=+0.080489202 container exec 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 00:03:28 np0005603435 podman[275058]: 2026-01-31 05:03:28.183406688 +0000 UTC m=+0.051086506 container exec_died 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:03:28 np0005603435 podman[275037]: 2026-01-31 05:03:28.18764432 +0000 UTC m=+0.258591035 container exec_died 01d4b3ad3ca908f18738d0b35604ff03a2be77510994f269e4b62575c6319f79 (image=quay.io/ceph/ceph:v20, name=ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mon-compute-0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 00:03:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:03:28 np0005603435 nova_compute[239938]: 2026-01-31 05:03:28.883 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:03:28 np0005603435 nova_compute[239938]: 2026-01-31 05:03:28.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:03:28 np0005603435 nova_compute[239938]: 2026-01-31 05:03:28.886 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 00:03:28 np0005603435 nova_compute[239938]: 2026-01-31 05:03:28.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 00:03:28 np0005603435 nova_compute[239938]: 2026-01-31 05:03:28.900 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 00:03:28 np0005603435 nova_compute[239938]: 2026-01-31 05:03:28.901 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:03:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 00:03:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:03:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 00:03:28 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 00:03:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 1.4 KiB/s wr, 38 op/s
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 00:03:29 np0005603435 nova_compute[239938]: 2026-01-31 05:03:29.693 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:03:29 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 00:03:30 np0005603435 podman[275367]: 2026-01-31 05:03:30.027031976 +0000 UTC m=+0.059483278 container create 7ece8139271bcfd3682c12697f1f7de01d538b493511c52c49884ae297fb525e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 00:03:30 np0005603435 systemd[1]: Started libpod-conmon-7ece8139271bcfd3682c12697f1f7de01d538b493511c52c49884ae297fb525e.scope.
Jan 31 00:03:30 np0005603435 podman[275367]: 2026-01-31 05:03:30.001112084 +0000 UTC m=+0.033563406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:03:30 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:03:30 np0005603435 podman[275367]: 2026-01-31 05:03:30.118373477 +0000 UTC m=+0.150824839 container init 7ece8139271bcfd3682c12697f1f7de01d538b493511c52c49884ae297fb525e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 00:03:30 np0005603435 podman[275367]: 2026-01-31 05:03:30.126308018 +0000 UTC m=+0.158759330 container start 7ece8139271bcfd3682c12697f1f7de01d538b493511c52c49884ae297fb525e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 00:03:30 np0005603435 podman[275367]: 2026-01-31 05:03:30.1305854 +0000 UTC m=+0.163036752 container attach 7ece8139271bcfd3682c12697f1f7de01d538b493511c52c49884ae297fb525e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mahavira, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:03:30 np0005603435 optimistic_mahavira[275383]: 167 167
Jan 31 00:03:30 np0005603435 systemd[1]: libpod-7ece8139271bcfd3682c12697f1f7de01d538b493511c52c49884ae297fb525e.scope: Deactivated successfully.
Jan 31 00:03:30 np0005603435 conmon[275383]: conmon 7ece8139271bcfd3682c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ece8139271bcfd3682c12697f1f7de01d538b493511c52c49884ae297fb525e.scope/container/memory.events
Jan 31 00:03:30 np0005603435 podman[275367]: 2026-01-31 05:03:30.134381901 +0000 UTC m=+0.166833213 container died 7ece8139271bcfd3682c12697f1f7de01d538b493511c52c49884ae297fb525e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mahavira, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 00:03:30 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0b98b8fb08f3c9101a3a580bda8c9b8f5db767bd38527ea00ced8c60b3747e04-merged.mount: Deactivated successfully.
Jan 31 00:03:30 np0005603435 podman[275367]: 2026-01-31 05:03:30.183342336 +0000 UTC m=+0.215793618 container remove 7ece8139271bcfd3682c12697f1f7de01d538b493511c52c49884ae297fb525e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mahavira, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 00:03:30 np0005603435 systemd[1]: libpod-conmon-7ece8139271bcfd3682c12697f1f7de01d538b493511c52c49884ae297fb525e.scope: Deactivated successfully.
Jan 31 00:03:30 np0005603435 podman[275407]: 2026-01-31 05:03:30.346383877 +0000 UTC m=+0.061240970 container create 299a8d05c304773e71eccc61b1c106795342960c083ee0123c53cefb6a120fa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_jones, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 00:03:30 np0005603435 systemd[1]: Started libpod-conmon-299a8d05c304773e71eccc61b1c106795342960c083ee0123c53cefb6a120fa1.scope.
Jan 31 00:03:30 np0005603435 podman[275407]: 2026-01-31 05:03:30.322985976 +0000 UTC m=+0.037843119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:03:30 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:03:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a654bfc3b69356a60aa29fb00b869c67d83ab67bea0726345d6ee6f6f6b7c177/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 00:03:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a654bfc3b69356a60aa29fb00b869c67d83ab67bea0726345d6ee6f6f6b7c177/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 00:03:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a654bfc3b69356a60aa29fb00b869c67d83ab67bea0726345d6ee6f6f6b7c177/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 00:03:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a654bfc3b69356a60aa29fb00b869c67d83ab67bea0726345d6ee6f6f6b7c177/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 00:03:30 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a654bfc3b69356a60aa29fb00b869c67d83ab67bea0726345d6ee6f6f6b7c177/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 00:03:30 np0005603435 podman[275407]: 2026-01-31 05:03:30.451923889 +0000 UTC m=+0.166780992 container init 299a8d05c304773e71eccc61b1c106795342960c083ee0123c53cefb6a120fa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 00:03:30 np0005603435 podman[275407]: 2026-01-31 05:03:30.459775118 +0000 UTC m=+0.174632171 container start 299a8d05c304773e71eccc61b1c106795342960c083ee0123c53cefb6a120fa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_jones, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 00:03:30 np0005603435 podman[275407]: 2026-01-31 05:03:30.46406243 +0000 UTC m=+0.178919483 container attach 299a8d05c304773e71eccc61b1c106795342960c083ee0123c53cefb6a120fa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_jones, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 00:03:30 np0005603435 festive_jones[275423]: --> passed data devices: 0 physical, 3 LVM
Jan 31 00:03:30 np0005603435 festive_jones[275423]: --> All data devices are unavailable
Jan 31 00:03:30 np0005603435 systemd[1]: libpod-299a8d05c304773e71eccc61b1c106795342960c083ee0123c53cefb6a120fa1.scope: Deactivated successfully.
Jan 31 00:03:30 np0005603435 podman[275407]: 2026-01-31 05:03:30.903032741 +0000 UTC m=+0.617889834 container died 299a8d05c304773e71eccc61b1c106795342960c083ee0123c53cefb6a120fa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_jones, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 00:03:30 np0005603435 systemd[1]: var-lib-containers-storage-overlay-a654bfc3b69356a60aa29fb00b869c67d83ab67bea0726345d6ee6f6f6b7c177-merged.mount: Deactivated successfully.
Jan 31 00:03:30 np0005603435 podman[275407]: 2026-01-31 05:03:30.956582796 +0000 UTC m=+0.671439849 container remove 299a8d05c304773e71eccc61b1c106795342960c083ee0123c53cefb6a120fa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Jan 31 00:03:30 np0005603435 systemd[1]: libpod-conmon-299a8d05c304773e71eccc61b1c106795342960c083ee0123c53cefb6a120fa1.scope: Deactivated successfully.
Jan 31 00:03:31 np0005603435 nova_compute[239938]: 2026-01-31 05:03:30.999 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:03:31 np0005603435 podman[275519]: 2026-01-31 05:03:31.36231879 +0000 UTC m=+0.057270385 container create d340e7334b8c7dc76457f54ca6695c9c66ead91817712ad94dfefc4d5ac85f68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 00:03:31 np0005603435 systemd[1]: Started libpod-conmon-d340e7334b8c7dc76457f54ca6695c9c66ead91817712ad94dfefc4d5ac85f68.scope.
Jan 31 00:03:31 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:03:31 np0005603435 podman[275519]: 2026-01-31 05:03:31.337007462 +0000 UTC m=+0.031959087 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:03:31 np0005603435 podman[275519]: 2026-01-31 05:03:31.440157567 +0000 UTC m=+0.135109162 container init d340e7334b8c7dc76457f54ca6695c9c66ead91817712ad94dfefc4d5ac85f68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Jan 31 00:03:31 np0005603435 podman[275519]: 2026-01-31 05:03:31.447954624 +0000 UTC m=+0.142906219 container start d340e7334b8c7dc76457f54ca6695c9c66ead91817712ad94dfefc4d5ac85f68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 00:03:31 np0005603435 podman[275519]: 2026-01-31 05:03:31.452123254 +0000 UTC m=+0.147074939 container attach d340e7334b8c7dc76457f54ca6695c9c66ead91817712ad94dfefc4d5ac85f68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 00:03:31 np0005603435 stoic_solomon[275535]: 167 167
Jan 31 00:03:31 np0005603435 systemd[1]: libpod-d340e7334b8c7dc76457f54ca6695c9c66ead91817712ad94dfefc4d5ac85f68.scope: Deactivated successfully.
Jan 31 00:03:31 np0005603435 podman[275519]: 2026-01-31 05:03:31.453881616 +0000 UTC m=+0.148833241 container died d340e7334b8c7dc76457f54ca6695c9c66ead91817712ad94dfefc4d5ac85f68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:03:31 np0005603435 systemd[1]: var-lib-containers-storage-overlay-60ab4ef72ba7935c265a045441fe52818e97701631d889dc83968f2f01e684f2-merged.mount: Deactivated successfully.
Jan 31 00:03:31 np0005603435 podman[275519]: 2026-01-31 05:03:31.496356295 +0000 UTC m=+0.191307920 container remove d340e7334b8c7dc76457f54ca6695c9c66ead91817712ad94dfefc4d5ac85f68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 00:03:31 np0005603435 systemd[1]: libpod-conmon-d340e7334b8c7dc76457f54ca6695c9c66ead91817712ad94dfefc4d5ac85f68.scope: Deactivated successfully.
Jan 31 00:03:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 240 KiB/s rd, 1.5 KiB/s wr, 44 op/s
Jan 31 00:03:31 np0005603435 podman[275560]: 2026-01-31 05:03:31.678652478 +0000 UTC m=+0.043922684 container create 66a1aa03b535a984daa7f7961749e210527541af1b070a1dbf67b9117c427d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 00:03:31 np0005603435 systemd[1]: Started libpod-conmon-66a1aa03b535a984daa7f7961749e210527541af1b070a1dbf67b9117c427d21.scope.
Jan 31 00:03:31 np0005603435 nova_compute[239938]: 2026-01-31 05:03:31.724 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:03:31 np0005603435 podman[275560]: 2026-01-31 05:03:31.660699538 +0000 UTC m=+0.025969784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:03:31 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:03:31 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85bd5aea5a6977ec835cc79bcd9e9b08c148f7141be340f3041ba770e2518fce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 00:03:31 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85bd5aea5a6977ec835cc79bcd9e9b08c148f7141be340f3041ba770e2518fce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 00:03:31 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85bd5aea5a6977ec835cc79bcd9e9b08c148f7141be340f3041ba770e2518fce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 00:03:31 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85bd5aea5a6977ec835cc79bcd9e9b08c148f7141be340f3041ba770e2518fce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 00:03:31 np0005603435 nova_compute[239938]: 2026-01-31 05:03:31.788 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:03:31 np0005603435 podman[275560]: 2026-01-31 05:03:31.790915352 +0000 UTC m=+0.156185608 container init 66a1aa03b535a984daa7f7961749e210527541af1b070a1dbf67b9117c427d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 00:03:31 np0005603435 podman[275560]: 2026-01-31 05:03:31.803169456 +0000 UTC m=+0.168439702 container start 66a1aa03b535a984daa7f7961749e210527541af1b070a1dbf67b9117c427d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:03:31 np0005603435 podman[275560]: 2026-01-31 05:03:31.807467519 +0000 UTC m=+0.172737825 container attach 66a1aa03b535a984daa7f7961749e210527541af1b070a1dbf67b9117c427d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 00:03:31 np0005603435 nova_compute[239938]: 2026-01-31 05:03:31.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 00:03:31 np0005603435 nova_compute[239938]: 2026-01-31 05:03:31.910 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 00:03:31 np0005603435 nova_compute[239938]: 2026-01-31 05:03:31.911 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 00:03:31 np0005603435 nova_compute[239938]: 2026-01-31 05:03:31.911 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 00:03:31 np0005603435 nova_compute[239938]: 2026-01-31 05:03:31.912 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 00:03:31 np0005603435 nova_compute[239938]: 2026-01-31 05:03:31.912 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]: {
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:    "0": [
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:        {
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "devices": [
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "/dev/loop3"
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            ],
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "lv_name": "ceph_lv0",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "lv_size": "21470642176",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "name": "ceph_lv0",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "tags": {
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.cephx_lockbox_secret": "",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.cluster_name": "ceph",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.crush_device_class": "",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.encrypted": "0",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.objectstore": "bluestore",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.osd_id": "0",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.type": "block",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.vdo": "0",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.with_tpm": "0"
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            },
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "type": "block",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "vg_name": "ceph_vg0"
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:        }
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:    ],
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:    "1": [
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:        {
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "devices": [
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "/dev/loop4"
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            ],
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "lv_name": "ceph_lv1",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "lv_size": "21470642176",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "name": "ceph_lv1",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "tags": {
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.cephx_lockbox_secret": "",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.cluster_name": "ceph",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.crush_device_class": "",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.encrypted": "0",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.objectstore": "bluestore",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.osd_id": "1",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.type": "block",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.vdo": "0",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.with_tpm": "0"
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            },
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "type": "block",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "vg_name": "ceph_vg1"
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:        }
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:    ],
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:    "2": [
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:        {
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "devices": [
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "/dev/loop5"
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            ],
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "lv_name": "ceph_lv2",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "lv_size": "21470642176",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "name": "ceph_lv2",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "tags": {
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.cephx_lockbox_secret": "",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.cluster_name": "ceph",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.crush_device_class": "",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.encrypted": "0",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.objectstore": "bluestore",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.osd_id": "2",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.type": "block",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.vdo": "0",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:                "ceph.with_tpm": "0"
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            },
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "type": "block",
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:            "vg_name": "ceph_vg2"
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:        }
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]:    ]
Jan 31 00:03:32 np0005603435 inspiring_poitras[275577]: }
Jan 31 00:03:32 np0005603435 systemd[1]: libpod-66a1aa03b535a984daa7f7961749e210527541af1b070a1dbf67b9117c427d21.scope: Deactivated successfully.
Jan 31 00:03:32 np0005603435 podman[275607]: 2026-01-31 05:03:32.19102932 +0000 UTC m=+0.034963269 container died 66a1aa03b535a984daa7f7961749e210527541af1b070a1dbf67b9117c427d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 31 00:03:32 np0005603435 systemd[1]: var-lib-containers-storage-overlay-85bd5aea5a6977ec835cc79bcd9e9b08c148f7141be340f3041ba770e2518fce-merged.mount: Deactivated successfully.
Jan 31 00:03:32 np0005603435 podman[275607]: 2026-01-31 05:03:32.240536138 +0000 UTC m=+0.084470107 container remove 66a1aa03b535a984daa7f7961749e210527541af1b070a1dbf67b9117c427d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 00:03:32 np0005603435 systemd[1]: libpod-conmon-66a1aa03b535a984daa7f7961749e210527541af1b070a1dbf67b9117c427d21.scope: Deactivated successfully.
Jan 31 00:03:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:03:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2334782872' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:03:32 np0005603435 nova_compute[239938]: 2026-01-31 05:03:32.509 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:03:32 np0005603435 podman[275687]: 2026-01-31 05:03:32.708556566 +0000 UTC m=+0.034965460 container create 19fc011dd2fc8c544844b00d49791b42f800ea465c9edde0b0081ed57b35d6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 00:03:32 np0005603435 nova_compute[239938]: 2026-01-31 05:03:32.743 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 00:03:32 np0005603435 nova_compute[239938]: 2026-01-31 05:03:32.745 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4266MB free_disk=59.98775292560458GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 00:03:32 np0005603435 nova_compute[239938]: 2026-01-31 05:03:32.745 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:03:32 np0005603435 nova_compute[239938]: 2026-01-31 05:03:32.746 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:03:32 np0005603435 systemd[1]: Started libpod-conmon-19fc011dd2fc8c544844b00d49791b42f800ea465c9edde0b0081ed57b35d6c4.scope.
Jan 31 00:03:32 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:03:32 np0005603435 podman[275687]: 2026-01-31 05:03:32.784299523 +0000 UTC m=+0.110708497 container init 19fc011dd2fc8c544844b00d49791b42f800ea465c9edde0b0081ed57b35d6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 00:03:32 np0005603435 podman[275687]: 2026-01-31 05:03:32.789087548 +0000 UTC m=+0.115496472 container start 19fc011dd2fc8c544844b00d49791b42f800ea465c9edde0b0081ed57b35d6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 00:03:32 np0005603435 podman[275687]: 2026-01-31 05:03:32.692507561 +0000 UTC m=+0.018916465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:03:32 np0005603435 competent_haibt[275703]: 167 167
Jan 31 00:03:32 np0005603435 systemd[1]: libpod-19fc011dd2fc8c544844b00d49791b42f800ea465c9edde0b0081ed57b35d6c4.scope: Deactivated successfully.
Jan 31 00:03:32 np0005603435 podman[275687]: 2026-01-31 05:03:32.79335276 +0000 UTC m=+0.119761734 container attach 19fc011dd2fc8c544844b00d49791b42f800ea465c9edde0b0081ed57b35d6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 00:03:32 np0005603435 podman[275687]: 2026-01-31 05:03:32.793821191 +0000 UTC m=+0.120230105 container died 19fc011dd2fc8c544844b00d49791b42f800ea465c9edde0b0081ed57b35d6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 00:03:32 np0005603435 nova_compute[239938]: 2026-01-31 05:03:32.817 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 00:03:32 np0005603435 nova_compute[239938]: 2026-01-31 05:03:32.817 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 00:03:32 np0005603435 systemd[1]: var-lib-containers-storage-overlay-428d0d0d432f1d0d1bc71e98a8a0d146c4b946455245ffe66c7c4ceb111dd7b2-merged.mount: Deactivated successfully.
Jan 31 00:03:32 np0005603435 podman[275687]: 2026-01-31 05:03:32.832037438 +0000 UTC m=+0.158446322 container remove 19fc011dd2fc8c544844b00d49791b42f800ea465c9edde0b0081ed57b35d6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:03:32 np0005603435 nova_compute[239938]: 2026-01-31 05:03:32.835 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:03:32 np0005603435 systemd[1]: libpod-conmon-19fc011dd2fc8c544844b00d49791b42f800ea465c9edde0b0081ed57b35d6c4.scope: Deactivated successfully.
Jan 31 00:03:33 np0005603435 podman[275728]: 2026-01-31 05:03:33.010167821 +0000 UTC m=+0.061571759 container create 683447df310b1dad93d11afd1cfbb680e66799c9f85bf599cf03423b279cd15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bohr, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 00:03:33 np0005603435 systemd[1]: Started libpod-conmon-683447df310b1dad93d11afd1cfbb680e66799c9f85bf599cf03423b279cd15d.scope.
Jan 31 00:03:33 np0005603435 podman[275728]: 2026-01-31 05:03:32.985935199 +0000 UTC m=+0.037339197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:03:33 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:03:33 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04f47aba3160c2654cd4e32e6a7d24e489b11f689f4b00f702ea76e902901821/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 00:03:33 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04f47aba3160c2654cd4e32e6a7d24e489b11f689f4b00f702ea76e902901821/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 00:03:33 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04f47aba3160c2654cd4e32e6a7d24e489b11f689f4b00f702ea76e902901821/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 00:03:33 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04f47aba3160c2654cd4e32e6a7d24e489b11f689f4b00f702ea76e902901821/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 00:03:33 np0005603435 podman[275728]: 2026-01-31 05:03:33.115976549 +0000 UTC m=+0.167380557 container init 683447df310b1dad93d11afd1cfbb680e66799c9f85bf599cf03423b279cd15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bohr, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 00:03:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:03:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/594341292' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:03:33 np0005603435 podman[275728]: 2026-01-31 05:03:33.125247541 +0000 UTC m=+0.176651489 container start 683447df310b1dad93d11afd1cfbb680e66799c9f85bf599cf03423b279cd15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bohr, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 00:03:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:03:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/594341292' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:03:33 np0005603435 podman[275728]: 2026-01-31 05:03:33.130018136 +0000 UTC m=+0.181422164 container attach 683447df310b1dad93d11afd1cfbb680e66799c9f85bf599cf03423b279cd15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bohr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 00:03:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:03:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2992907341' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:03:33 np0005603435 nova_compute[239938]: 2026-01-31 05:03:33.383 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:03:33 np0005603435 nova_compute[239938]: 2026-01-31 05:03:33.390 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 00:03:33 np0005603435 nova_compute[239938]: 2026-01-31 05:03:33.410 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 00:03:33 np0005603435 nova_compute[239938]: 2026-01-31 05:03:33.429 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 00:03:33 np0005603435 nova_compute[239938]: 2026-01-31 05:03:33.429 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:03:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 1.5 KiB/s wr, 35 op/s
Jan 31 00:03:33 np0005603435 lvm[275843]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 00:03:33 np0005603435 lvm[275843]: VG ceph_vg0 finished
Jan 31 00:03:33 np0005603435 lvm[275847]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 00:03:33 np0005603435 lvm[275847]: VG ceph_vg1 finished
Jan 31 00:03:33 np0005603435 lvm[275846]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 00:03:33 np0005603435 lvm[275846]: VG ceph_vg2 finished
Jan 31 00:03:33 np0005603435 trusting_bohr[275764]: {}
Jan 31 00:03:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:03:33 np0005603435 systemd[1]: libpod-683447df310b1dad93d11afd1cfbb680e66799c9f85bf599cf03423b279cd15d.scope: Deactivated successfully.
Jan 31 00:03:33 np0005603435 podman[275728]: 2026-01-31 05:03:33.87842409 +0000 UTC m=+0.929828038 container died 683447df310b1dad93d11afd1cfbb680e66799c9f85bf599cf03423b279cd15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bohr, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 00:03:33 np0005603435 systemd[1]: libpod-683447df310b1dad93d11afd1cfbb680e66799c9f85bf599cf03423b279cd15d.scope: Consumed 1.164s CPU time.
Jan 31 00:03:33 np0005603435 systemd[1]: var-lib-containers-storage-overlay-04f47aba3160c2654cd4e32e6a7d24e489b11f689f4b00f702ea76e902901821-merged.mount: Deactivated successfully.
Jan 31 00:03:33 np0005603435 podman[275728]: 2026-01-31 05:03:33.933627504 +0000 UTC m=+0.985031422 container remove 683447df310b1dad93d11afd1cfbb680e66799c9f85bf599cf03423b279cd15d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bohr, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 00:03:33 np0005603435 systemd[1]: libpod-conmon-683447df310b1dad93d11afd1cfbb680e66799c9f85bf599cf03423b279cd15d.scope: Deactivated successfully.
Jan 31 00:03:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 00:03:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:03:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 00:03:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:03:34 np0005603435 nova_compute[239938]: 2026-01-31 05:03:34.430 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:03:34 np0005603435 nova_compute[239938]: 2026-01-31 05:03:34.432 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:03:34 np0005603435 nova_compute[239938]: 2026-01-31 05:03:34.432 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 00:03:34 np0005603435 nova_compute[239938]: 2026-01-31 05:03:34.666 239942 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769835799.6650496, bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 00:03:34 np0005603435 nova_compute[239938]: 2026-01-31 05:03:34.666 239942 INFO nova.compute.manager [-] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] VM Stopped (Lifecycle Event)#033[00m
Jan 31 00:03:34 np0005603435 nova_compute[239938]: 2026-01-31 05:03:34.693 239942 DEBUG nova.compute.manager [None req-8997c619-18f3-4d56-870d-9f03b36230c2 - - - - - -] [instance: bcd60bbb-e5d9-4c8d-a3f9-359d0f5e6a2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 00:03:34 np0005603435 nova_compute[239938]: 2026-01-31 05:03:34.754 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:03:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:03:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 938 B/s wr, 26 op/s
Jan 31 00:03:36 np0005603435 nova_compute[239938]: 2026-01-31 05:03:36.001 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:03:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:03:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:03:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:03:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:03:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:03:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 767 B/s wr, 22 op/s
Jan 31 00:03:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:03:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 85 B/s wr, 6 op/s
Jan 31 00:03:39 np0005603435 nova_compute[239938]: 2026-01-31 05:03:39.809 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:41 np0005603435 nova_compute[239938]: 2026-01-31 05:03:41.038 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 85 B/s wr, 6 op/s
Jan 31 00:03:42 np0005603435 podman[275887]: 2026-01-31 05:03:42.108251441 +0000 UTC m=+0.068438123 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 00:03:42 np0005603435 podman[275888]: 2026-01-31 05:03:42.201323334 +0000 UTC m=+0.159721873 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 00:03:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 31 00:03:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:03:44 np0005603435 nova_compute[239938]: 2026-01-31 05:03:44.834 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:03:46 np0005603435 nova_compute[239938]: 2026-01-31 05:03:46.039 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:03:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:03:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:03:49 np0005603435 nova_compute[239938]: 2026-01-31 05:03:49.838 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:51 np0005603435 nova_compute[239938]: 2026-01-31 05:03:51.079 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:03:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:03:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:03:54 np0005603435 nova_compute[239938]: 2026-01-31 05:03:54.877 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:03:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:55.927 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:03:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:55.927 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:03:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:03:55.928 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:03:56 np0005603435 nova_compute[239938]: 2026-01-31 05:03:56.135 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:03:57 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:03:58 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:03:59 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:03:59 np0005603435 nova_compute[239938]: 2026-01-31 05:03:59.880 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:01 np0005603435 nova_compute[239938]: 2026-01-31 05:04:01.136 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:01 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:03 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:03 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:04:04 np0005603435 nova_compute[239938]: 2026-01-31 05:04:04.908 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:05 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:06 np0005603435 ovn_controller[145670]: 2026-01-31T05:04:06Z|00285|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Jan 31 00:04:06 np0005603435 nova_compute[239938]: 2026-01-31 05:04:06.138 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_05:04:06
Jan 31 00:04:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 00:04:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 31 00:04:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'images', 'volumes', 'backups', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'vms']
Jan 31 00:04:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 00:04:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:04:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:04:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:04:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:04:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:04:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:04:07 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 00:04:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 00:04:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 00:04:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 00:04:07 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 00:04:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 00:04:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 00:04:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 00:04:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 00:04:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 00:04:08 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:04:09 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:09 np0005603435 nova_compute[239938]: 2026-01-31 05:04:09.941 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:11 np0005603435 nova_compute[239938]: 2026-01-31 05:04:11.177 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:11 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:13 np0005603435 podman[275932]: 2026-01-31 05:04:13.104823853 +0000 UTC m=+0.060959964 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 00:04:13 np0005603435 podman[275933]: 2026-01-31 05:04:13.125007517 +0000 UTC m=+0.077464870 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 00:04:13 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:04:14 np0005603435 nova_compute[239938]: 2026-01-31 05:04:14.946 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:15 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:16 np0005603435 nova_compute[239938]: 2026-01-31 05:04:16.219 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.807126732242677e-06 of space, bias 1.0, pg target 0.002642138019672803 quantized to 32 (current 32)
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029091159681203568 of space, bias 1.0, pg target 0.872734790436107 quantized to 32 (current 32)
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.2170727414265735e-06 of space, bias 1.0, pg target 0.00036512182242797207 quantized to 32 (current 32)
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669402130260096 of space, bias 1.0, pg target 0.20008206390780287 quantized to 32 (current 32)
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.278901575288092e-07 of space, bias 4.0, pg target 0.000993468189034571 quantized to 16 (current 16)
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 00:04:17 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:04:19 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:19 np0005603435 nova_compute[239938]: 2026-01-31 05:04:19.949 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:21 np0005603435 nova_compute[239938]: 2026-01-31 05:04:21.222 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:21 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:23 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:23 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:04:24 np0005603435 nova_compute[239938]: 2026-01-31 05:04:24.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:04:24 np0005603435 nova_compute[239938]: 2026-01-31 05:04:24.953 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:25 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:25 np0005603435 nova_compute[239938]: 2026-01-31 05:04:25.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:04:26 np0005603435 nova_compute[239938]: 2026-01-31 05:04:26.225 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:27 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:28 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:04:29 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:29 np0005603435 nova_compute[239938]: 2026-01-31 05:04:29.882 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:04:29 np0005603435 nova_compute[239938]: 2026-01-31 05:04:29.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:04:29 np0005603435 nova_compute[239938]: 2026-01-31 05:04:29.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 00:04:29 np0005603435 nova_compute[239938]: 2026-01-31 05:04:29.887 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 00:04:29 np0005603435 nova_compute[239938]: 2026-01-31 05:04:29.957 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:29 np0005603435 nova_compute[239938]: 2026-01-31 05:04:29.977 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 00:04:29 np0005603435 nova_compute[239938]: 2026-01-31 05:04:29.977 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:04:30 np0005603435 systemd-logind[816]: New session 52 of user zuul.
Jan 31 00:04:30 np0005603435 systemd[1]: Started Session 52 of User zuul.
Jan 31 00:04:30 np0005603435 nova_compute[239938]: 2026-01-31 05:04:30.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:04:31 np0005603435 nova_compute[239938]: 2026-01-31 05:04:31.225 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:31 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:31 np0005603435 nova_compute[239938]: 2026-01-31 05:04:31.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:04:31 np0005603435 nova_compute[239938]: 2026-01-31 05:04:31.927 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:04:31 np0005603435 nova_compute[239938]: 2026-01-31 05:04:31.928 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:04:31 np0005603435 nova_compute[239938]: 2026-01-31 05:04:31.928 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:04:31 np0005603435 nova_compute[239938]: 2026-01-31 05:04:31.928 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 00:04:31 np0005603435 nova_compute[239938]: 2026-01-31 05:04:31.929 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:04:32 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:04:32 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1261018061' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:04:32 np0005603435 nova_compute[239938]: 2026-01-31 05:04:32.481 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:04:32 np0005603435 nova_compute[239938]: 2026-01-31 05:04:32.666 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 00:04:32 np0005603435 nova_compute[239938]: 2026-01-31 05:04:32.668 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4322MB free_disk=59.98775292560458GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 00:04:32 np0005603435 nova_compute[239938]: 2026-01-31 05:04:32.668 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:04:32 np0005603435 nova_compute[239938]: 2026-01-31 05:04:32.669 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:04:32 np0005603435 nova_compute[239938]: 2026-01-31 05:04:32.769 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 00:04:32 np0005603435 nova_compute[239938]: 2026-01-31 05:04:32.770 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 00:04:32 np0005603435 nova_compute[239938]: 2026-01-31 05:04:32.797 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:04:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:04:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/136878395' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:04:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:04:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/136878395' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:04:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:04:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/657439274' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:04:33 np0005603435 nova_compute[239938]: 2026-01-31 05:04:33.362 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:04:33 np0005603435 nova_compute[239938]: 2026-01-31 05:04:33.368 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 00:04:33 np0005603435 nova_compute[239938]: 2026-01-31 05:04:33.386 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 00:04:33 np0005603435 nova_compute[239938]: 2026-01-31 05:04:33.388 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 00:04:33 np0005603435 nova_compute[239938]: 2026-01-31 05:04:33.388 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:04:33 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:04:34 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19104 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:04:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 00:04:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 00:04:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 00:04:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 00:04:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 00:04:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:04:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 00:04:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 00:04:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 00:04:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 00:04:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 00:04:34 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 00:04:34 np0005603435 nova_compute[239938]: 2026-01-31 05:04:34.986 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:35 np0005603435 podman[276372]: 2026-01-31 05:04:35.148947482 +0000 UTC m=+0.020965564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:04:35 np0005603435 podman[276372]: 2026-01-31 05:04:35.281885751 +0000 UTC m=+0.153903763 container create 385750a0690afb00e929fad99104f5b6701c90f4658eedeebdcee3a864e12a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 00:04:35 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19106 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:04:35 np0005603435 nova_compute[239938]: 2026-01-31 05:04:35.388 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:04:35 np0005603435 nova_compute[239938]: 2026-01-31 05:04:35.388 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:04:35 np0005603435 nova_compute[239938]: 2026-01-31 05:04:35.389 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 00:04:35 np0005603435 systemd[1]: Started libpod-conmon-385750a0690afb00e929fad99104f5b6701c90f4658eedeebdcee3a864e12a52.scope.
Jan 31 00:04:35 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:04:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 00:04:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:04:35 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 00:04:35 np0005603435 podman[276372]: 2026-01-31 05:04:35.556465638 +0000 UTC m=+0.428483700 container init 385750a0690afb00e929fad99104f5b6701c90f4658eedeebdcee3a864e12a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_engelbart, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 00:04:35 np0005603435 podman[276372]: 2026-01-31 05:04:35.566996461 +0000 UTC m=+0.439014473 container start 385750a0690afb00e929fad99104f5b6701c90f4658eedeebdcee3a864e12a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_engelbart, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 00:04:35 np0005603435 silly_engelbart[276393]: 167 167
Jan 31 00:04:35 np0005603435 systemd[1]: libpod-385750a0690afb00e929fad99104f5b6701c90f4658eedeebdcee3a864e12a52.scope: Deactivated successfully.
Jan 31 00:04:35 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:35 np0005603435 podman[276372]: 2026-01-31 05:04:35.682748798 +0000 UTC m=+0.554766880 container attach 385750a0690afb00e929fad99104f5b6701c90f4658eedeebdcee3a864e12a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_engelbart, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 00:04:35 np0005603435 podman[276372]: 2026-01-31 05:04:35.684150871 +0000 UTC m=+0.556168893 container died 385750a0690afb00e929fad99104f5b6701c90f4658eedeebdcee3a864e12a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_engelbart, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 00:04:35 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 31 00:04:35 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/385604021' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 00:04:36 np0005603435 systemd[1]: var-lib-containers-storage-overlay-179ab16a37cd700b4d24ee1baac40e6b9cada6370555e87576e13e0facccb523-merged.mount: Deactivated successfully.
Jan 31 00:04:36 np0005603435 nova_compute[239938]: 2026-01-31 05:04:36.247 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:36 np0005603435 podman[276372]: 2026-01-31 05:04:36.464671436 +0000 UTC m=+1.336689428 container remove 385750a0690afb00e929fad99104f5b6701c90f4658eedeebdcee3a864e12a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_engelbart, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:04:36 np0005603435 systemd[1]: libpod-conmon-385750a0690afb00e929fad99104f5b6701c90f4658eedeebdcee3a864e12a52.scope: Deactivated successfully.
Jan 31 00:04:36 np0005603435 podman[276461]: 2026-01-31 05:04:36.611406116 +0000 UTC m=+0.031627680 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:04:36 np0005603435 podman[276461]: 2026-01-31 05:04:36.715042072 +0000 UTC m=+0.135263586 container create e9e8027c3834d75e333ccabb8d3044588e2532b8be66b0d5046880146c994dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_gould, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 00:04:36 np0005603435 systemd[1]: Started libpod-conmon-e9e8027c3834d75e333ccabb8d3044588e2532b8be66b0d5046880146c994dd5.scope.
Jan 31 00:04:36 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:04:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f11c5a57285e0a5c850153c612061c1947e10f309ead1b757929455864cd16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 00:04:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f11c5a57285e0a5c850153c612061c1947e10f309ead1b757929455864cd16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 00:04:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f11c5a57285e0a5c850153c612061c1947e10f309ead1b757929455864cd16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 00:04:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f11c5a57285e0a5c850153c612061c1947e10f309ead1b757929455864cd16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 00:04:36 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f11c5a57285e0a5c850153c612061c1947e10f309ead1b757929455864cd16/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 00:04:36 np0005603435 nova_compute[239938]: 2026-01-31 05:04:36.883 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:04:36 np0005603435 podman[276461]: 2026-01-31 05:04:36.917760946 +0000 UTC m=+0.337982470 container init e9e8027c3834d75e333ccabb8d3044588e2532b8be66b0d5046880146c994dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_gould, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 00:04:36 np0005603435 podman[276461]: 2026-01-31 05:04:36.928329559 +0000 UTC m=+0.348551103 container start e9e8027c3834d75e333ccabb8d3044588e2532b8be66b0d5046880146c994dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_gould, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:04:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:04:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:04:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:04:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:04:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:04:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:04:36 np0005603435 podman[276461]: 2026-01-31 05:04:36.999292322 +0000 UTC m=+0.419513826 container attach e9e8027c3834d75e333ccabb8d3044588e2532b8be66b0d5046880146c994dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 00:04:37 np0005603435 sweet_gould[276486]: --> passed data devices: 0 physical, 3 LVM
Jan 31 00:04:37 np0005603435 sweet_gould[276486]: --> All data devices are unavailable
Jan 31 00:04:37 np0005603435 systemd[1]: libpod-e9e8027c3834d75e333ccabb8d3044588e2532b8be66b0d5046880146c994dd5.scope: Deactivated successfully.
Jan 31 00:04:37 np0005603435 podman[276461]: 2026-01-31 05:04:37.43529374 +0000 UTC m=+0.855515274 container died e9e8027c3834d75e333ccabb8d3044588e2532b8be66b0d5046880146c994dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_gould, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 00:04:37 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:37 np0005603435 systemd[1]: var-lib-containers-storage-overlay-56f11c5a57285e0a5c850153c612061c1947e10f309ead1b757929455864cd16-merged.mount: Deactivated successfully.
Jan 31 00:04:37 np0005603435 podman[276461]: 2026-01-31 05:04:37.898898592 +0000 UTC m=+1.319120096 container remove e9e8027c3834d75e333ccabb8d3044588e2532b8be66b0d5046880146c994dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_gould, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 00:04:38 np0005603435 systemd[1]: libpod-conmon-e9e8027c3834d75e333ccabb8d3044588e2532b8be66b0d5046880146c994dd5.scope: Deactivated successfully.
Jan 31 00:04:38 np0005603435 podman[276582]: 2026-01-31 05:04:38.363849486 +0000 UTC m=+0.078129505 container create 2e3d0f8d2c13faa098b628cd05097d5ac6563dd4c69d67ebea50b55ce47604b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_chebyshev, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 00:04:38 np0005603435 podman[276582]: 2026-01-31 05:04:38.320073156 +0000 UTC m=+0.034353225 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:04:38 np0005603435 systemd[1]: Started libpod-conmon-2e3d0f8d2c13faa098b628cd05097d5ac6563dd4c69d67ebea50b55ce47604b9.scope.
Jan 31 00:04:38 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:04:38 np0005603435 podman[276582]: 2026-01-31 05:04:38.589638073 +0000 UTC m=+0.303918162 container init 2e3d0f8d2c13faa098b628cd05097d5ac6563dd4c69d67ebea50b55ce47604b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:04:38 np0005603435 podman[276582]: 2026-01-31 05:04:38.598632058 +0000 UTC m=+0.312912077 container start 2e3d0f8d2c13faa098b628cd05097d5ac6563dd4c69d67ebea50b55ce47604b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_chebyshev, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 00:04:38 np0005603435 quizzical_chebyshev[276601]: 167 167
Jan 31 00:04:38 np0005603435 systemd[1]: libpod-2e3d0f8d2c13faa098b628cd05097d5ac6563dd4c69d67ebea50b55ce47604b9.scope: Deactivated successfully.
Jan 31 00:04:38 np0005603435 conmon[276601]: conmon 2e3d0f8d2c13faa098b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e3d0f8d2c13faa098b628cd05097d5ac6563dd4c69d67ebea50b55ce47604b9.scope/container/memory.events
Jan 31 00:04:38 np0005603435 podman[276582]: 2026-01-31 05:04:38.610685138 +0000 UTC m=+0.324965247 container attach 2e3d0f8d2c13faa098b628cd05097d5ac6563dd4c69d67ebea50b55ce47604b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 00:04:38 np0005603435 podman[276582]: 2026-01-31 05:04:38.611295832 +0000 UTC m=+0.325575901 container died 2e3d0f8d2c13faa098b628cd05097d5ac6563dd4c69d67ebea50b55ce47604b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_chebyshev, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 00:04:38 np0005603435 systemd[1]: var-lib-containers-storage-overlay-73e44ed3e07635b05cf69a0b91e25462c8c0f36a97325e28dc5a724bc60989b7-merged.mount: Deactivated successfully.
Jan 31 00:04:38 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:04:39 np0005603435 podman[276582]: 2026-01-31 05:04:39.065393476 +0000 UTC m=+0.779673505 container remove 2e3d0f8d2c13faa098b628cd05097d5ac6563dd4c69d67ebea50b55ce47604b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_chebyshev, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Jan 31 00:04:39 np0005603435 systemd[1]: libpod-conmon-2e3d0f8d2c13faa098b628cd05097d5ac6563dd4c69d67ebea50b55ce47604b9.scope: Deactivated successfully.
Jan 31 00:04:39 np0005603435 podman[276639]: 2026-01-31 05:04:39.279997825 +0000 UTC m=+0.073366262 container create 9e15decc828eb9718833e190a752e1262500fd3571e7eccb06662013062217dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hermann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 00:04:39 np0005603435 podman[276639]: 2026-01-31 05:04:39.240682981 +0000 UTC m=+0.034051468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:04:39 np0005603435 systemd[1]: Started libpod-conmon-9e15decc828eb9718833e190a752e1262500fd3571e7eccb06662013062217dd.scope.
Jan 31 00:04:39 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:04:39 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94faaf7e224677053c271f3a47aaf2f92ab8daf038d3b6bb4f0faef00c402a3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 00:04:39 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94faaf7e224677053c271f3a47aaf2f92ab8daf038d3b6bb4f0faef00c402a3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 00:04:39 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94faaf7e224677053c271f3a47aaf2f92ab8daf038d3b6bb4f0faef00c402a3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 00:04:39 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94faaf7e224677053c271f3a47aaf2f92ab8daf038d3b6bb4f0faef00c402a3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 00:04:39 np0005603435 podman[276639]: 2026-01-31 05:04:39.613919815 +0000 UTC m=+0.407288272 container init 9e15decc828eb9718833e190a752e1262500fd3571e7eccb06662013062217dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:04:39 np0005603435 podman[276639]: 2026-01-31 05:04:39.623856214 +0000 UTC m=+0.417224661 container start 9e15decc828eb9718833e190a752e1262500fd3571e7eccb06662013062217dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hermann, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 00:04:39 np0005603435 podman[276639]: 2026-01-31 05:04:39.660110574 +0000 UTC m=+0.453479081 container attach 9e15decc828eb9718833e190a752e1262500fd3571e7eccb06662013062217dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hermann, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 31 00:04:39 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]: {
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:    "0": [
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:        {
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "devices": [
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "/dev/loop3"
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            ],
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "lv_name": "ceph_lv0",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "lv_size": "21470642176",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=67a07621-a454-4b93-966d-529cdb301722,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "lv_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "name": "ceph_lv0",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "tags": {
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.block_uuid": "m4toSb-2h3u-4BdJ-BFa9-NXvl-JL2S-EM9hRo",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.cephx_lockbox_secret": "",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.cluster_name": "ceph",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.crush_device_class": "",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.encrypted": "0",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.objectstore": "bluestore",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.osd_fsid": "67a07621-a454-4b93-966d-529cdb301722",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.osd_id": "0",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.type": "block",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.vdo": "0",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.with_tpm": "0"
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            },
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "type": "block",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "vg_name": "ceph_vg0"
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:        }
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:    ],
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:    "1": [
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:        {
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "devices": [
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "/dev/loop4"
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            ],
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "lv_name": "ceph_lv1",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "lv_size": "21470642176",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "lv_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "name": "ceph_lv1",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "tags": {
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.block_uuid": "lUmoqY-ZUcl-Bgfb-3v3L-oZ5s-J74O-QAReZb",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.cephx_lockbox_secret": "",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.cluster_name": "ceph",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.crush_device_class": "",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.encrypted": "0",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.objectstore": "bluestore",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.osd_fsid": "0e0cfa04-8eda-4248-a4b9-11ba0c14a9b2",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.osd_id": "1",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.type": "block",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.vdo": "0",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.with_tpm": "0"
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            },
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "type": "block",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "vg_name": "ceph_vg1"
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:        }
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:    ],
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:    "2": [
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:        {
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "devices": [
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "/dev/loop5"
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            ],
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "lv_name": "ceph_lv2",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "lv_size": "21470642176",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=95d2f419-0dd0-56f2-a094-353f8c7597ed,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4ecd8bd6-f445-4b7a-858d-58ed6f88b29e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "lv_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "name": "ceph_lv2",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "tags": {
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.block_uuid": "HwkXHK-yLyA-B0Go-A4aD-SPdG-Qpb4-0Smv6L",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.cephx_lockbox_secret": "",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.cluster_fsid": "95d2f419-0dd0-56f2-a094-353f8c7597ed",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.cluster_name": "ceph",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.crush_device_class": "",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.encrypted": "0",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.objectstore": "bluestore",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.osd_fsid": "4ecd8bd6-f445-4b7a-858d-58ed6f88b29e",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.osd_id": "2",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.type": "block",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.vdo": "0",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:                "ceph.with_tpm": "0"
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            },
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "type": "block",
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:            "vg_name": "ceph_vg2"
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:        }
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]:    ]
Jan 31 00:04:39 np0005603435 affectionate_hermann[276658]: }
Jan 31 00:04:39 np0005603435 systemd[1]: libpod-9e15decc828eb9718833e190a752e1262500fd3571e7eccb06662013062217dd.scope: Deactivated successfully.
Jan 31 00:04:39 np0005603435 podman[276639]: 2026-01-31 05:04:39.925874289 +0000 UTC m=+0.719242736 container died 9e15decc828eb9718833e190a752e1262500fd3571e7eccb06662013062217dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 00:04:40 np0005603435 nova_compute[239938]: 2026-01-31 05:04:40.018 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:40 np0005603435 systemd[1]: var-lib-containers-storage-overlay-94faaf7e224677053c271f3a47aaf2f92ab8daf038d3b6bb4f0faef00c402a3a-merged.mount: Deactivated successfully.
Jan 31 00:04:40 np0005603435 podman[276639]: 2026-01-31 05:04:40.185708933 +0000 UTC m=+0.979077380 container remove 9e15decc828eb9718833e190a752e1262500fd3571e7eccb06662013062217dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 00:04:40 np0005603435 systemd[1]: libpod-conmon-9e15decc828eb9718833e190a752e1262500fd3571e7eccb06662013062217dd.scope: Deactivated successfully.
Jan 31 00:04:40 np0005603435 podman[276750]: 2026-01-31 05:04:40.685589415 +0000 UTC m=+0.051242400 container create 6b9a1970ce89c647744c435b1a77a85ce97a479c599cbf8d4748a99017c91b53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_johnson, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 00:04:40 np0005603435 systemd[1]: Started libpod-conmon-6b9a1970ce89c647744c435b1a77a85ce97a479c599cbf8d4748a99017c91b53.scope.
Jan 31 00:04:40 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:04:40 np0005603435 podman[276750]: 2026-01-31 05:04:40.662128462 +0000 UTC m=+0.027781527 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:04:40 np0005603435 podman[276750]: 2026-01-31 05:04:40.818209256 +0000 UTC m=+0.183862321 container init 6b9a1970ce89c647744c435b1a77a85ce97a479c599cbf8d4748a99017c91b53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_johnson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 00:04:40 np0005603435 podman[276750]: 2026-01-31 05:04:40.826060075 +0000 UTC m=+0.191713090 container start 6b9a1970ce89c647744c435b1a77a85ce97a479c599cbf8d4748a99017c91b53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_johnson, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 00:04:40 np0005603435 naughty_johnson[276767]: 167 167
Jan 31 00:04:40 np0005603435 systemd[1]: libpod-6b9a1970ce89c647744c435b1a77a85ce97a479c599cbf8d4748a99017c91b53.scope: Deactivated successfully.
Jan 31 00:04:41 np0005603435 podman[276750]: 2026-01-31 05:04:41.064068114 +0000 UTC m=+0.429721129 container attach 6b9a1970ce89c647744c435b1a77a85ce97a479c599cbf8d4748a99017c91b53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_johnson, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 00:04:41 np0005603435 podman[276750]: 2026-01-31 05:04:41.065461567 +0000 UTC m=+0.431114642 container died 6b9a1970ce89c647744c435b1a77a85ce97a479c599cbf8d4748a99017c91b53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_johnson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 00:04:41 np0005603435 nova_compute[239938]: 2026-01-31 05:04:41.256 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:41 np0005603435 systemd[1]: var-lib-containers-storage-overlay-54edeef27dbf63f6d189c087a30fded598ef1acb0b2b81e81c0b798d882395f9-merged.mount: Deactivated successfully.
Jan 31 00:04:41 np0005603435 podman[276750]: 2026-01-31 05:04:41.620890642 +0000 UTC m=+0.986543617 container remove 6b9a1970ce89c647744c435b1a77a85ce97a479c599cbf8d4748a99017c91b53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 00:04:41 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:41 np0005603435 systemd[1]: libpod-conmon-6b9a1970ce89c647744c435b1a77a85ce97a479c599cbf8d4748a99017c91b53.scope: Deactivated successfully.
Jan 31 00:04:41 np0005603435 podman[276806]: 2026-01-31 05:04:41.855290995 +0000 UTC m=+0.110659856 container create be58eae4ad2082e7f9651c594af37ce053d6c380f2d1257020d282a2b33f198d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 00:04:41 np0005603435 podman[276806]: 2026-01-31 05:04:41.770928911 +0000 UTC m=+0.026297792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 00:04:42 np0005603435 systemd[1]: Started libpod-conmon-be58eae4ad2082e7f9651c594af37ce053d6c380f2d1257020d282a2b33f198d.scope.
Jan 31 00:04:42 np0005603435 systemd[1]: Started libcrun container.
Jan 31 00:04:42 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c075f6da00afeded1e7b9a3eb06e2a685ca4c899dddc4822a48c6e64a882582/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 00:04:42 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c075f6da00afeded1e7b9a3eb06e2a685ca4c899dddc4822a48c6e64a882582/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 00:04:42 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c075f6da00afeded1e7b9a3eb06e2a685ca4c899dddc4822a48c6e64a882582/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 00:04:42 np0005603435 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c075f6da00afeded1e7b9a3eb06e2a685ca4c899dddc4822a48c6e64a882582/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 00:04:42 np0005603435 podman[276806]: 2026-01-31 05:04:42.540488093 +0000 UTC m=+0.795857044 container init be58eae4ad2082e7f9651c594af37ce053d6c380f2d1257020d282a2b33f198d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 00:04:42 np0005603435 podman[276806]: 2026-01-31 05:04:42.559422967 +0000 UTC m=+0.814791848 container start be58eae4ad2082e7f9651c594af37ce053d6c380f2d1257020d282a2b33f198d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bohr, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 00:04:42 np0005603435 podman[276806]: 2026-01-31 05:04:42.621925046 +0000 UTC m=+0.877293927 container attach be58eae4ad2082e7f9651c594af37ce053d6c380f2d1257020d282a2b33f198d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bohr, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 00:04:43 np0005603435 lvm[276928]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 00:04:43 np0005603435 lvm[276927]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 00:04:43 np0005603435 lvm[276928]: VG ceph_vg1 finished
Jan 31 00:04:43 np0005603435 lvm[276927]: VG ceph_vg0 finished
Jan 31 00:04:43 np0005603435 lvm[276929]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 00:04:43 np0005603435 lvm[276929]: VG ceph_vg2 finished
Jan 31 00:04:43 np0005603435 podman[276901]: 2026-01-31 05:04:43.209140824 +0000 UTC m=+0.047046850 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 00:04:43 np0005603435 lvm[276947]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 00:04:43 np0005603435 lvm[276947]: VG ceph_vg0 finished
Jan 31 00:04:43 np0005603435 upbeat_bohr[276824]: {}
Jan 31 00:04:43 np0005603435 podman[276903]: 2026-01-31 05:04:43.2919526 +0000 UTC m=+0.126176808 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 00:04:43 np0005603435 systemd[1]: libpod-be58eae4ad2082e7f9651c594af37ce053d6c380f2d1257020d282a2b33f198d.scope: Deactivated successfully.
Jan 31 00:04:43 np0005603435 systemd[1]: libpod-be58eae4ad2082e7f9651c594af37ce053d6c380f2d1257020d282a2b33f198d.scope: Consumed 1.077s CPU time.
Jan 31 00:04:43 np0005603435 conmon[276824]: conmon be58eae4ad2082e7f965 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be58eae4ad2082e7f9651c594af37ce053d6c380f2d1257020d282a2b33f198d.scope/container/memory.events
Jan 31 00:04:43 np0005603435 podman[276956]: 2026-01-31 05:04:43.383492906 +0000 UTC m=+0.055645426 container died be58eae4ad2082e7f9651c594af37ce053d6c380f2d1257020d282a2b33f198d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bohr, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 00:04:43 np0005603435 systemd[1]: var-lib-containers-storage-overlay-0c075f6da00afeded1e7b9a3eb06e2a685ca4c899dddc4822a48c6e64a882582-merged.mount: Deactivated successfully.
Jan 31 00:04:43 np0005603435 podman[276956]: 2026-01-31 05:04:43.548334781 +0000 UTC m=+0.220487261 container remove be58eae4ad2082e7f9651c594af37ce053d6c380f2d1257020d282a2b33f198d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 00:04:43 np0005603435 systemd[1]: libpod-conmon-be58eae4ad2082e7f9651c594af37ce053d6c380f2d1257020d282a2b33f198d.scope: Deactivated successfully.
Jan 31 00:04:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 00:04:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:04:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 00:04:43 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:43 np0005603435 ceph-mon[75307]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:04:43 np0005603435 ovs-vsctl[276997]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 00:04:43 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:04:44 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:04:44 np0005603435 ceph-mon[75307]: from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' 
Jan 31 00:04:44 np0005603435 virtqemud[240256]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 00:04:44 np0005603435 virtqemud[240256]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 00:04:44 np0005603435 virtqemud[240256]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 00:04:45 np0005603435 nova_compute[239938]: 2026-01-31 05:04:45.020 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:45 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: cache status {prefix=cache status} (starting...)
Jan 31 00:04:45 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: client ls {prefix=client ls} (starting...)
Jan 31 00:04:45 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:45 np0005603435 lvm[277379]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 00:04:45 np0005603435 lvm[277379]: VG ceph_vg2 finished
Jan 31 00:04:45 np0005603435 lvm[277382]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 00:04:45 np0005603435 lvm[277382]: VG ceph_vg0 finished
Jan 31 00:04:45 np0005603435 lvm[277385]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 00:04:45 np0005603435 lvm[277385]: VG ceph_vg1 finished
Jan 31 00:04:45 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19110 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:04:46 np0005603435 nova_compute[239938]: 2026-01-31 05:04:46.258 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:46 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 00:04:46 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19112 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:04:46 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 00:04:46 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 00:04:46 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 00:04:46 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 00:04:46 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19114 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:04:46 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 00:04:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 31 00:04:47 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1302543742' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 31 00:04:47 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 00:04:47 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 00:04:47 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19118 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:04:47 np0005603435 ceph-mgr[75599]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 00:04:47 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr[75595]: 2026-01-31T05:04:47.469+0000 7f77961f6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 00:04:47 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: ops {prefix=ops} (starting...)
Jan 31 00:04:47 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 00:04:47 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1363548800' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 00:04:47 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:48 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: session ls {prefix=session ls} (starting...)
Jan 31 00:04:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 31 00:04:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2051826029' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 31 00:04:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 31 00:04:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/398431761' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 31 00:04:48 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: status {prefix=status} (starting...)
Jan 31 00:04:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 31 00:04:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2932802171' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 00:04:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 31 00:04:48 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4222278350' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 31 00:04:48 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:04:49 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19132 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:04:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 31 00:04:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2889392208' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 00:04:49 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19134 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:04:49 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 31 00:04:49 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1492558125' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 00:04:50 np0005603435 nova_compute[239938]: 2026-01-31 05:04:50.023 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 31 00:04:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3717227366' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 31 00:04:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 00:04:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2499252677' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 00:04:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 31 00:04:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4231933986' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 31 00:04:50 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 31 00:04:50 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/471783560' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 00:04:51 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19146 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:04:51 np0005603435 ceph-mgr[75599]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 00:04:51 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr[75595]: 2026-01-31T05:04:51.067+0000 7f77961f6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 00:04:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 31 00:04:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2141064639' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 00:04:51 np0005603435 nova_compute[239938]: 2026-01-31 05:04:51.259 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:04:51 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 31 00:04:51 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/150053537' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 31 00:04:51 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19152 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 8978432 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 8978432 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x5447d9/0x61c000, compress 0x0/0x0/0x0, omap 0x137c3, meta 0x2bbc83d), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 8978432 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045957 data_alloc: 218103808 data_used: 16637
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x5447d9/0x61c000, compress 0x0/0x0/0x0, omap 0x137c3, meta 0x2bbc83d), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 140 ms_handle_reset con 0x561118cfd400 session 0x56111ae58700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 8978432 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 141 ms_handle_reset con 0x56111a593c00 session 0x561118d881c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 8962048 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 141 ms_handle_reset con 0x56111ad4d000 session 0x561118b87a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fca07000/0x0/0x4ffc00000, data 0x547f59/0x623000, compress 0x0/0x0/0x0, omap 0x13dc5, meta 0x2bbc23b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 142 ms_handle_reset con 0x56111a592800 session 0x56111990efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 142 ms_handle_reset con 0x56111b870c00 session 0x56111b55e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 142 ms_handle_reset con 0x561118cfd400 session 0x56111b594a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 7905280 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 7905280 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 142 handle_osd_map epochs [144,144], i have 142, src has [1,144]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 142 handle_osd_map epochs [143,144], i have 142, src has [1,144]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.016383171s of 11.337433815s, submitted: 44
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 144 ms_handle_reset con 0x56111b871000 session 0x561118ca68c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 144 ms_handle_reset con 0x56111b871800 session 0x56111b51d500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 144 ms_handle_reset con 0x56111a592800 session 0x561118ca6fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 8003584 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 145 ms_handle_reset con 0x56111a593c00 session 0x5611187a6000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062112 data_alloc: 218103808 data_used: 17250
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 145 ms_handle_reset con 0x56111ad4d800 session 0x561118b87dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 145 ms_handle_reset con 0x561118cfd400 session 0x56111b39dc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 7872512 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fc9f9000/0x0/0x4ffc00000, data 0x54ee3d/0x62e000, compress 0x0/0x0/0x0, omap 0x14635, meta 0x2bbb9cb), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 7872512 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 7872512 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 7872512 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 147 ms_handle_reset con 0x56111a592800 session 0x56111b51dc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 147 ms_handle_reset con 0x56111b871000 session 0x56111b39ce00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 147 ms_handle_reset con 0x56111b871800 session 0x56111990f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fc9ef000/0x0/0x4ffc00000, data 0x5525e5/0x634000, compress 0x0/0x0/0x0, omap 0x14b3d, meta 0x2bbb4c3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 7979008 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068408 data_alloc: 218103808 data_used: 17250
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 7979008 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 148 ms_handle_reset con 0x561118cfd400 session 0x561118b86540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 149 ms_handle_reset con 0x56111a592800 session 0x56111981a380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 6922240 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 149 ms_handle_reset con 0x56111ad4d800 session 0x56111b51d880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 150 ms_handle_reset con 0x56111b871000 session 0x561119510700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 6758400 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 150 heartbeat osd_stat(store_statfs(0x4fc9ef000/0x0/0x4ffc00000, data 0x555d8d/0x63a000, compress 0x0/0x0/0x0, omap 0x15173, meta 0x2bbae8d), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 151 ms_handle_reset con 0x56111ad4d000 session 0x5611187a7180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 79020032 unmapped: 6750208 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.984777451s of 10.458037376s, submitted: 94
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 151 ms_handle_reset con 0x561118cfd400 session 0x56111b594540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x559535/0x640000, compress 0x0/0x0/0x0, omap 0x157a4, meta 0x2bba85c), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 5685248 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 153 ms_handle_reset con 0x56111a592800 session 0x56111b51a000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 153 ms_handle_reset con 0x56111ad4d800 session 0x56111b39d6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086423 data_alloc: 218103808 data_used: 19061
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 153 ms_handle_reset con 0x56111b871000 session 0x56111a52b880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 5545984 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 153 ms_handle_reset con 0x56111a567000 session 0x561118b86380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 154 ms_handle_reset con 0x561118cfd400 session 0x56111990e8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 5496832 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 154 ms_handle_reset con 0x56111a592800 session 0x56111981b500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 5455872 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 154 ms_handle_reset con 0x56111ad4d800 session 0x56111b55ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fc9e1000/0x0/0x4ffc00000, data 0x55e7b0/0x649000, compress 0x0/0x0/0x0, omap 0x16929, meta 0x2bb96d7), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 154 ms_handle_reset con 0x56111b871000 session 0x5611198328c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 5578752 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 154 handle_osd_map epochs [154,155], i have 155, src has [1,155]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 5578752 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090525 data_alloc: 218103808 data_used: 20287
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 156 ms_handle_reset con 0x56111b7bc000 session 0x56111b51c540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 5545984 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fc9d8000/0x0/0x4ffc00000, data 0x561ebd/0x650000, compress 0x0/0x0/0x0, omap 0x16e44, meta 0x2bb91bc), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 5513216 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 156 ms_handle_reset con 0x561118cfd400 session 0x5611187a7500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 5513216 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 5513216 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 5513216 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091982 data_alloc: 218103808 data_used: 20287
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 5513216 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fc9dd000/0x0/0x4ffc00000, data 0x561e5b/0x64f000, compress 0x0/0x0/0x0, omap 0x16e44, meta 0x2bb91bc), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 156 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.650153160s of 11.992671013s, submitted: 88
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 5513216 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fc9d8000/0x0/0x4ffc00000, data 0x5638fa/0x652000, compress 0x0/0x0/0x0, omap 0x17087, meta 0x2bb8f79), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 5513216 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 5513216 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 157 ms_handle_reset con 0x56111a592800 session 0x56111b595500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097228 data_alloc: 218103808 data_used: 20287
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fc9d7000/0x0/0x4ffc00000, data 0x563909/0x653000, compress 0x0/0x0/0x0, omap 0x17087, meta 0x2bb8f79), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 157 ms_handle_reset con 0x56111ad4d800 session 0x56111b55f180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 157 ms_handle_reset con 0x56111b871000 session 0x56111b55f340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fc9d9000/0x0/0x4ffc00000, data 0x5638fa/0x652000, compress 0x0/0x0/0x0, omap 0x17087, meta 0x2bb8f79), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095760 data_alloc: 218103808 data_used: 20287
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fc9d9000/0x0/0x4ffc00000, data 0x5638fa/0x652000, compress 0x0/0x0/0x0, omap 0x17087, meta 0x2bb8f79), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fc9d9000/0x0/0x4ffc00000, data 0x5638fa/0x652000, compress 0x0/0x0/0x0, omap 0x17087, meta 0x2bb8f79), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095760 data_alloc: 218103808 data_used: 20287
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fc9d9000/0x0/0x4ffc00000, data 0x5638fa/0x652000, compress 0x0/0x0/0x0, omap 0x17087, meta 0x2bb8f79), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095760 data_alloc: 218103808 data_used: 20287
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fc9d9000/0x0/0x4ffc00000, data 0x5638fa/0x652000, compress 0x0/0x0/0x0, omap 0x17087, meta 0x2bb8f79), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fc9d9000/0x0/0x4ffc00000, data 0x5638fa/0x652000, compress 0x0/0x0/0x0, omap 0x17087, meta 0x2bb8f79), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095760 data_alloc: 218103808 data_used: 20287
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.897403717s of 23.968004227s, submitted: 12
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 157 ms_handle_reset con 0x56111b7bc400 session 0x56111b51d340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fc9d9000/0x0/0x4ffc00000, data 0x5638fa/0x652000, compress 0x0/0x0/0x0, omap 0x17087, meta 0x2bb8f79), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 5505024 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 158 ms_handle_reset con 0x561118cfd400 session 0x56111990ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 5488640 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fc9d4000/0x0/0x4ffc00000, data 0x5654a6/0x656000, compress 0x0/0x0/0x0, omap 0x17783, meta 0x2bb887d), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 5488640 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 5488640 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101675 data_alloc: 218103808 data_used: 20303
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 158 ms_handle_reset con 0x56111a592800 session 0x56111b5956c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 4431872 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 158 handle_osd_map epochs [159,159], i have 159, src has [1,159]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 159 ms_handle_reset con 0x56111ad4d800 session 0x56111b195dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fc9d1000/0x0/0x4ffc00000, data 0x567096/0x659000, compress 0x0/0x0/0x0, omap 0x17dc4, meta 0x2bb823c), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 4431872 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 159 ms_handle_reset con 0x56111b871000 session 0x56111b51a8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4546560 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fc9d1000/0x0/0x4ffc00000, data 0x567096/0x659000, compress 0x0/0x0/0x0, omap 0x17e5b, meta 0x2bb81a5), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4546560 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 159 ms_handle_reset con 0x56111b7aec00 session 0x561118d89180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4546560 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102253 data_alloc: 218103808 data_used: 20287
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4546560 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fc9d4000/0x0/0x4ffc00000, data 0x567086/0x658000, compress 0x0/0x0/0x0, omap 0x17fb5, meta 0x2bb804b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4546560 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4546560 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4546560 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4546560 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102253 data_alloc: 218103808 data_used: 20287
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.247079849s of 14.906176567s, submitted: 48
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4546560 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 4538368 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fc9cf000/0x0/0x4ffc00000, data 0x568b05/0x65b000, compress 0x0/0x0/0x0, omap 0x18318, meta 0x2bb7ce8), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 160 ms_handle_reset con 0x561118cfd400 session 0x56111a52a8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fc9cf000/0x0/0x4ffc00000, data 0x568b05/0x65b000, compress 0x0/0x0/0x0, omap 0x18318, meta 0x2bb7ce8), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 160 ms_handle_reset con 0x56111a592800 session 0x56111b39d880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 4497408 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 4497408 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fc9cf000/0x0/0x4ffc00000, data 0x568b77/0x65d000, compress 0x0/0x0/0x0, omap 0x18566, meta 0x2bb7a9a), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 160 ms_handle_reset con 0x56111ad4d800 session 0x5611187a7880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 4489216 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113087 data_alloc: 218103808 data_used: 20287
Jan 31 00:04:51 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fc9cf000/0x0/0x4ffc00000, data 0x568b77/0x65d000, compress 0x0/0x0/0x0, omap 0x1873f, meta 0x2bb78c1), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 4489216 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 160 ms_handle_reset con 0x56111b871000 session 0x56111b51a540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 4464640 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 4177920 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 160 ms_handle_reset con 0x56111b871800 session 0x56111b51ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fc9ce000/0x0/0x4ffc00000, data 0x568bd9/0x65e000, compress 0x0/0x0/0x0, omap 0x1873f, meta 0x2bb78c1), peers [0,1] op hist [0,0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4169728 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 4333568 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 162 ms_handle_reset con 0x56111b192800 session 0x56111b51ac40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124678 data_alloc: 218103808 data_used: 20385
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.605259895s of 10.136058807s, submitted: 36
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 162 ms_handle_reset con 0x56111b871800 session 0x56111ae59c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 4325376 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 163 ms_handle_reset con 0x561118cfd400 session 0x56111990fdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 4259840 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 164 ms_handle_reset con 0x56111ad4d800 session 0x56111981b880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 164 ms_handle_reset con 0x56111a592800 session 0x56111b55f6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 164 ms_handle_reset con 0x56111a592800 session 0x56111b594e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 164 ms_handle_reset con 0x56111b7af000 session 0x56111b39c700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4227072 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 164 ms_handle_reset con 0x56111b871000 session 0x561119511dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4235264 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 164 ms_handle_reset con 0x561118cfd400 session 0x56111b194a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fc9bf000/0x0/0x4ffc00000, data 0x56ff7d/0x66b000, compress 0x0/0x0/0x0, omap 0x194ed, meta 0x2bb6b13), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4218880 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129596 data_alloc: 218103808 data_used: 20986
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4218880 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 164 ms_handle_reset con 0x56111b192800 session 0x561119549c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 164 ms_handle_reset con 0x56111ad4d800 session 0x56111ac5b880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4210688 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 164 ms_handle_reset con 0x56111a592800 session 0x56111b51a000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 164 ms_handle_reset con 0x561118cfd400 session 0x56111b55e8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 164 ms_handle_reset con 0x56111b871000 session 0x561119832000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4227072 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fc9c2000/0x0/0x4ffc00000, data 0x56ff1b/0x66a000, compress 0x0/0x0/0x0, omap 0x1940c, meta 0x2bb6bf4), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4227072 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 165 ms_handle_reset con 0x56111b871800 session 0x56111b51b6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 165 ms_handle_reset con 0x561118cfd400 session 0x56111a5dba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 165 ms_handle_reset con 0x56111b7af000 session 0x56111b39c8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4227072 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134897 data_alloc: 218103808 data_used: 22159
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.463372231s of 10.023789406s, submitted: 62
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fc9bd000/0x0/0x4ffc00000, data 0x571ad3/0x66d000, compress 0x0/0x0/0x0, omap 0x196ce, meta 0x2bb6932), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4227072 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 166 ms_handle_reset con 0x56111a592800 session 0x561119549a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4227072 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fc9ba000/0x0/0x4ffc00000, data 0x573552/0x670000, compress 0x0/0x0/0x0, omap 0x19975, meta 0x2bb668b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4227072 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fc9b5000/0x0/0x4ffc00000, data 0x575142/0x673000, compress 0x0/0x0/0x0, omap 0x19c3a, meta 0x2bb63c6), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 4390912 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 4374528 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139981 data_alloc: 218103808 data_used: 22159
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 168 ms_handle_reset con 0x56111ad4d800 session 0x56111b7b0380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 168 ms_handle_reset con 0x56111b871000 session 0x56111a52ae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 4374528 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fc9b5000/0x0/0x4ffc00000, data 0x576857/0x675000, compress 0x0/0x0/0x0, omap 0x1a2ca, meta 0x2bb5d36), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 4349952 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 168 heartbeat osd_stat(store_statfs(0x4fc9b7000/0x0/0x4ffc00000, data 0x576857/0x675000, compress 0x0/0x0/0x0, omap 0x1a3e8, meta 0x2bb5c18), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 168 handle_osd_map epochs [169,169], i have 169, src has [1,169]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 169 ms_handle_reset con 0x561118cfd400 session 0x561119832380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 169 ms_handle_reset con 0x56111a592800 session 0x5611187a61c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 4333568 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 4333568 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 169 ms_handle_reset con 0x56111ad4d800 session 0x56111b195a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 4333568 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144973 data_alloc: 218103808 data_used: 20872
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.184185982s of 10.652783394s, submitted: 120
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 4333568 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 170 ms_handle_reset con 0x56111b870c00 session 0x56111990f500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 171 ms_handle_reset con 0x56111b7af000 session 0x56111b594380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 171 ms_handle_reset con 0x561118cfd400 session 0x56111b39d340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4210688 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fc9aa000/0x0/0x4ffc00000, data 0x57bb40/0x67e000, compress 0x0/0x0/0x0, omap 0x1b3e7, meta 0x2bb4c19), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 171 handle_osd_map epochs [171,172], i have 172, src has [1,172]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 172 ms_handle_reset con 0x56111a592800 session 0x5611187a6700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 172 ms_handle_reset con 0x56111ad4d800 session 0x5611187a7880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4218880 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3137536 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 172 ms_handle_reset con 0x56111b870c00 session 0x561119832380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fc9a8000/0x0/0x4ffc00000, data 0x57dbef/0x682000, compress 0x0/0x0/0x0, omap 0x1b770, meta 0x2bb4890), peers [0,1] op hist [1,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fc9a8000/0x0/0x4ffc00000, data 0x57dbef/0x682000, compress 0x0/0x0/0x0, omap 0x1b770, meta 0x2bb4890), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 172 ms_handle_reset con 0x56111a593c00 session 0x5611187a7c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 172 handle_osd_map epochs [172,173], i have 172, src has [1,173]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 173 ms_handle_reset con 0x56111aedf800 session 0x561119510c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 173 ms_handle_reset con 0x56111b871c00 session 0x56111b51b500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 2998272 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 174 ms_handle_reset con 0x56111a593c00 session 0x561119832000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159936 data_alloc: 218103808 data_used: 21776
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 174 ms_handle_reset con 0x561118cfd400 session 0x561119549a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 2990080 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 174 ms_handle_reset con 0x56111ad4d800 session 0x56111ac5afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 174 ms_handle_reset con 0x56111a592800 session 0x56111b5956c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 174 ms_handle_reset con 0x561118cfd400 session 0x56111b39d500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 174 handle_osd_map epochs [174,175], i have 174, src has [1,175]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 2981888 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 2981888 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 175 heartbeat osd_stat(store_statfs(0x4fc99f000/0x0/0x4ffc00000, data 0x582b02/0x689000, compress 0x0/0x0/0x0, omap 0x1ca94, meta 0x2bb356c), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 2940928 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 2940928 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162133 data_alloc: 218103808 data_used: 22758
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 2940928 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 175 heartbeat osd_stat(store_statfs(0x4fc99f000/0x0/0x4ffc00000, data 0x582b02/0x689000, compress 0x0/0x0/0x0, omap 0x1ca94, meta 0x2bb356c), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 2940928 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 2940928 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 2940928 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.039074898s of 13.478106499s, submitted: 150
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 175 ms_handle_reset con 0x56111a593c00 session 0x56111ac5b6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 175 ms_handle_reset con 0x56111aedf800 session 0x561119832fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 2940928 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167901 data_alloc: 218103808 data_used: 22758
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 ms_handle_reset con 0x56111b870c00 session 0x56111b39cfc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 ms_handle_reset con 0x56111b871c00 session 0x56111b55f340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 ms_handle_reset con 0x561118cfd400 session 0x56111b194e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 2924544 heap: 85770240 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fc99c000/0x0/0x4ffc00000, data 0x5845b9/0x68c000, compress 0x0/0x0/0x0, omap 0x1ce3a, meta 0x2bb31c6), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 ms_handle_reset con 0x56111a592800 session 0x56111b1941c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 ms_handle_reset con 0x56111a593c00 session 0x56111b194fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 87375872 unmapped: 491520 heap: 87867392 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fc99c000/0x0/0x4ffc00000, data 0x5845b9/0x68c000, compress 0x0/0x0/0x0, omap 0x1ce3a, meta 0x2bb31c6), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 87375872 unmapped: 491520 heap: 87867392 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fc99c000/0x0/0x4ffc00000, data 0x5845b9/0x68c000, compress 0x0/0x0/0x0, omap 0x1ce3a, meta 0x2bb31c6), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 87375872 unmapped: 491520 heap: 87867392 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 ms_handle_reset con 0x56111aedf800 session 0x561119548700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 87375872 unmapped: 491520 heap: 87867392 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177396 data_alloc: 218103808 data_used: 4675830
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 87375872 unmapped: 491520 heap: 87867392 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 ms_handle_reset con 0x561118cfd400 session 0x56111ae596c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 87375872 unmapped: 491520 heap: 87867392 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 ms_handle_reset con 0x56111a593c00 session 0x56111ab0bc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 ms_handle_reset con 0x56111a592800 session 0x56111b55efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 ms_handle_reset con 0x56111b871c00 session 0x56111b55e8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 ms_handle_reset con 0x56111b79fc00 session 0x56111b194a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 6332416 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fc337000/0x0/0x4ffc00000, data 0xbea639/0xcf5000, compress 0x0/0x0/0x0, omap 0x1ce3a, meta 0x2bb31c6), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 ms_handle_reset con 0x56111a592800 session 0x56111b594c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 6332416 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 176 ms_handle_reset con 0x56111a593c00 session 0x56111990ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.772653580s of 10.014619827s, submitted: 56
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 177 ms_handle_reset con 0x561118cfd400 session 0x56111990ea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 87982080 unmapped: 6758400 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185059 data_alloc: 218103808 data_used: 4675943
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 177 handle_osd_map epochs [177,178], i have 177, src has [1,178]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 178 ms_handle_reset con 0x56111b871c00 session 0x56111981a700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fc999000/0x0/0x4ffc00000, data 0x5861c6/0x691000, compress 0x0/0x0/0x0, omap 0x1d0ee, meta 0x2bb2f12), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 87990272 unmapped: 6750208 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 178 ms_handle_reset con 0x56111b7bc800 session 0x56111b55e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 88129536 unmapped: 6610944 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 179 ms_handle_reset con 0x561118cfd400 session 0x56111990f500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 179 ms_handle_reset con 0x56111a592800 session 0x56111a52bdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 88129536 unmapped: 6610944 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 180 ms_handle_reset con 0x56111a593c00 session 0x56111b1948c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 180 ms_handle_reset con 0x56111b7bc800 session 0x56111ac5ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 5545984 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 180 ms_handle_reset con 0x56111b871c00 session 0x5611187a61c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 5537792 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192211 data_alloc: 218103808 data_used: 4676415
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 180 ms_handle_reset con 0x561118cfd400 session 0x561119549c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 90251264 unmapped: 4489216 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 180 heartbeat osd_stat(store_statfs(0x4fb7f3000/0x0/0x4ffc00000, data 0x58b533/0x699000, compress 0x0/0x0/0x0, omap 0x1d838, meta 0x3d527c8), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 90251264 unmapped: 4489216 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 90259456 unmapped: 4481024 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 181 ms_handle_reset con 0x56111a592800 session 0x561118d89180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 181 ms_handle_reset con 0x56111a593c00 session 0x56111b51afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 90259456 unmapped: 4481024 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.520516396s of 10.667137146s, submitted: 58
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 90300416 unmapped: 4440064 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 182 ms_handle_reset con 0x56111b7bc800 session 0x56111b5948c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 182 ms_handle_reset con 0x56111b7bd400 session 0x561118b87dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204728 data_alloc: 218103808 data_used: 4676415
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 182 ms_handle_reset con 0x561118cfd400 session 0x56111b194c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fb7ed000/0x0/0x4ffc00000, data 0x58eb18/0x69f000, compress 0x0/0x0/0x0, omap 0x1e1d8, meta 0x3d51e28), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 90046464 unmapped: 4694016 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 182 ms_handle_reset con 0x56111a593c00 session 0x56111b39c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 4997120 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 182 handle_osd_map epochs [182,183], i have 182, src has [1,183]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 183 ms_handle_reset con 0x56111a592800 session 0x56111b594e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 183 ms_handle_reset con 0x56111b7bc800 session 0x561119510a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 89800704 unmapped: 4939776 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 89808896 unmapped: 4931584 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 183 ms_handle_reset con 0x56111b7bd000 session 0x56111ab0b880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 89808896 unmapped: 4931584 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205132 data_alloc: 218103808 data_used: 4676687
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 183 ms_handle_reset con 0x561118cfd400 session 0x56111b55f6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 89817088 unmapped: 4923392 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 183 heartbeat osd_stat(store_statfs(0x4fb7eb000/0x0/0x4ffc00000, data 0x5906f8/0x6a1000, compress 0x0/0x0/0x0, omap 0x1e7ef, meta 0x3d51811), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 89817088 unmapped: 4923392 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 183 ms_handle_reset con 0x56111a592800 session 0x561119832540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 183 ms_handle_reset con 0x56111a593c00 session 0x56111990fdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 183 ms_handle_reset con 0x56111b7bc800 session 0x56111b51bdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 89817088 unmapped: 4923392 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 183 ms_handle_reset con 0x56111aedf800 session 0x56111a5daa80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 183 ms_handle_reset con 0x561118cfd400 session 0x56111b55e000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 89817088 unmapped: 4923392 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 89817088 unmapped: 4923392 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.560757637s of 10.263559341s, submitted: 61
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208030 data_alloc: 218103808 data_used: 4676687
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 89817088 unmapped: 4923392 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 184 ms_handle_reset con 0x56111a592800 session 0x56111ab0ae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 89817088 unmapped: 4923392 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 184 heartbeat osd_stat(store_statfs(0x4fb7e6000/0x0/0x4ffc00000, data 0x592177/0x6a4000, compress 0x0/0x0/0x0, omap 0x1eb18, meta 0x3d514e8), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 89841664 unmapped: 4898816 heap: 94740480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 185 handle_osd_map epochs [185,186], i have 185, src has [1,186]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 185 handle_osd_map epochs [186,186], i have 186, src has [1,186]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 90808320 unmapped: 7086080 heap: 97894400 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 186 ms_handle_reset con 0x56111a593c00 session 0x56111ab0b6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 186 ms_handle_reset con 0x56111b7bc800 session 0x56111b51ac40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 186 ms_handle_reset con 0x56111b193400 session 0x56111b39d180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 186 ms_handle_reset con 0x561118cfd400 session 0x56111981b6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 186 ms_handle_reset con 0x56111a592800 session 0x56111ab0bdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 90841088 unmapped: 7053312 heap: 97894400 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242035 data_alloc: 218103808 data_used: 4676687
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 186 ms_handle_reset con 0x56111a593c00 session 0x561119511dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 90824704 unmapped: 7069696 heap: 97894400 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 186 ms_handle_reset con 0x56111b7bc800 session 0x56111ae59180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 90824704 unmapped: 7069696 heap: 97894400 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fb48e000/0x0/0x4ffc00000, data 0x8e3926/0x9fa000, compress 0x0/0x0/0x0, omap 0x1f2de, meta 0x3d50d22), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 186 ms_handle_reset con 0x56111a592c00 session 0x56111ae59880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 90824704 unmapped: 7069696 heap: 97894400 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 186 ms_handle_reset con 0x56111a592800 session 0x56111b55f180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 186 ms_handle_reset con 0x561118cfd400 session 0x56111b55fa40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 186 ms_handle_reset con 0x56111a593c00 session 0x56111b51b6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 90734592 unmapped: 7159808 heap: 97894400 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 90750976 unmapped: 7143424 heap: 97894400 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247226 data_alloc: 218103808 data_used: 4783218
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 3309568 heap: 97894400 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 heartbeat osd_stat(store_statfs(0x4fb48d000/0x0/0x4ffc00000, data 0x8e53a5/0x9fd000, compress 0x0/0x0/0x0, omap 0x1f9cb, meta 0x3d50635), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 heartbeat osd_stat(store_statfs(0x4fb48d000/0x0/0x4ffc00000, data 0x8e53a5/0x9fd000, compress 0x0/0x0/0x0, omap 0x1f9cb, meta 0x3d50635), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 3309568 heap: 97894400 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 3309568 heap: 97894400 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 heartbeat osd_stat(store_statfs(0x4fb48d000/0x0/0x4ffc00000, data 0x8e53a5/0x9fd000, compress 0x0/0x0/0x0, omap 0x1f9cb, meta 0x3d50635), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.701133728s of 13.662034988s, submitted: 73
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 ms_handle_reset con 0x56111aac9800 session 0x56111b55ea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 94584832 unmapped: 3309568 heap: 97894400 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 94593024 unmapped: 3301376 heap: 97894400 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267666 data_alloc: 218103808 data_used: 7939698
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 ms_handle_reset con 0x56111aac9400 session 0x561119548a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 ms_handle_reset con 0x56111aac9400 session 0x56111b595180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 ms_handle_reset con 0x561118cfd400 session 0x56111ab0ac40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 94797824 unmapped: 7430144 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 ms_handle_reset con 0x56111a592800 session 0x56111b594700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 ms_handle_reset con 0x56111a593c00 session 0x561118ca6540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 heartbeat osd_stat(store_statfs(0x4fafef000/0x0/0x4ffc00000, data 0xd843b5/0xe9d000, compress 0x0/0x0/0x0, omap 0x1f9cb, meta 0x3d50635), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 ms_handle_reset con 0x56111aac9800 session 0x5611195496c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 94806016 unmapped: 7421952 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 heartbeat osd_stat(store_statfs(0x4fafef000/0x0/0x4ffc00000, data 0xd843b5/0xe9d000, compress 0x0/0x0/0x0, omap 0x1f9cb, meta 0x3d50635), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 ms_handle_reset con 0x561118cfd400 session 0x56111b51c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 ms_handle_reset con 0x56111aac9800 session 0x56111b39da40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 94814208 unmapped: 7413760 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 heartbeat osd_stat(store_statfs(0x4faff0000/0x0/0x4ffc00000, data 0xd843a5/0xe9c000, compress 0x0/0x0/0x0, omap 0x1f9cb, meta 0x3d50635), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 ms_handle_reset con 0x56111a592800 session 0x561119832700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 94814208 unmapped: 7413760 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 ms_handle_reset con 0x56111b7bc800 session 0x56111b51c540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 ms_handle_reset con 0x56111a566000 session 0x5611187a7180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 ms_handle_reset con 0x56111a592800 session 0x56111a5db880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 93798400 unmapped: 8429568 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 ms_handle_reset con 0x56111b7bc800 session 0x561119511500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261596 data_alloc: 218103808 data_used: 4676706
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fb33e000/0x0/0x4ffc00000, data 0xa363a5/0xb4e000, compress 0x0/0x0/0x0, omap 0x1fd9a, meta 0x3d50266), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 93331456 unmapped: 8896512 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 188 ms_handle_reset con 0x56111aac9400 session 0x56111b39c540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 188 ms_handle_reset con 0x56111aac9000 session 0x56111981afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 8626176 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 188 ms_handle_reset con 0x561118cfd400 session 0x56111b7b1a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 188 handle_osd_map epochs [189,189], i have 189, src has [1,189]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 189 ms_handle_reset con 0x56111aac8000 session 0x56111b194380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 93904896 unmapped: 8323072 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 189 ms_handle_reset con 0x56111a593c00 session 0x56111a5dafc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 189 ms_handle_reset con 0x561118cfd400 session 0x561118d88540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 189 ms_handle_reset con 0x56111aac9800 session 0x56111990f6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 8282112 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 96370688 unmapped: 5857280 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302119 data_alloc: 234881024 data_used: 9512066
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 96370688 unmapped: 5857280 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 189 heartbeat osd_stat(store_statfs(0x4fb312000/0x0/0x4ffc00000, data 0xa5dfe2/0xb7a000, compress 0x0/0x0/0x0, omap 0x20417, meta 0x3d4fbe9), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 189 ms_handle_reset con 0x56111a592800 session 0x561118ca6e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 189 ms_handle_reset con 0x56111aac9000 session 0x56111b195180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.581818581s of 13.080508232s, submitted: 41
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 96387072 unmapped: 5840896 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 96403456 unmapped: 5824512 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 96108544 unmapped: 6119424 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 190 ms_handle_reset con 0x56111a593c00 session 0x56111981b340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 94904320 unmapped: 7323648 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250215 data_alloc: 218103808 data_used: 4678770
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 190 heartbeat osd_stat(store_statfs(0x4fb7ad000/0x0/0x4ffc00000, data 0x5c0c34/0x6df000, compress 0x0/0x0/0x0, omap 0x207b6, meta 0x3d4f84a), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 190 ms_handle_reset con 0x56111aac9400 session 0x56111b55f340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 94904320 unmapped: 7323648 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 190 ms_handle_reset con 0x56111aac9800 session 0x56111ae59a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 190 ms_handle_reset con 0x56111aac8000 session 0x561118ca6fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 94945280 unmapped: 7282688 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 94978048 unmapped: 7249920 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 190 handle_osd_map epochs [190,191], i have 191, src has [1,191]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 191 ms_handle_reset con 0x56111aac9000 session 0x56111ae58700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 191 ms_handle_reset con 0x561118cfd400 session 0x56111981b880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 191 ms_handle_reset con 0x56111a593c00 session 0x56111b51d500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95027200 unmapped: 7200768 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 191 heartbeat osd_stat(store_statfs(0x4fb7d0000/0x0/0x4ffc00000, data 0x59e2c9/0x6ba000, compress 0x0/0x0/0x0, omap 0x20bcc, meta 0x3d4f434), peers [0,1] op hist [0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 191 handle_osd_map epochs [191,192], i have 191, src has [1,192]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95068160 unmapped: 7159808 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252158 data_alloc: 218103808 data_used: 4676706
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 7135232 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 192 ms_handle_reset con 0x56111aac9800 session 0x56111b194540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 192 ms_handle_reset con 0x56111aac9400 session 0x56111b51a000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95010816 unmapped: 7217152 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 192 ms_handle_reset con 0x56111aac9400 session 0x56111a5db500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.008672714s of 10.639857292s, submitted: 71
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95010816 unmapped: 7217152 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95010816 unmapped: 7217152 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 192 ms_handle_reset con 0x56111aac9800 session 0x56111b51c700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95010816 unmapped: 7217152 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256066 data_alloc: 218103808 data_used: 4676706
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb7ca000/0x0/0x4ffc00000, data 0x5a17e3/0x6c0000, compress 0x0/0x0/0x0, omap 0x21324, meta 0x3d4ecdc), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111b7bc800 session 0x56111b594000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac8400 session 0x56111b51a540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac9c00 session 0x56111b55f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac8400 session 0x56111b51ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95010816 unmapped: 7217152 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac9400 session 0x561119832fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac9800 session 0x56111990e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111b7bc800 session 0x56111990efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111ad4c400 session 0x56111981ac40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac8400 session 0x561119548e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac9400 session 0x561118d88c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac9800 session 0x56111b595340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95436800 unmapped: 6791168 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95436800 unmapped: 6791168 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95436800 unmapped: 6791168 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb56b000/0x0/0x4ffc00000, data 0x800803/0x921000, compress 0x0/0x0/0x0, omap 0x21324, meta 0x3d4ecdc), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95436800 unmapped: 6791168 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1276010 data_alloc: 218103808 data_used: 4676978
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb56b000/0x0/0x4ffc00000, data 0x800803/0x921000, compress 0x0/0x0/0x0, omap 0x21324, meta 0x3d4ecdc), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95436800 unmapped: 6791168 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 6782976 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 6782976 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.914413452s of 10.805289268s, submitted: 26
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111b7aec00 session 0x56111a5daa80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111b7bc000 session 0x56111b7b0a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac8400 session 0x56111b51ca80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac9400 session 0x56111b39cc40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111b7bc800 session 0x56111b39d6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 6832128 heap: 102227968 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb547000/0x0/0x4ffc00000, data 0x824803/0x945000, compress 0x0/0x0/0x0, omap 0x21595, meta 0x3d4ea6b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 106799104 unmapped: 11173888 heap: 117972992 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362095 data_alloc: 218103808 data_used: 4676994
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac9800 session 0x56111b7b0380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aedf800 session 0x561118ca6a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac8400 session 0x56111b595880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 96206848 unmapped: 21766144 heap: 117972992 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac9400 session 0x56111b51bc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac9800 session 0x56111990f340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111b7bc800 session 0x561119549a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 96329728 unmapped: 21643264 heap: 117972992 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 heartbeat osd_stat(store_statfs(0x4faa98000/0x0/0x4ffc00000, data 0x12d3803/0x13f4000, compress 0x0/0x0/0x0, omap 0x21595, meta 0x3d4ea6b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111a592c00 session 0x56111b51b880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 96329728 unmapped: 21643264 heap: 117972992 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111b7aec00 session 0x56111b7b1180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111b7bcc00 session 0x56111b39c000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac8400 session 0x561119548380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111a592c00 session 0x56111b594380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 22233088 heap: 117972992 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac9400 session 0x56111b194000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 94961664 unmapped: 23011328 heap: 117972992 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332326 data_alloc: 218103808 data_used: 4679026
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 heartbeat osd_stat(store_statfs(0x4facd3000/0x0/0x4ffc00000, data 0x1098803/0x11b9000, compress 0x0/0x0/0x0, omap 0x21740, meta 0x3d4e8c0), peers [0,1] op hist [0,0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111b7aec00 session 0x56111b7b1180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac9800 session 0x56111b194fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 103284736 unmapped: 14688256 heap: 117972992 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111b7bcc00 session 0x56111b39d180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 103284736 unmapped: 14688256 heap: 117972992 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111a592c00 session 0x561118ca68c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 103292928 unmapped: 14680064 heap: 117972992 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.487787724s of 10.112561226s, submitted: 46
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac8400 session 0x561119832000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 heartbeat osd_stat(store_statfs(0x4facf7000/0x0/0x4ffc00000, data 0x10747f3/0x1194000, compress 0x0/0x0/0x0, omap 0x21825, meta 0x3d4e7db), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 103161856 unmapped: 14811136 heap: 117972992 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 20652032 heap: 117972992 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270946 data_alloc: 218103808 data_used: 4679026
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111aac8400 session 0x56111b55f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 20635648 heap: 117972992 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb7cb000/0x0/0x4ffc00000, data 0x5a17f3/0x6c1000, compress 0x0/0x0/0x0, omap 0x2197d, meta 0x3d4e683), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 97337344 unmapped: 20635648 heap: 117972992 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 ms_handle_reset con 0x56111b7bc800 session 0x56111b7b0fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 97353728 unmapped: 20619264 heap: 117972992 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 193 handle_osd_map epochs [193,194], i have 194, src has [1,194]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 97386496 unmapped: 28983296 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 194 heartbeat osd_stat(store_statfs(0x4fafc4000/0x0/0x4ffc00000, data 0xda379f/0xec6000, compress 0x0/0x0/0x0, omap 0x21f6b, meta 0x3d4e095), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 194 ms_handle_reset con 0x56111a593c00 session 0x56111b55ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 97411072 unmapped: 28958720 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 194 heartbeat osd_stat(store_statfs(0x4fa7c4000/0x0/0x4ffc00000, data 0x15a379f/0x16c6000, compress 0x0/0x0/0x0, omap 0x21fdd, meta 0x3d4e023), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360019 data_alloc: 218103808 data_used: 4676978
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 97517568 unmapped: 28852224 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 97542144 unmapped: 28827648 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 194 heartbeat osd_stat(store_statfs(0x4f8fc6000/0x0/0x4ffc00000, data 0x2da379f/0x2ec6000, compress 0x0/0x0/0x0, omap 0x21fdd, meta 0x3d4e023), peers [0,1] op hist [0,0,0,0,0,2])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 97607680 unmapped: 28762112 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.245775223s of 10.463717461s, submitted: 45
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 194 handle_osd_map epochs [194,195], i have 195, src has [1,195]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 107094016 unmapped: 19275776 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 107143168 unmapped: 19226624 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1611468 data_alloc: 218103808 data_used: 4676978
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 98746368 unmapped: 27623424 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 195 ms_handle_reset con 0x56111b7af800 session 0x56111b595c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 27598848 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 195 heartbeat osd_stat(store_statfs(0x4f6fc5000/0x0/0x4ffc00000, data 0x4da4f7f/0x4ec7000, compress 0x0/0x0/0x0, omap 0x226f7, meta 0x3d4d909), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 195 heartbeat osd_stat(store_statfs(0x4f6fc5000/0x0/0x4ffc00000, data 0x4da4f7f/0x4ec7000, compress 0x0/0x0/0x0, omap 0x226f7, meta 0x3d4d909), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 98861056 unmapped: 27508736 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 19087360 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 98910208 unmapped: 27459584 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1835636 data_alloc: 218103808 data_used: 4676978
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 196 ms_handle_reset con 0x5611195b2800 session 0x56111ae58a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99074048 unmapped: 27295744 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 196 ms_handle_reset con 0x56111b7afc00 session 0x56111b194e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 18890752 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 196 heartbeat osd_stat(store_statfs(0x4f4fbe000/0x0/0x4ffc00000, data 0x6da6f47/0x6ecc000, compress 0x0/0x0/0x0, omap 0x22bf2, meta 0x3d4d40e), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99246080 unmapped: 27123712 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 196 ms_handle_reset con 0x5611195b2800 session 0x561118ca6e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 196 heartbeat osd_stat(store_statfs(0x4f3fc0000/0x0/0x4ffc00000, data 0x7da6f47/0x7ecc000, compress 0x0/0x0/0x0, omap 0x22bf2, meta 0x3d4d40e), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.416533470s of 10.019760132s, submitted: 56
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99328000 unmapped: 27041792 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99336192 unmapped: 27033600 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 196 heartbeat osd_stat(store_statfs(0x4f2fc0000/0x0/0x4ffc00000, data 0x8da6f47/0x8ecc000, compress 0x0/0x0/0x0, omap 0x22bf2, meta 0x3d4d40e), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1995053 data_alloc: 218103808 data_used: 4676994
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99418112 unmapped: 26951680 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99418112 unmapped: 26951680 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 197 handle_osd_map epochs [197,198], i have 197, src has [1,198]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99409920 unmapped: 26959872 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 198 ms_handle_reset con 0x56111a593c00 session 0x56111b51d500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99459072 unmapped: 26910720 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99500032 unmapped: 26869760 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2120551 data_alloc: 218103808 data_used: 4676978
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 198 heartbeat osd_stat(store_statfs(0x4f17bb000/0x0/0x4ffc00000, data 0xa5aa5a6/0xa6d1000, compress 0x0/0x0/0x0, omap 0x233ac, meta 0x3d4cc54), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99500032 unmapped: 26869760 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 198 ms_handle_reset con 0x56111aac8400 session 0x56111a5dba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99598336 unmapped: 26771456 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 198 heartbeat osd_stat(store_statfs(0x4f07bc000/0x0/0x4ffc00000, data 0xb5aa1a6/0xb6d0000, compress 0x0/0x0/0x0, omap 0x2342e, meta 0x3d4cbd2), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99680256 unmapped: 26689536 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.285032749s of 10.180331230s, submitted: 50
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 26632192 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 198 heartbeat osd_stat(store_statfs(0x4ef7bc000/0x0/0x4ffc00000, data 0xc5aa1a6/0xc6d0000, compress 0x0/0x0/0x0, omap 0x2342e, meta 0x3d4cbd2), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 198 ms_handle_reset con 0x56111b7af800 session 0x56111b55e700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 198 ms_handle_reset con 0x56111b7bc800 session 0x56111b51ddc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 26583040 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2329803 data_alloc: 218103808 data_used: 4676994
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99868672 unmapped: 26501120 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 199 ms_handle_reset con 0x5611195b2800 session 0x561119548e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 26443776 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 26443776 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 26443776 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 199 heartbeat osd_stat(store_statfs(0x4edfb7000/0x0/0x4ffc00000, data 0xddabc25/0xded3000, compress 0x0/0x0/0x0, omap 0x23760, meta 0x3d4c8a0), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 9658368 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2471249 data_alloc: 218103808 data_used: 4676978
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 199 ms_handle_reset con 0x56111a593c00 session 0x56111b39c700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99983360 unmapped: 26386432 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 199 heartbeat osd_stat(store_statfs(0x4ed7ba000/0x0/0x4ffc00000, data 0xe5abc15/0xe6d2000, compress 0x0/0x0/0x0, omap 0x23760, meta 0x3d4c8a0), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 199 ms_handle_reset con 0x56111aac8400 session 0x56111b5956c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99983360 unmapped: 26386432 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 199 heartbeat osd_stat(store_statfs(0x4ecfba000/0x0/0x4ffc00000, data 0xedabc15/0xeed2000, compress 0x0/0x0/0x0, omap 0x23760, meta 0x3d4c8a0), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100081664 unmapped: 26288128 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.864444256s of 10.385126114s, submitted: 22
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 17842176 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 26099712 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 199 ms_handle_reset con 0x56111b7afc00 session 0x56111990ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2664597 data_alloc: 218103808 data_used: 4676978
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 199 heartbeat osd_stat(store_statfs(0x4eb7ba000/0x0/0x4ffc00000, data 0x105abc15/0x106d2000, compress 0x0/0x0/0x0, omap 0x23760, meta 0x3d4c8a0), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 199 ms_handle_reset con 0x5611195b2800 session 0x56111ae59500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100245504 unmapped: 26124288 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 26009600 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 199 handle_osd_map epochs [199,200], i have 200, src has [1,200]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 200 ms_handle_reset con 0x56111a593c00 session 0x56111b195880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100548608 unmapped: 25821184 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 25714688 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 200 ms_handle_reset con 0x56111aac8400 session 0x56111b55efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 200 heartbeat osd_stat(store_statfs(0x4e6fb4000/0x0/0x4ffc00000, data 0x14dad7c1/0x14ed6000, compress 0x0/0x0/0x0, omap 0x23a20, meta 0x3d4c5e0), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 200 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 201 heartbeat osd_stat(store_statfs(0x4e67af000/0x0/0x4ffc00000, data 0x155af35d/0x156d9000, compress 0x0/0x0/0x0, omap 0x23b82, meta 0x3d4c47e), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100794368 unmapped: 25575424 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3086809 data_alloc: 218103808 data_used: 4676994
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 201 heartbeat osd_stat(store_statfs(0x4e67af000/0x0/0x4ffc00000, data 0x155af35d/0x156d9000, compress 0x0/0x0/0x0, omap 0x23b82, meta 0x3d4c47e), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 25509888 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 202 ms_handle_reset con 0x56111b7bc800 session 0x56111b595340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 25387008 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 16850944 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 202 ms_handle_reset con 0x5611195b3400 session 0x56111981a540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 202 heartbeat osd_stat(store_statfs(0x4e1fae000/0x0/0x4ffc00000, data 0x19db0fab/0x19ede000, compress 0x0/0x0/0x0, omap 0x23e45, meta 0x3d4c1bb), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.980153561s of 10.006151199s, submitted: 29
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 101507072 unmapped: 24862720 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 202 ms_handle_reset con 0x5611195b2800 session 0x56111b7b1dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 202 ms_handle_reset con 0x56111aac8400 session 0x56111b55f340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 203 ms_handle_reset con 0x56111a593c00 session 0x56111b55f6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 16203776 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 203 ms_handle_reset con 0x56111b7bc800 session 0x56111b1948c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635090 data_alloc: 218103808 data_used: 4676994
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 203 ms_handle_reset con 0x5611195b3800 session 0x56111b39ddc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 203 handle_osd_map epochs [203,204], i have 203, src has [1,204]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 204 ms_handle_reset con 0x5611195b2800 session 0x56111a5db880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 204 ms_handle_reset con 0x56111aac8c00 session 0x561118ca6fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 24485888 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 204 ms_handle_reset con 0x5611195b3800 session 0x56111b39d880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 101892096 unmapped: 24477696 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 101892096 unmapped: 24477696 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 204 handle_osd_map epochs [204,205], i have 204, src has [1,205]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 205 heartbeat osd_stat(store_statfs(0x4defa8000/0x0/0x4ffc00000, data 0x1cdb482e/0x1cee2000, compress 0x0/0x0/0x0, omap 0x243d1, meta 0x3d4bc2f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 205 ms_handle_reset con 0x56111a593c00 session 0x56111b51b880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 101916672 unmapped: 24453120 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 205 ms_handle_reset con 0x56111aac8400 session 0x56111b194c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 205 ms_handle_reset con 0x5611195b2800 session 0x561119832fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 205 ms_handle_reset con 0x5611195b3800 session 0x561119548fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 205 ms_handle_reset con 0x56111a593c00 session 0x56111b55ea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 206 ms_handle_reset con 0x56111aac8c00 session 0x561119832c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 26632192 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 206 heartbeat osd_stat(store_statfs(0x4fb7a6000/0x0/0x4ffc00000, data 0x5b65b7/0x6e4000, compress 0x0/0x0/0x0, omap 0x2471c, meta 0x3d4b8e4), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387019 data_alloc: 218103808 data_used: 4677591
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 26632192 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 26632192 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 26632192 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 206 heartbeat osd_stat(store_statfs(0x4fb7a1000/0x0/0x4ffc00000, data 0x5b806e/0x6e7000, compress 0x0/0x0/0x0, omap 0x24b4e, meta 0x3d4b4b2), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 26632192 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.704602242s of 10.769770622s, submitted: 142
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 206 ms_handle_reset con 0x56111b7bc800 session 0x56111ac5afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 26583040 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391081 data_alloc: 218103808 data_used: 4677591
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 26583040 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 26583040 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 208 ms_handle_reset con 0x5611195b2800 session 0x56111b594700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 26583040 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 208 ms_handle_reset con 0x5611195b3800 session 0x56111b594540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 208 heartbeat osd_stat(store_statfs(0x4fb79a000/0x0/0x4ffc00000, data 0x5bb709/0x6ee000, compress 0x0/0x0/0x0, omap 0x24fbc, meta 0x3d4b044), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 26583040 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 208 ms_handle_reset con 0x56111a593c00 session 0x56111b195dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 208 heartbeat osd_stat(store_statfs(0x4fb79a000/0x0/0x4ffc00000, data 0x5bb709/0x6ee000, compress 0x0/0x0/0x0, omap 0x24fbc, meta 0x3d4b044), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 208 handle_osd_map epochs [209,209], i have 209, src has [1,209]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 209 handle_osd_map epochs [209,210], i have 209, src has [1,210]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 210 heartbeat osd_stat(store_statfs(0x4fb797000/0x0/0x4ffc00000, data 0x5bd325/0x6f1000, compress 0x0/0x0/0x0, omap 0x2528c, meta 0x3d4ad74), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 26550272 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 210 ms_handle_reset con 0x56111aac8c00 session 0x56111b55ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 210 ms_handle_reset con 0x56111b193400 session 0x561119549dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399567 data_alloc: 218103808 data_used: 4677591
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 26533888 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 26533888 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 26533888 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 26533888 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 210 heartbeat osd_stat(store_statfs(0x4fb793000/0x0/0x4ffc00000, data 0x5bede8/0x6f3000, compress 0x0/0x0/0x0, omap 0x25642, meta 0x3d4a9be), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.856229782s of 10.038587570s, submitted: 92
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 210 ms_handle_reset con 0x5611195b2800 session 0x56111b7b1880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 26533888 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1404871 data_alloc: 218103808 data_used: 4677591
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 26533888 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x5611195b3800 session 0x56111a52a540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 heartbeat osd_stat(store_statfs(0x4fb792000/0x0/0x4ffc00000, data 0x5c08d9/0x6f8000, compress 0x0/0x0/0x0, omap 0x25988, meta 0x3d4a678), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 26533888 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 26533888 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 26533888 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 heartbeat osd_stat(store_statfs(0x4fb791000/0x0/0x4ffc00000, data 0x5c08e9/0x6f9000, compress 0x0/0x0/0x0, omap 0x25988, meta 0x3d4a678), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 26533888 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406623 data_alloc: 218103808 data_used: 4677591
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 26533888 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 26533888 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 heartbeat osd_stat(store_statfs(0x4fb791000/0x0/0x4ffc00000, data 0x5c08e9/0x6f9000, compress 0x0/0x0/0x0, omap 0x25988, meta 0x3d4a678), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111a593c00 session 0x5611198328c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111aac8c00 session 0x56111b7b0e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 26091520 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111aac8800 session 0x561118b87c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x5611195b2800 session 0x56111ab0bc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x5611195b3800 session 0x56111981ac40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111aac9000 session 0x56111ab0b6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111b4a7c00 session 0x561118b868c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111b4a7800 session 0x56111a52ae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111aac9000 session 0x56111ab0b880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x5611195b3800 session 0x561119549c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x5611195b2800 session 0x561119549500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111b4a6400 session 0x56111a52a8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111b4a7400 session 0x56111b55f180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x5611195b2800 session 0x56111ae58000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100253696 unmapped: 26116096 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100253696 unmapped: 26116096 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497534 data_alloc: 218103808 data_used: 4677591
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100253696 unmapped: 26116096 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 heartbeat osd_stat(store_statfs(0x4fa856000/0x0/0x4ffc00000, data 0x14fd8e9/0x1636000, compress 0x0/0x0/0x0, omap 0x25b08, meta 0x3d4a4f8), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x5611195b3800 session 0x56111b194700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111aac9000 session 0x56111b51ca80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100253696 unmapped: 26116096 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100253696 unmapped: 26116096 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111b4a6400 session 0x56111ab0ae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.630872726s of 14.121621132s, submitted: 40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111b4a7c00 session 0x56111a52aa80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 25985024 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111aac9000 session 0x56111b7b1340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 100745216 unmapped: 25624576 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1509322 data_alloc: 218103808 data_used: 5819863
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 101507072 unmapped: 24862720 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 heartbeat osd_stat(store_statfs(0x4fa82b000/0x0/0x4ffc00000, data 0x15278f9/0x1661000, compress 0x0/0x0/0x0, omap 0x25b08, meta 0x3d4a4f8), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 101507072 unmapped: 24862720 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 16801792 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109633536 unmapped: 16736256 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 heartbeat osd_stat(store_statfs(0x4fa82b000/0x0/0x4ffc00000, data 0x15278f9/0x1661000, compress 0x0/0x0/0x0, omap 0x25b08, meta 0x3d4a4f8), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109666304 unmapped: 16703488 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1589066 data_alloc: 234881024 data_used: 15976407
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109666304 unmapped: 16703488 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109666304 unmapped: 16703488 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111c377400 session 0x56111b595180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111c377800 session 0x56111a52bdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 16572416 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 heartbeat osd_stat(store_statfs(0x4fa82b000/0x0/0x4ffc00000, data 0x15278f9/0x1661000, compress 0x0/0x0/0x0, omap 0x25b08, meta 0x3d4a4f8), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 16408576 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111c377c00 session 0x56111ae58380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.733712196s of 10.744873047s, submitted: 3
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112263168 unmapped: 14106624 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 heartbeat osd_stat(store_statfs(0x4fa82b000/0x0/0x4ffc00000, data 0x15278f9/0x1661000, compress 0x0/0x0/0x0, omap 0x25b08, meta 0x3d4a4f8), peers [0,1] op hist [0,0,0,0,0,0,0,19])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1626696 data_alloc: 234881024 data_used: 15976407
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 115269632 unmapped: 11100160 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 heartbeat osd_stat(store_statfs(0x4fa0a1000/0x0/0x4ffc00000, data 0x1cb18f9/0x1deb000, compress 0x0/0x0/0x0, omap 0x25b08, meta 0x3d4a4f8), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,4])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 115302400 unmapped: 11067392 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 10698752 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 12189696 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 heartbeat osd_stat(store_statfs(0x4f9eb3000/0x0/0x4ffc00000, data 0x1e978f9/0x1fd1000, compress 0x0/0x0/0x0, omap 0x25b08, meta 0x3d4a4f8), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 10330112 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1641698 data_alloc: 234881024 data_used: 15995863
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 heartbeat osd_stat(store_statfs(0x4f9eb3000/0x0/0x4ffc00000, data 0x1e978f9/0x1fd1000, compress 0x0/0x0/0x0, omap 0x25b08, meta 0x3d4a4f8), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 9781248 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 heartbeat osd_stat(store_statfs(0x4f9eb3000/0x0/0x4ffc00000, data 0x1e978f9/0x1fd1000, compress 0x0/0x0/0x0, omap 0x25b08, meta 0x3d4a4f8), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 9535488 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111c377000 session 0x56111b51da40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111aac9000 session 0x561119549340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117489664 unmapped: 8880128 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117489664 unmapped: 8880128 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 heartbeat osd_stat(store_statfs(0x4f9e06000/0x0/0x4ffc00000, data 0x1f4c8f9/0x2086000, compress 0x0/0x0/0x0, omap 0x25d42, meta 0x3d4a2be), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 ms_handle_reset con 0x56111c377400 session 0x56111ab0afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.216591835s of 10.430870056s, submitted: 117
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117587968 unmapped: 8781824 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1688028 data_alloc: 234881024 data_used: 18152407
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 212 ms_handle_reset con 0x56111c377800 session 0x56111ae588c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 212 heartbeat osd_stat(store_statfs(0x4f9e06000/0x0/0x4ffc00000, data 0x1f4c8f9/0x2086000, compress 0x0/0x0/0x0, omap 0x25f72, meta 0x3d4a08e), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 213 ms_handle_reset con 0x56111c377c00 session 0x5611187a6a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118267904 unmapped: 8101888 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 213 ms_handle_reset con 0x56111c377000 session 0x561118d888c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118284288 unmapped: 8085504 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118292480 unmapped: 8077312 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 213 heartbeat osd_stat(store_statfs(0x4f9c67000/0x0/0x4ffc00000, data 0x20e4093/0x2221000, compress 0x0/0x0/0x0, omap 0x26570, meta 0x3d49a90), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118333440 unmapped: 8036352 heap: 126369792 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 213 ms_handle_reset con 0x56111c377000 session 0x56111b51ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 213 ms_handle_reset con 0x56111aac9000 session 0x561119510380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 213 ms_handle_reset con 0x56111c377800 session 0x56111981a380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 213 handle_osd_map epochs [213,214], i have 214, src has [1,214]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 10665984 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 214 ms_handle_reset con 0x56111c377c00 session 0x561119511180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 214 ms_handle_reset con 0x56111c912c00 session 0x56111b7b1180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1726552 data_alloc: 234881024 data_used: 18152423
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 214 ms_handle_reset con 0x56111c377400 session 0x56111ae58fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 10665984 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 214 heartbeat osd_stat(store_statfs(0x4f97c7000/0x0/0x4ffc00000, data 0x2584c2f/0x26c3000, compress 0x0/0x0/0x0, omap 0x2684b, meta 0x3d497b5), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 214 ms_handle_reset con 0x56111c377000 session 0x56111ac5afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 214 handle_osd_map epochs [214,215], i have 214, src has [1,215]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118890496 unmapped: 11157504 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 215 ms_handle_reset con 0x56111c377c00 session 0x56111b64e000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 11149312 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 215 handle_osd_map epochs [215,216], i have 215, src has [1,216]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 216 ms_handle_reset con 0x56111c377800 session 0x56111b195c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 216 ms_handle_reset con 0x56111c912c00 session 0x56111b64e1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 216 ms_handle_reset con 0x56111aac9000 session 0x561119548700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118923264 unmapped: 11124736 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118923264 unmapped: 11124736 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.905153275s of 10.284303665s, submitted: 34
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1740346 data_alloc: 234881024 data_used: 18153024
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 217 ms_handle_reset con 0x56111c377000 session 0x56111b7b0a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 217 ms_handle_reset con 0x56111c377400 session 0x561118ca6380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118956032 unmapped: 11091968 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118956032 unmapped: 11091968 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 217 ms_handle_reset con 0x56111c376c00 session 0x561118b86000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 217 heartbeat osd_stat(store_statfs(0x4f97ba000/0x0/0x4ffc00000, data 0x2589fb9/0x26cd000, compress 0x0/0x0/0x0, omap 0x27183, meta 0x3d48e7d), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 11059200 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 217 ms_handle_reset con 0x56111b4a6400 session 0x56111724ddc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 217 ms_handle_reset con 0x56111b4a7800 session 0x56111b594c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 15335424 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 217 ms_handle_reset con 0x56111b4a6400 session 0x56111b39da40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 15335424 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1601008 data_alloc: 234881024 data_used: 11190832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 15335424 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 15335424 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 217 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1941fb9/0x1a85000, compress 0x0/0x0/0x0, omap 0x27183, meta 0x3d48e7d), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 15335424 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 217 ms_handle_reset con 0x56111c912800 session 0x56111b51a700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 217 ms_handle_reset con 0x56111c912400 session 0x56111b55e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 15335424 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 218 ms_handle_reset con 0x56111c377c00 session 0x56111981a000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 218 ms_handle_reset con 0x56111b4a6400 session 0x56111b55e1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 15335424 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.878555298s of 10.165160179s, submitted: 36
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1610929 data_alloc: 234881024 data_used: 11224112
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x194562e/0x1a89000, compress 0x0/0x0/0x0, omap 0x27782, meta 0x3d4887e), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 ms_handle_reset con 0x56111aac9000 session 0x56111b594e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 114565120 unmapped: 15482880 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 ms_handle_reset con 0x5611195b2800 session 0x56111b51aa80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 ms_handle_reset con 0x5611195b3800 session 0x56111a52b340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 114565120 unmapped: 15482880 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 ms_handle_reset con 0x56111c912400 session 0x561118b86c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 ms_handle_reset con 0x5611195b2800 session 0x561118ca6540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 17473536 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 ms_handle_reset con 0x5611195b3800 session 0x56111b51ce00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 17473536 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 17473536 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532700 data_alloc: 234881024 data_used: 9437286
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 heartbeat osd_stat(store_statfs(0x4fb0a5000/0x0/0x4ffc00000, data 0xca261e/0xde5000, compress 0x0/0x0/0x0, omap 0x278ac, meta 0x3d48754), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 heartbeat osd_stat(store_statfs(0x4fb0a5000/0x0/0x4ffc00000, data 0xca261e/0xde5000, compress 0x0/0x0/0x0, omap 0x278ac, meta 0x3d48754), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 17473536 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 ms_handle_reset con 0x56111aac9000 session 0x56111b594000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 ms_handle_reset con 0x56111c912800 session 0x56111b39c1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 heartbeat osd_stat(store_statfs(0x4fb0a5000/0x0/0x4ffc00000, data 0xca261e/0xde5000, compress 0x0/0x0/0x0, omap 0x278ac, meta 0x3d48754), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112680960 unmapped: 17367040 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 ms_handle_reset con 0x56111b4a6400 session 0x56111a5dbc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 ms_handle_reset con 0x56111c913800 session 0x561119832000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 ms_handle_reset con 0x56111b4a6400 session 0x56111b39d500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 17358848 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 ms_handle_reset con 0x5611195b3800 session 0x56111b51b180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 13975552 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 220 ms_handle_reset con 0x56111aac9000 session 0x561119511c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 221 ms_handle_reset con 0x56111c912800 session 0x56111b51ae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 221 ms_handle_reset con 0x5611195b2800 session 0x56111b51d340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117366784 unmapped: 12681216 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1644834 data_alloc: 234881024 data_used: 9523286
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.168094635s of 10.646254539s, submitted: 139
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117514240 unmapped: 12533760 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 221 heartbeat osd_stat(store_statfs(0x4fa255000/0x0/0x4ffc00000, data 0x1bc3d6a/0x1c35000, compress 0x0/0x0/0x0, omap 0x281a0, meta 0x3d47e60), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117514240 unmapped: 12533760 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117514240 unmapped: 12533760 heap: 130048000 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 221 ms_handle_reset con 0x56111aac9000 session 0x561119832700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 221 ms_handle_reset con 0x5611195b3800 session 0x56111ab3ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 221 ms_handle_reset con 0x56111c913800 session 0x561118b86fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 222 ms_handle_reset con 0x56111c913c00 session 0x56111b64e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 222 ms_handle_reset con 0x5611195b2800 session 0x56111a5dba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 18300928 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 222 ms_handle_reset con 0x56111b4a6400 session 0x56111b51a380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 18300928 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 222 heartbeat osd_stat(store_statfs(0x4f9c13000/0x0/0x4ffc00000, data 0x2204906/0x2277000, compress 0x0/0x0/0x0, omap 0x28489, meta 0x3d47b77), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1692436 data_alloc: 234881024 data_used: 9891942
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 18300928 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 223 ms_handle_reset con 0x5611195b3800 session 0x56111b64efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 223 handle_osd_map epochs [223,224], i have 223, src has [1,224]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 18276352 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 224 ms_handle_reset con 0x56111aac9000 session 0x56111b51d500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 224 heartbeat osd_stat(store_statfs(0x4fa44f000/0x0/0x4ffc00000, data 0x19420f2/0x19b6000, compress 0x0/0x0/0x0, omap 0x29157, meta 0x3d46ea9), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 224 ms_handle_reset con 0x56111c912000 session 0x56111990fdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 224 ms_handle_reset con 0x56111b4a7800 session 0x56111b55fdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 224 heartbeat osd_stat(store_statfs(0x4fa44f000/0x0/0x4ffc00000, data 0x19420f2/0x19b6000, compress 0x0/0x0/0x0, omap 0x29157, meta 0x3d46ea9), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 115482624 unmapped: 18243584 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 224 ms_handle_reset con 0x5611195b3800 session 0x56111b39c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 224 ms_handle_reset con 0x56111aac9000 session 0x56111b51a8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 225 ms_handle_reset con 0x5611195b2800 session 0x56111b39cfc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 114917376 unmapped: 18808832 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 225 heartbeat osd_stat(store_statfs(0x4fa4d3000/0x0/0x4ffc00000, data 0x1942090/0x19b5000, compress 0x0/0x0/0x0, omap 0x2946f, meta 0x3d46b91), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 225 ms_handle_reset con 0x56111b4a6400 session 0x56111a5da700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 225 ms_handle_reset con 0x5611195b2800 session 0x56111b51c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 18915328 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1622726 data_alloc: 234881024 data_used: 9892511
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 225 ms_handle_reset con 0x5611195b3800 session 0x56111a52aa80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 18915328 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 225 ms_handle_reset con 0x56111aac9000 session 0x56111ab0b340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.367106438s of 10.796997070s, submitted: 108
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 225 handle_osd_map epochs [225,226], i have 226, src has [1,226]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 19628032 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 226 ms_handle_reset con 0x56111c913800 session 0x56111b55e380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 226 ms_handle_reset con 0x56111b4a7800 session 0x561119832e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 226 ms_handle_reset con 0x5611197df800 session 0x56111b64ea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 226 ms_handle_reset con 0x5611197dfc00 session 0x56111b55ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 226 ms_handle_reset con 0x5611195b3800 session 0x56111b594a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 23937024 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 227 heartbeat osd_stat(store_statfs(0x4faa54000/0x0/0x4ffc00000, data 0x13bea75/0x1436000, compress 0x0/0x0/0x0, omap 0x29f57, meta 0x3d460a9), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 227 ms_handle_reset con 0x56111aac9000 session 0x56111b51da40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109780992 unmapped: 23945216 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 228 ms_handle_reset con 0x56111c913800 session 0x56111a5db880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 228 ms_handle_reset con 0x56111c913800 session 0x56111a5db180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 228 ms_handle_reset con 0x5611195b2800 session 0x561118d88c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 228 handle_osd_map epochs [228,229], i have 228, src has [1,229]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109510656 unmapped: 24215552 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514356 data_alloc: 218103808 data_used: 4678932
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109510656 unmapped: 24215552 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 229 ms_handle_reset con 0x5611195b3800 session 0x56111ab0b180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 229 heartbeat osd_stat(store_statfs(0x4fb755000/0x0/0x4ffc00000, data 0x5e037b/0x733000, compress 0x0/0x0/0x0, omap 0x2ace7, meta 0x3d45319), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 229 ms_handle_reset con 0x5611197df800 session 0x561119511340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 24207360 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 229 ms_handle_reset con 0x5611197dfc00 session 0x561118d88380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 229 ms_handle_reset con 0x5611195b2800 session 0x56111b7b0000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 24207360 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 229 ms_handle_reset con 0x5611197df800 session 0x56111b7b08c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 229 ms_handle_reset con 0x5611195b3800 session 0x561118d89180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 229 ms_handle_reset con 0x56111aac9000 session 0x561119510c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 24199168 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 229 ms_handle_reset con 0x56111a4ed400 session 0x5611195496c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 229 handle_osd_map epochs [229,230], i have 229, src has [1,230]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 230 ms_handle_reset con 0x5611197df400 session 0x56111a52ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 230 ms_handle_reset con 0x5611195b2800 session 0x56111b39ce00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 230 ms_handle_reset con 0x5611195b3800 session 0x56111b55e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 230 ms_handle_reset con 0x56111c913800 session 0x56111b51ca80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 230 handle_osd_map epochs [230,231], i have 230, src has [1,231]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109535232 unmapped: 24190976 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522100 data_alloc: 218103808 data_used: 4680401
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 231 heartbeat osd_stat(store_statfs(0x4fb74f000/0x0/0x4ffc00000, data 0x5e3a88/0x739000, compress 0x0/0x0/0x0, omap 0x2b2a3, meta 0x3d44d5d), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109535232 unmapped: 24190976 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.803082466s of 10.179769516s, submitted: 150
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 232 ms_handle_reset con 0x5611197df800 session 0x56111ab3efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 232 ms_handle_reset con 0x5611195b2800 session 0x5611187a7c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109535232 unmapped: 24190976 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 232 handle_osd_map epochs [232,233], i have 232, src has [1,233]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 233 ms_handle_reset con 0x5611195b3800 session 0x561118b87dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109559808 unmapped: 24166400 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 233 ms_handle_reset con 0x5611197df400 session 0x56111b195dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109559808 unmapped: 24166400 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 233 ms_handle_reset con 0x56111c913800 session 0x56111b64f340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 234 ms_handle_reset con 0x56111aac9000 session 0x56111b64f6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109576192 unmapped: 24150016 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1524946 data_alloc: 218103808 data_used: 4678916
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 234 heartbeat osd_stat(store_statfs(0x4fb74f000/0x0/0x4ffc00000, data 0x5e879d/0x73b000, compress 0x0/0x0/0x0, omap 0x2ba8c, meta 0x3d44574), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109576192 unmapped: 24150016 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 234 ms_handle_reset con 0x5611195b2800 session 0x56111b64e000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109576192 unmapped: 24150016 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1801.1 total, 600.0 interval#012Cumulative writes: 9928 writes, 41K keys, 9928 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 9928 writes, 2654 syncs, 3.74 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4116 writes, 16K keys, 4116 commit groups, 1.0 writes per commit group, ingest: 8.76 MB, 0.01 MB/s#012Interval WAL: 4116 writes, 1700 syncs, 2.42 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 24141824 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 24141824 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 235 heartbeat osd_stat(store_statfs(0x4fb74a000/0x0/0x4ffc00000, data 0x5ea371/0x73e000, compress 0x0/0x0/0x0, omap 0x2be13, meta 0x3d441ed), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x5611197df400 session 0x56111b64f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x5611195b3800 session 0x56111b7b0540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x56111aac9000 session 0x56111a52b6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 24141824 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532359 data_alloc: 218103808 data_used: 4679529
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 24141824 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x56111c913800 session 0x56111b55ea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.897014618s of 10.060780525s, submitted: 70
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 24141824 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x5611195b2800 session 0x56111b195a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x5611197df400 session 0x56111b55f340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 24141824 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x56111aac9000 session 0x56111b1941c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x56111ad4c400 session 0x56111a52a540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x56111c997800 session 0x56111981b180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x5611195b2800 session 0x56111b51dc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x5611197df400 session 0x561119510700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x56111aac9000 session 0x561119832c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x56111ad4c400 session 0x561118ca61c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x56111c997400 session 0x56111981bdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 236 handle_osd_map epochs [236,237], i have 237, src has [1,237]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 237 ms_handle_reset con 0x5611195b3800 session 0x56111a5dafc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110379008 unmapped: 27025408 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 237 heartbeat osd_stat(store_statfs(0x4faf80000/0x0/0x4ffc00000, data 0xdb3a50/0xf0a000, compress 0x0/0x0/0x0, omap 0x2c551, meta 0x3d43aaf), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 237 handle_osd_map epochs [237,238], i have 237, src has [1,238]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110379008 unmapped: 27025408 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1591955 data_alloc: 218103808 data_used: 4679829
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110379008 unmapped: 27025408 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110379008 unmapped: 27025408 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110379008 unmapped: 27025408 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110395392 unmapped: 27009024 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 239 ms_handle_reset con 0x5611195b2800 session 0x56111ab3ea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 26992640 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 239 heartbeat osd_stat(store_statfs(0x4faf7b000/0x0/0x4ffc00000, data 0xdb54eb/0xf0d000, compress 0x0/0x0/0x0, omap 0x2c72f, meta 0x3d438d1), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1596400 data_alloc: 218103808 data_used: 4679829
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 239 ms_handle_reset con 0x5611197df400 session 0x56111b595880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 239 heartbeat osd_stat(store_statfs(0x4faf79000/0x0/0x4ffc00000, data 0xdb6f8d/0xf11000, compress 0x0/0x0/0x0, omap 0x2caad, meta 0x3d43553), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 26992640 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 24870912 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.391992569s of 10.703784943s, submitted: 71
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 239 ms_handle_reset con 0x56111c996c00 session 0x56111ae59180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 24600576 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 240 ms_handle_reset con 0x56111c996000 session 0x56111ae58700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 24510464 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 240 handle_osd_map epochs [240,241], i have 240, src has [1,241]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x56111cfd1000 session 0x56111a5db500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x56111c997000 session 0x56111b51b880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 24510464 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 heartbeat osd_stat(store_statfs(0x4faf72000/0x0/0x4ffc00000, data 0xdba727/0xf18000, compress 0x0/0x0/0x0, omap 0x2d305, meta 0x3d42cfb), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1645503 data_alloc: 234881024 data_used: 11610261
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611195b2800 session 0x56111b51a1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611195b3800 session 0x56111a5db340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611197df400 session 0x561118ca6a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112918528 unmapped: 24485888 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611197df400 session 0x56111b51a000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611195b2800 session 0x56111b5941c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611195b3800 session 0x561119549a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x56111cfd1000 session 0x56111b594000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x56111c996000 session 0x56111990ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 20439040 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611195b2800 session 0x56111990fc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611195b3800 session 0x561119832380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611197df400 session 0x56111b7b1c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x56111c996000 session 0x56111b55f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x56111cfd1000 session 0x56111990e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x56111c997000 session 0x56111a5dac40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 21520384 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x5611195b3800 session 0x56111b594540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 heartbeat osd_stat(store_statfs(0x4fa948000/0x0/0x4ffc00000, data 0x13e4799/0x1544000, compress 0x0/0x0/0x0, omap 0x2d683, meta 0x3d4297d), peers [0,1] op hist [2])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x5611197df400 session 0x56111b51b340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 21504000 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x5611195b2800 session 0x56111990e000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x56111c996000 session 0x56111ab0ac40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 21504000 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1699566 data_alloc: 234881024 data_used: 11622549
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x56111c996000 session 0x5611195116c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x5611195b2800 session 0x56111b39c540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 16588800 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x5611195b3800 session 0x56111b595340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x5611197df400 session 0x56111ac5b880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x56111c997000 session 0x56111a52afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x5611195b2800 session 0x56111b51a540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 15155200 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x5611195b3800 session 0x56111990f180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 heartbeat osd_stat(store_statfs(0x4fa241000/0x0/0x4ffc00000, data 0x1adb38a/0x1c3c000, compress 0x0/0x0/0x0, omap 0x2dc52, meta 0x3d423ae), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 14442496 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x56111cfd1000 session 0x56111b195340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.318803787s of 10.851205826s, submitted: 166
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x56111c996c00 session 0x56111990e8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 heartbeat osd_stat(store_statfs(0x4fa242000/0x0/0x4ffc00000, data 0x1adb37a/0x1c3b000, compress 0x0/0x0/0x0, omap 0x2dc52, meta 0x3d423ae), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 128901120 unmapped: 8503296 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 242 handle_osd_map epochs [242,243], i have 243, src has [1,243]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x56111b79e000 session 0x56111990f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x5611195b2800 session 0x56111b55ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 heartbeat osd_stat(store_statfs(0x4fa242000/0x0/0x4ffc00000, data 0x1adb37a/0x1c3b000, compress 0x0/0x0/0x0, omap 0x2dc52, meta 0x3d423ae), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126451712 unmapped: 10952704 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1791894 data_alloc: 234881024 data_used: 19581589
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126451712 unmapped: 10952704 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126451712 unmapped: 10952704 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126451712 unmapped: 10952704 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x5611195b3800 session 0x56111b55f6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126451712 unmapped: 10952704 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 heartbeat osd_stat(store_statfs(0x4fa24d000/0x0/0x4ffc00000, data 0x1adcf40/0x1c3d000, compress 0x0/0x0/0x0, omap 0x2e575, meta 0x3d41a8b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126451712 unmapped: 10952704 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1793684 data_alloc: 234881024 data_used: 19589781
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x56111c996c00 session 0x56111981a1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 heartbeat osd_stat(store_statfs(0x4fa24d000/0x0/0x4ffc00000, data 0x1adcf40/0x1c3d000, compress 0x0/0x0/0x0, omap 0x2e575, meta 0x3d41a8b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x56111c557000 session 0x5611187a61c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x56111cfd1000 session 0x56111a5db6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x5611195b2800 session 0x56111b64ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126607360 unmapped: 10797056 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x5611195b3800 session 0x56111b7b0000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 heartbeat osd_stat(store_statfs(0x4fa24d000/0x0/0x4ffc00000, data 0x1adcf40/0x1c3d000, compress 0x0/0x0/0x0, omap 0x2e575, meta 0x3d41a8b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126623744 unmapped: 10780672 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 heartbeat osd_stat(store_statfs(0x4f9ad7000/0x0/0x4ffc00000, data 0x224d15a/0x23b5000, compress 0x0/0x0/0x0, omap 0x2e9f9, meta 0x3d41607), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x56111c557000 session 0x56111ae59c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x56111c996c00 session 0x5611187a6380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x56111c556c00 session 0x56111981b6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 132186112 unmapped: 5218304 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 244 ms_handle_reset con 0x5611195b3800 session 0x561119510c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.004192352s of 10.381390572s, submitted: 160
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 244 ms_handle_reset con 0x56111c996c00 session 0x56111b51d500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 244 ms_handle_reset con 0x56111c557000 session 0x561118b87dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 130138112 unmapped: 7266304 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 244 ms_handle_reset con 0x5611195b2800 session 0x561119832700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 245 heartbeat osd_stat(store_statfs(0x4f98d3000/0x0/0x4ffc00000, data 0x244fcc0/0x25b9000, compress 0x0/0x0/0x0, omap 0x2ec0e, meta 0x3d413f2), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 7184384 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1896278 data_alloc: 234881024 data_used: 21555861
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 246 ms_handle_reset con 0x56111c556800 session 0x5611187a7880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 246 ms_handle_reset con 0x5611195b2800 session 0x56111b7b08c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 246 heartbeat osd_stat(store_statfs(0x4f98c9000/0x0/0x4ffc00000, data 0x24532f7/0x25bf000, compress 0x0/0x0/0x0, omap 0x2f2a8, meta 0x3d40d58), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 129769472 unmapped: 7634944 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 246 ms_handle_reset con 0x56111a4ec400 session 0x561119833a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 246 heartbeat osd_stat(store_statfs(0x4f98c9000/0x0/0x4ffc00000, data 0x24532f7/0x25bf000, compress 0x0/0x0/0x0, omap 0x2f2a8, meta 0x3d40d58), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 246 ms_handle_reset con 0x56111b193400 session 0x561118ca6380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 246 ms_handle_reset con 0x56111b7af400 session 0x56111ae59dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 129875968 unmapped: 7528448 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 247 ms_handle_reset con 0x56111a592400 session 0x56111b51bc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 247 ms_handle_reset con 0x5611195b2800 session 0x56111b51ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 247 heartbeat osd_stat(store_statfs(0x4f98cb000/0x0/0x4ffc00000, data 0x2454e11/0x25bf000, compress 0x0/0x0/0x0, omap 0x2f610, meta 0x3d409f0), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 129900544 unmapped: 7503872 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 247 handle_osd_map epochs [247,248], i have 247, src has [1,248]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 248 ms_handle_reset con 0x561118cfd400 session 0x56111a52a000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 248 ms_handle_reset con 0x56111b193400 session 0x56111b39ca80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 130072576 unmapped: 7331840 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 248 ms_handle_reset con 0x56111b7aec00 session 0x56111a52ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 248 ms_handle_reset con 0x56111a4ec400 session 0x56111b51aa80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 130072576 unmapped: 7331840 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1899906 data_alloc: 234881024 data_used: 21556462
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 248 ms_handle_reset con 0x561118cfd400 session 0x56111990f340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 248 heartbeat osd_stat(store_statfs(0x4f98a4000/0x0/0x4ffc00000, data 0x2478e2a/0x25e3000, compress 0x0/0x0/0x0, omap 0x2f7b8, meta 0x3d40848), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 130088960 unmapped: 7315456 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 248 handle_osd_map epochs [248,249], i have 249, src has [1,249]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 130088960 unmapped: 7315456 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 249 ms_handle_reset con 0x5611195b2800 session 0x56111b55e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 249 ms_handle_reset con 0x56111b7aec00 session 0x56111b595a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 249 ms_handle_reset con 0x56111b193400 session 0x56111981b340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 7290880 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 249 heartbeat osd_stat(store_statfs(0x4f98a4000/0x0/0x4ffc00000, data 0x247aa1a/0x25e6000, compress 0x0/0x0/0x0, omap 0x2faa3, meta 0x3d4055d), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 249 ms_handle_reset con 0x56111b7af400 session 0x56111a5db880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 7290880 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 249 handle_osd_map epochs [249,250], i have 249, src has [1,250]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.307342529s of 11.399452209s, submitted: 111
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131178496 unmapped: 6225920 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1904215 data_alloc: 234881024 data_used: 21565239
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131178496 unmapped: 6225920 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 250 ms_handle_reset con 0x5611195b2800 session 0x56111ae58e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 6201344 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 250 ms_handle_reset con 0x561118cfd400 session 0x56111b7b16c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 6201344 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 251 heartbeat osd_stat(store_statfs(0x4f989a000/0x0/0x4ffc00000, data 0x2482c9b/0x25f0000, compress 0x0/0x0/0x0, omap 0x2fc4b, meta 0x3d403b5), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 6201344 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 251 ms_handle_reset con 0x56111b7aec00 session 0x56111b51afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 251 handle_osd_map epochs [251,252], i have 251, src has [1,252]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 252 ms_handle_reset con 0x56111b193400 session 0x5611187a6700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131235840 unmapped: 6168576 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1911123 data_alloc: 234881024 data_used: 21565239
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 253 ms_handle_reset con 0x5611197df400 session 0x56111b64e8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 253 ms_handle_reset con 0x56111c996000 session 0x56111ac5ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 254 ms_handle_reset con 0x56111a592000 session 0x56111ab0bdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 127754240 unmapped: 9650176 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 254 ms_handle_reset con 0x561118cfd400 session 0x56111b7b0c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 127770624 unmapped: 9633792 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 254 ms_handle_reset con 0x56111b193400 session 0x56111b7b1880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 254 handle_osd_map epochs [254,255], i have 254, src has [1,255]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 255 ms_handle_reset con 0x56111a593c00 session 0x56111b64efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 127279104 unmapped: 10125312 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 255 heartbeat osd_stat(store_statfs(0x4fa87e000/0x0/0x4ffc00000, data 0x149bf34/0x160e000, compress 0x0/0x0/0x0, omap 0x301dd, meta 0x3d3fe23), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 255 ms_handle_reset con 0x56111aac9000 session 0x56111ae58c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 255 ms_handle_reset con 0x56111ad4c400 session 0x561119548a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 255 heartbeat osd_stat(store_statfs(0x4fa877000/0x0/0x4ffc00000, data 0x149eafc/0x1612000, compress 0x0/0x0/0x0, omap 0x30385, meta 0x3d3fc7b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 127279104 unmapped: 10125312 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 255 ms_handle_reset con 0x56111b7aec00 session 0x56111b64ea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 256 ms_handle_reset con 0x56111a592000 session 0x561118d89180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 256 ms_handle_reset con 0x5611195b2800 session 0x56111ae588c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 256 ms_handle_reset con 0x561118cfd400 session 0x56111b51a380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 256 ms_handle_reset con 0x56111a592000 session 0x56111b64f6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 17522688 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1636915 data_alloc: 218103808 data_used: 4681898
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.910606384s of 11.116304398s, submitted: 128
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 256 ms_handle_reset con 0x56111ad4c400 session 0x56111b595a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 17522688 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 17522688 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 257 ms_handle_reset con 0x56111b7aec00 session 0x56111a52ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 258 ms_handle_reset con 0x56111a593c00 session 0x56111b39cfc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 17522688 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 258 ms_handle_reset con 0x56111aac9000 session 0x56111ac5ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 258 heartbeat osd_stat(store_statfs(0x4fb6ff000/0x0/0x4ffc00000, data 0x612324/0x789000, compress 0x0/0x0/0x0, omap 0x30863, meta 0x3d3f79d), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 17522688 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119947264 unmapped: 17457152 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1646575 data_alloc: 218103808 data_used: 4691126
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 17391616 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 260 ms_handle_reset con 0x56111a593c00 session 0x561119548a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 260 ms_handle_reset con 0x56111a592000 session 0x56111ae58380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 17326080 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 17317888 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 262 ms_handle_reset con 0x561118cfd400 session 0x56111a5da700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 262 ms_handle_reset con 0x56111ad4c400 session 0x56111a52aa80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119660544 unmapped: 17743872 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 262 heartbeat osd_stat(store_statfs(0x4fb6f6000/0x0/0x4ffc00000, data 0x61928a/0x792000, compress 0x0/0x0/0x0, omap 0x30d9b, meta 0x3d3f265), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 263 ms_handle_reset con 0x56111b7aec00 session 0x56111b595500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 17735680 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 263 ms_handle_reset con 0x561118cfd400 session 0x56111a5dba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1656181 data_alloc: 218103808 data_used: 4691584
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 17735680 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.941582203s of 10.156404495s, submitted: 118
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 263 heartbeat osd_stat(store_statfs(0x4fb6f4000/0x0/0x4ffc00000, data 0x61b353/0x796000, compress 0x0/0x0/0x0, omap 0x30f43, meta 0x3d3f0bd), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 17735680 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 17735680 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 17727488 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 264 ms_handle_reset con 0x56111a592000 session 0x56111ae58c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 264 ms_handle_reset con 0x56111a593c00 session 0x56111ac5b500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 264 ms_handle_reset con 0x56111aac9000 session 0x5611195496c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 264 handle_osd_map epochs [264,265], i have 264, src has [1,265]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 19423232 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1661431 data_alloc: 218103808 data_used: 4691584
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 19423232 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 265 ms_handle_reset con 0x561118cfd400 session 0x561118ca6540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 19415040 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 265 heartbeat osd_stat(store_statfs(0x4fb6ef000/0x0/0x4ffc00000, data 0x61e768/0x79d000, compress 0x0/0x0/0x0, omap 0x3067f, meta 0x3d3f981), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 19415040 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 19415040 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 19415040 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1665239 data_alloc: 218103808 data_used: 4691584
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 265 ms_handle_reset con 0x56111a593c00 session 0x56111b7b1180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117997568 unmapped: 19406848 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.216315269s of 10.641911507s, submitted: 107
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 266 ms_handle_reset con 0x56111b7aec00 session 0x561119832e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 19398656 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 266 ms_handle_reset con 0x56111b193400 session 0x56111b55e8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 266 heartbeat osd_stat(store_statfs(0x4fb6e9000/0x0/0x4ffc00000, data 0x62039e/0x7a1000, compress 0x0/0x0/0x0, omap 0x30817, meta 0x3d3f7e9), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 267 ms_handle_reset con 0x56111ad4c400 session 0x56111b51d6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 267 ms_handle_reset con 0x56111a592000 session 0x56111b55e700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 267 ms_handle_reset con 0x56111c996000 session 0x56111990e380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 19341312 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118071296 unmapped: 19333120 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 268 ms_handle_reset con 0x561118cfd400 session 0x56111981a540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 268 heartbeat osd_stat(store_statfs(0x4fb6e3000/0x0/0x4ffc00000, data 0x621f72/0x7a4000, compress 0x0/0x0/0x0, omap 0x3066f, meta 0x3d3f991), peers [0,1] op hist [1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 268 ms_handle_reset con 0x56111a593c00 session 0x56111b5956c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118112256 unmapped: 19292160 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673797 data_alloc: 218103808 data_used: 4691584
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 268 heartbeat osd_stat(store_statfs(0x4fb6e5000/0x0/0x4ffc00000, data 0x623b60/0x7a5000, compress 0x0/0x0/0x0, omap 0x2f9d7, meta 0x3d40629), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118112256 unmapped: 19292160 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 268 ms_handle_reset con 0x56111b193400 session 0x56111981b500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 16556032 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 268 ms_handle_reset con 0x561118cfd400 session 0x56111b51ae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 268 ms_handle_reset con 0x56111a592000 session 0x56111a5da000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 27934720 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 268 ms_handle_reset con 0x56111c996000 session 0x56111a52ae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 27901952 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 269 ms_handle_reset con 0x56111a593c00 session 0x56111ab0b6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 269 ms_handle_reset con 0x56111b334000 session 0x56111b39c700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118030336 unmapped: 27844608 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1732110 data_alloc: 218103808 data_used: 4692253
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 269 ms_handle_reset con 0x56111b334000 session 0x56111b55fa40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 270 ms_handle_reset con 0x56111b7aec00 session 0x56111724ddc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 270 ms_handle_reset con 0x561118cfd400 session 0x56111b7b0700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 270 heartbeat osd_stat(store_statfs(0x4faf0c000/0x0/0x4ffc00000, data 0xdfa728/0xf7e000, compress 0x0/0x0/0x0, omap 0x2ff96, meta 0x3d4006a), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 270 ms_handle_reset con 0x56111a592000 session 0x56111b51c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118030336 unmapped: 27844608 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 271 ms_handle_reset con 0x56111a593c00 session 0x56111a5dae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 271 ms_handle_reset con 0x561118cfd400 session 0x56111b55e000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 27836416 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.808876991s of 11.294489861s, submitted: 125
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 271 ms_handle_reset con 0x56111a592000 session 0x56111ae58c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 27836416 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 271 heartbeat osd_stat(store_statfs(0x4faf00000/0x0/0x4ffc00000, data 0xdfdfc4/0xf88000, compress 0x0/0x0/0x0, omap 0x303ca, meta 0x3d3fc36), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 27836416 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 271 ms_handle_reset con 0x56111b7aec00 session 0x56111a52a1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 271 ms_handle_reset con 0x56111c996000 session 0x56111ab3ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 271 ms_handle_reset con 0x56111b334400 session 0x561119511880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x561118cfd400 session 0x561118ca6e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111c996000 session 0x56111ac5ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111a592000 session 0x56111b39cc40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111b335000 session 0x56111b51d340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111b334c00 session 0x56111b39ce00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111b7aec00 session 0x56111b55f6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111b334000 session 0x56111a5da700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x561118cfd400 session 0x56111981b500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111a592000 session 0x56111981a540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111b335000 session 0x561119832e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111c996000 session 0x56111b55f340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 26796032 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1789065 data_alloc: 218103808 data_used: 4692968
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 273 ms_handle_reset con 0x56111a592000 session 0x56111b64f500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 26796032 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 273 ms_handle_reset con 0x56111b334000 session 0x56111b51c540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 273 ms_handle_reset con 0x56111b334800 session 0x561119548a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 273 ms_handle_reset con 0x561118cfd400 session 0x561118ca6540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 273 ms_handle_reset con 0x561118cfd400 session 0x56111a52afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 26796032 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 273 ms_handle_reset con 0x56111a592000 session 0x56111b39c540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 26779648 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 273 heartbeat osd_stat(store_statfs(0x4fa9ee000/0x0/0x4ffc00000, data 0x130e3be/0x149c000, compress 0x0/0x0/0x0, omap 0x305e4, meta 0x3d3fa1c), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 273 handle_osd_map epochs [274,274], i have 274, src has [1,274]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 274 ms_handle_reset con 0x56111b334000 session 0x56111a52aa80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 274 ms_handle_reset con 0x56111b334800 session 0x56111b7b1340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 26779648 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1794651 data_alloc: 218103808 data_used: 4694833
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 275 ms_handle_reset con 0x56111c996000 session 0x56111b51d180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 25731072 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 275 ms_handle_reset con 0x561118cfd400 session 0x56111ae58380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 25731072 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 275 heartbeat osd_stat(store_statfs(0x4fa9e9000/0x0/0x4ffc00000, data 0x130fe8c/0x14a1000, compress 0x0/0x0/0x0, omap 0x307c5, meta 0x3d3f83b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 26140672 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 275 handle_osd_map epochs [275,276], i have 275, src has [1,276]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.466961861s of 10.023332596s, submitted: 90
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119865344 unmapped: 26009600 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119865344 unmapped: 26009600 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 276 ms_handle_reset con 0x56111b7aec00 session 0x56111b39d180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1833274 data_alloc: 234881024 data_used: 9608086
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 277 ms_handle_reset con 0x56111b335400 session 0x56111724ddc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 277 ms_handle_reset con 0x56111a592000 session 0x56111b64ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119873536 unmapped: 26001408 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119873536 unmapped: 26001408 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 277 heartbeat osd_stat(store_statfs(0x4fa9e2000/0x0/0x4ffc00000, data 0x1313523/0x14a6000, compress 0x0/0x0/0x0, omap 0x30a2c, meta 0x3d3f5d4), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 277 ms_handle_reset con 0x56111b335800 session 0x56111b7b1c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 25976832 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 277 heartbeat osd_stat(store_statfs(0x4fa9e1000/0x0/0x4ffc00000, data 0x1313533/0x14a7000, compress 0x0/0x0/0x0, omap 0x30a2c, meta 0x3d3f5d4), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 25976832 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 25976832 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1836723 data_alloc: 234881024 data_used: 9608086
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 278 ms_handle_reset con 0x561118cfd400 session 0x56111b64ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 25976832 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 278 ms_handle_reset con 0x56111b335c00 session 0x56111ae59c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 278 ms_handle_reset con 0x56111a592000 session 0x56111b51a8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 25976832 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 278 ms_handle_reset con 0x56111b335400 session 0x56111b594700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 21643264 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 278 handle_osd_map epochs [278,279], i have 278, src has [1,279]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 278 handle_osd_map epochs [279,279], i have 279, src has [1,279]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 279 ms_handle_reset con 0x56111b7aec00 session 0x561118b87a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.723967552s of 10.265779495s, submitted: 127
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 279 heartbeat osd_stat(store_statfs(0x4fa3b1000/0x0/0x4ffc00000, data 0x193f0c1/0x1ad3000, compress 0x0/0x0/0x0, omap 0x30d74, meta 0x3d3f28c), peers [0,1] op hist [0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 21372928 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 279 heartbeat osd_stat(store_statfs(0x4fa264000/0x0/0x4ffc00000, data 0x1a83cb1/0x1c18000, compress 0x0/0x0/0x0, omap 0x30dfc, meta 0x3d3f204), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 279 ms_handle_reset con 0x56111b335800 session 0x561118ca68c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 19808256 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1891109 data_alloc: 234881024 data_used: 10620151
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 279 ms_handle_reset con 0x56111a592000 session 0x5611195116c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 279 ms_handle_reset con 0x561118cfd400 session 0x56111b39c540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 21446656 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124887040 unmapped: 20987904 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 280 ms_handle_reset con 0x56111b335c00 session 0x561118b876c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 281 ms_handle_reset con 0x56111b7ad400 session 0x56111b7b0a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 20971520 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 282 ms_handle_reset con 0x56111b335400 session 0x56111a52afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 282 heartbeat osd_stat(store_statfs(0x4fa351000/0x0/0x4ffc00000, data 0x19a3a52/0x1b39000, compress 0x0/0x0/0x0, omap 0x311be, meta 0x3d3ee42), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 282 ms_handle_reset con 0x561118cfd400 session 0x56111990efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 20971520 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 20971520 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 282 handle_osd_map epochs [282,283], i have 282, src has [1,283]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1897339 data_alloc: 234881024 data_used: 10706766
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 20971520 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 283 ms_handle_reset con 0x56111a592000 session 0x56111981a540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 125034496 unmapped: 20840448 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 20701184 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 20701184 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 284 heartbeat osd_stat(store_statfs(0x4fa326000/0x0/0x4ffc00000, data 0x19cb123/0x1b64000, compress 0x0/0x0/0x0, omap 0x313d8, meta 0x3d3ec28), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 284 ms_handle_reset con 0x56111b335800 session 0x56111990ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.474483013s of 11.017654419s, submitted: 121
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124911616 unmapped: 20963328 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 285 ms_handle_reset con 0x56111b335c00 session 0x56111a5dba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1905927 data_alloc: 234881024 data_used: 10706766
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 125960192 unmapped: 19914752 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 125960192 unmapped: 19914752 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 286 heartbeat osd_stat(store_statfs(0x4fa31f000/0x0/0x4ffc00000, data 0x19ce75c/0x1b69000, compress 0x0/0x0/0x0, omap 0x315f2, meta 0x3d3ea0e), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 19849216 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 19849216 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 286 ms_handle_reset con 0x561118cfd400 session 0x56111b39c700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 19849216 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1911086 data_alloc: 234881024 data_used: 10706766
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 19849216 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 286 ms_handle_reset con 0x56111b335400 session 0x56111ac5ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 286 handle_osd_map epochs [286,287], i have 287, src has [1,287]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 287 ms_handle_reset con 0x56111a592000 session 0x56111b51a700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 287 heartbeat osd_stat(store_statfs(0x4fa304000/0x0/0x4ffc00000, data 0x19e9324/0x1b86000, compress 0x0/0x0/0x0, omap 0x3179a, meta 0x3d3e866), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126033920 unmapped: 19841024 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 287 ms_handle_reset con 0x56111b7af800 session 0x56111ae58000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126033920 unmapped: 19841024 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 287 ms_handle_reset con 0x56111aac9c00 session 0x56111b55fdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 288 ms_handle_reset con 0x56111aac9400 session 0x56111b7b01c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 288 ms_handle_reset con 0x56111b335800 session 0x56111ae58c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 21389312 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 288 ms_handle_reset con 0x56111a592000 session 0x56111a52ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 288 ms_handle_reset con 0x561118cfd400 session 0x56111b7b1dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 288 ms_handle_reset con 0x56111b335400 session 0x561119548a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.751646996s of 10.096765518s, submitted: 118
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 289 ms_handle_reset con 0x561118cfd400 session 0x56111b39ddc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 289 ms_handle_reset con 0x56111aac9400 session 0x56111ae59340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 37920768 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x56111b335800 session 0x56111990e700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x56111a592000 session 0x561119511c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2105559 data_alloc: 234881024 data_used: 17098094
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133496832 unmapped: 37863424 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x56111b7af800 session 0x56111b64e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x56111a592000 session 0x56111b7b0fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x561118cfd400 session 0x561119511880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x56111aac9400 session 0x56111b51bdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 290 heartbeat osd_stat(store_statfs(0x4f8895000/0x0/0x4ffc00000, data 0x3450a38/0x35f2000, compress 0x0/0x0/0x0, omap 0x31be4, meta 0x3d3e41c), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x56111b335800 session 0x56111b55e000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133234688 unmapped: 38125568 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x56111aac8000 session 0x56111b64fa40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x56111af3c400 session 0x56111b64e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133234688 unmapped: 38125568 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 38084608 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131006464 unmapped: 40353792 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 291 ms_handle_reset con 0x561118cfd400 session 0x56111b595a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2104000 data_alloc: 234881024 data_used: 17099004
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 40337408 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111a592000 session 0x56111b7b0700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 292 heartbeat osd_stat(store_statfs(0x4f8894000/0x0/0x4ffc00000, data 0x345266e/0x35f6000, compress 0x0/0x0/0x0, omap 0x32024, meta 0x3d3dfdc), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111aac9400 session 0x561119510380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111b335800 session 0x56111b51c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x561118cfd400 session 0x56111a52b880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111a592000 session 0x56111b7b0c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 40337408 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111aac9400 session 0x56111b39cfc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111af3d000 session 0x561119511c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111af3c400 session 0x56111b55f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x561118cfd400 session 0x5611187a6000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111a592000 session 0x56111b51cfc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111aac9400 session 0x56111ae59a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111af3c400 session 0x561118ca6fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 40337408 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 293 ms_handle_reset con 0x56111af3d000 session 0x56111b64efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 293 ms_handle_reset con 0x56111af3d400 session 0x561118b87a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 40337408 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 40337408 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 293 ms_handle_reset con 0x561118cfd400 session 0x56111b39c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 293 heartbeat osd_stat(store_statfs(0x4f888b000/0x0/0x4ffc00000, data 0x3455ed0/0x35fe000, compress 0x0/0x0/0x0, omap 0x32478, meta 0x3d3db88), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2116506 data_alloc: 234881024 data_used: 17099589
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.974984169s of 11.310605049s, submitted: 105
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 40337408 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 294 ms_handle_reset con 0x56111a592000 session 0x56111b55ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131350528 unmapped: 40009728 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 294 ms_handle_reset con 0x56111aac9400 session 0x56111b51a8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131612672 unmapped: 39747584 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 37732352 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 294 ms_handle_reset con 0x56111af3c400 session 0x561119511340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 295 ms_handle_reset con 0x561118cfd400 session 0x5611187a7180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 27664384 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 295 ms_handle_reset con 0x56111a592000 session 0x56111b39c1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2201724 data_alloc: 251658240 data_used: 29287354
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 296 ms_handle_reset con 0x56111aac9400 session 0x56111990ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 296 ms_handle_reset con 0x56111af3d400 session 0x56111a5da000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 27648000 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 296 heartbeat osd_stat(store_statfs(0x4f885f000/0x0/0x4ffc00000, data 0x347efda/0x362b000, compress 0x0/0x0/0x0, omap 0x31fcc, meta 0x3d3e034), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143745024 unmapped: 27615232 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 297 ms_handle_reset con 0x56111aedf800 session 0x56111a52afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 27557888 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 297 ms_handle_reset con 0x561118cfd400 session 0x56111b64ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 297 ms_handle_reset con 0x56111a592000 session 0x56111a52a1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 27557888 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 297 ms_handle_reset con 0x56111aac9400 session 0x56111f8628c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 297 heartbeat osd_stat(store_statfs(0x4f885a000/0x0/0x4ffc00000, data 0x3480b68/0x362d000, compress 0x0/0x0/0x0, omap 0x32174, meta 0x3d3de8c), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 27557888 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 297 ms_handle_reset con 0x56111af3d400 session 0x561118d896c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 297 ms_handle_reset con 0x56111a4e8800 session 0x56111ab0b6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2211778 data_alloc: 251658240 data_used: 29279747
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 144859136 unmapped: 26501120 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.092938423s of 11.034484863s, submitted: 80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 144891904 unmapped: 26468352 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 299 ms_handle_reset con 0x56111a4e8800 session 0x56111f863340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152649728 unmapped: 18710528 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 299 ms_handle_reset con 0x561118cfd400 session 0x56111f863180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 155697152 unmapped: 15663104 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 299 ms_handle_reset con 0x56111a592000 session 0x56111b51da40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 300 ms_handle_reset con 0x56111aac9400 session 0x561118b86c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152633344 unmapped: 18726912 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 300 heartbeat osd_stat(store_statfs(0x4f7e14000/0x0/0x4ffc00000, data 0x3ec71f5/0x4078000, compress 0x0/0x0/0x0, omap 0x3290b, meta 0x3d3d6f5), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 300 ms_handle_reset con 0x56111af3d400 session 0x56111a52bdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 300 ms_handle_reset con 0x561118cfd400 session 0x561118d88c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 300 ms_handle_reset con 0x56111a4e8800 session 0x561118d88540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2325716 data_alloc: 251658240 data_used: 36201747
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152690688 unmapped: 18669568 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 300 ms_handle_reset con 0x56111a592000 session 0x56111ab0bc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 300 ms_handle_reset con 0x56111aac9400 session 0x56111ae59880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 300 ms_handle_reset con 0x5611197c0800 session 0x56111b39cc40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152854528 unmapped: 18505728 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152862720 unmapped: 18497536 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 300 ms_handle_reset con 0x561118cfd400 session 0x56111f863880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 301 ms_handle_reset con 0x56111a4e8800 session 0x56111a5dae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153157632 unmapped: 18202624 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 301 ms_handle_reset con 0x56111a592000 session 0x56111b1941c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153190400 unmapped: 18169856 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 302 ms_handle_reset con 0x56111aac9400 session 0x56111b195a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2287262 data_alloc: 251658240 data_used: 36603155
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153214976 unmapped: 18145280 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 302 ms_handle_reset con 0x56111af3c000 session 0x56111ab3ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 302 heartbeat osd_stat(store_statfs(0x4f853f000/0x0/0x4ffc00000, data 0x379a51b/0x394d000, compress 0x0/0x0/0x0, omap 0x3374b, meta 0x3d3c8b5), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 302 ms_handle_reset con 0x561118cfd400 session 0x56111990f6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153231360 unmapped: 18128896 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 302 heartbeat osd_stat(store_statfs(0x4f853f000/0x0/0x4ffc00000, data 0x379a51b/0x394d000, compress 0x0/0x0/0x0, omap 0x3395e, meta 0x3d3c6a2), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.366439819s of 10.909622192s, submitted: 157
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 302 ms_handle_reset con 0x56111a4e8800 session 0x5611198328c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153231360 unmapped: 18128896 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153231360 unmapped: 18128896 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153231360 unmapped: 18128896 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 302 heartbeat osd_stat(store_statfs(0x4f853f000/0x0/0x4ffc00000, data 0x379a51b/0x394d000, compress 0x0/0x0/0x0, omap 0x3398b, meta 0x3d3c675), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 303 ms_handle_reset con 0x56111a592000 session 0x56111b51ca80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2295211 data_alloc: 251658240 data_used: 36604353
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153255936 unmapped: 18104320 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 304 ms_handle_reset con 0x56111aac9400 session 0x56111b7b0700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153255936 unmapped: 18104320 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153272320 unmapped: 18087936 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 305 ms_handle_reset con 0x56111af3c000 session 0x56111990ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 305 heartbeat osd_stat(store_statfs(0x4f8539000/0x0/0x4ffc00000, data 0x379db6e/0x3953000, compress 0x0/0x0/0x0, omap 0x33f67, meta 0x3d3c099), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153288704 unmapped: 18071552 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 305 ms_handle_reset con 0x561118cfd400 session 0x56111a52ae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153296896 unmapped: 18063360 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2300551 data_alloc: 251658240 data_used: 36605210
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153305088 unmapped: 18055168 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 306 ms_handle_reset con 0x56111a4e8800 session 0x561118d88380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f8536000/0x0/0x4ffc00000, data 0x379f75e/0x3956000, compress 0x0/0x0/0x0, omap 0x341c0, meta 0x3d3be40), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153305088 unmapped: 18055168 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153305088 unmapped: 18055168 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 306 ms_handle_reset con 0x56111a592000 session 0x56111b64e700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153305088 unmapped: 18055168 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f8531000/0x0/0x4ffc00000, data 0x37a11f9/0x3959000, compress 0x0/0x0/0x0, omap 0x34449, meta 0x3d3bbb7), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153305088 unmapped: 18055168 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 306 handle_osd_map epochs [306,307], i have 307, src has [1,307]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.094177246s of 12.924662590s, submitted: 74
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2303321 data_alloc: 251658240 data_used: 36605823
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153370624 unmapped: 17989632 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 307 ms_handle_reset con 0x56111aac9400 session 0x561119511880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153370624 unmapped: 17989632 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 307 ms_handle_reset con 0x56111af3c000 session 0x56111990e700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153444352 unmapped: 17915904 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 308 ms_handle_reset con 0x561118cfd400 session 0x56111b7b1a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153460736 unmapped: 17899520 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 308 heartbeat osd_stat(store_statfs(0x4f8529000/0x0/0x4ffc00000, data 0x37a4969/0x395f000, compress 0x0/0x0/0x0, omap 0x3480f, meta 0x3d3b7f1), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 308 handle_osd_map epochs [309,309], i have 309, src has [1,309]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 157671424 unmapped: 13688832 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2513509 data_alloc: 251658240 data_used: 36614015
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162971648 unmapped: 41992192 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 309 heartbeat osd_stat(store_statfs(0x4f4d2a000/0x0/0x4ffc00000, data 0x6fa6559/0x7162000, compress 0x0/0x0/0x0, omap 0x34895, meta 0x3d3b76b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 155738112 unmapped: 49225728 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 309 ms_handle_reset con 0x56111aac9400 session 0x56111b39c8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 160006144 unmapped: 44957696 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 310 ms_handle_reset con 0x5611213e4000 session 0x56111b55fdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 157048832 unmapped: 47915008 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 310 ms_handle_reset con 0x5611213e4400 session 0x561119833a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 161521664 unmapped: 43442176 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.858100891s of 10.006391525s, submitted: 77
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 311 ms_handle_reset con 0x5611213e4800 session 0x561119549c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3310169 data_alloc: 251658240 data_used: 38412788
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 161546240 unmapped: 43417600 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 311 ms_handle_reset con 0x56111af3dc00 session 0x56111b51a700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 311 ms_handle_reset con 0x56111af3d800 session 0x56111b7b0000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 43343872 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 311 heartbeat osd_stat(store_statfs(0x4eb91c000/0x0/0x4ffc00000, data 0x103aed21/0x1056e000, compress 0x0/0x0/0x0, omap 0x34a3d, meta 0x3d3b5c3), peers [0,1] op hist [0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 311 ms_handle_reset con 0x561118cfd400 session 0x56111b51a380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 311 ms_handle_reset con 0x56111aac9400 session 0x56111b39ddc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 311 ms_handle_reset con 0x5611213e4000 session 0x56111b55efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 161759232 unmapped: 43204608 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166166528 unmapped: 38797312 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 311 heartbeat osd_stat(store_statfs(0x4e805a000/0x0/0x4ffc00000, data 0x13c74c9f/0x13e31000, compress 0x0/0x0/0x0, omap 0x34139, meta 0x3d3bec7), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 311 ms_handle_reset con 0x56111a4e8800 session 0x561118b86540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 311 ms_handle_reset con 0x56111a592000 session 0x56111b64e380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 157990912 unmapped: 46972928 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3970671 data_alloc: 251658240 data_used: 38283634
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 158072832 unmapped: 46891008 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 312 ms_handle_reset con 0x561118cfd400 session 0x561118b86c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 158162944 unmapped: 46800896 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 312 ms_handle_reset con 0x56111aac9400 session 0x56111ab0ac40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 312 ms_handle_reset con 0x56111af3d800 session 0x56111b55fdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 145809408 unmapped: 59154432 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 313 ms_handle_reset con 0x561118cfd400 session 0x56111b64e700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 313 ms_handle_reset con 0x56111a4e8800 session 0x56111b7b01c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 314 ms_handle_reset con 0x56111a592000 session 0x56111990fa40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 145907712 unmapped: 59056128 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 145907712 unmapped: 59056128 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 314 heartbeat osd_stat(store_statfs(0x4f9110000/0x0/0x4ffc00000, data 0x1a1b1f2/0x1bd9000, compress 0x0/0x0/0x0, omap 0x34595, meta 0x4edba6b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2136432 data_alloc: 234881024 data_used: 17100130
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.281840801s of 10.235033989s, submitted: 247
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 315 ms_handle_reset con 0x56111aac9400 session 0x56111b39c8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 145907712 unmapped: 59056128 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 315 ms_handle_reset con 0x56111af3d800 session 0x5611187a7880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 145907712 unmapped: 59056128 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 315 ms_handle_reset con 0x56111b334000 session 0x56111b64f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 315 ms_handle_reset con 0x56111b334800 session 0x56111b7b0e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 315 ms_handle_reset con 0x561118cfd400 session 0x56111f862000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134496256 unmapped: 70467584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134496256 unmapped: 70467584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 315 heartbeat osd_stat(store_statfs(0x4f9ce4000/0x0/0x4ffc00000, data 0xe49c34/0x1006000, compress 0x0/0x0/0x0, omap 0x34595, meta 0x4edba6b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134496256 unmapped: 70467584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111a4e8800 session 0x56111f863340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2025554 data_alloc: 218103808 data_used: 4718689
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111a592000 session 0x561119549500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 70385664 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x561118cfd400 session 0x56111f8636c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 70385664 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 70385664 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9c60000/0x0/0x4ffc00000, data 0xecb72b/0x1089000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 70385664 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 70385664 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2025062 data_alloc: 218103808 data_used: 4718689
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 70385664 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 70385664 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.772894859s of 12.040178299s, submitted: 91
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111a4e8800 session 0x56111b194700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111b334000 session 0x56111ab3ea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111b334800 session 0x561119549a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111aac9400 session 0x56111f862380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x561118cfd400 session 0x56111b64fdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111a4e8800 session 0x56111b55f180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9c60000/0x0/0x4ffc00000, data 0xecb72b/0x1089000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111b334000 session 0x56111ae59880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111b334800 session 0x561119832540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111af3dc00 session 0x561118d896c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2078588 data_alloc: 218103808 data_used: 4722687
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9415000/0x0/0x4ffc00000, data 0x171872b/0x18d6000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x561118cfd400 session 0x56111990f500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111a4e8800 session 0x56111ae58fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9415000/0x0/0x4ffc00000, data 0x171872b/0x18d6000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111b334000 session 0x56111a52ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111b334800 session 0x561118ca6e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2078588 data_alloc: 218103808 data_used: 4722687
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9415000/0x0/0x4ffc00000, data 0x171872b/0x18d6000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9415000/0x0/0x4ffc00000, data 0x171872b/0x18d6000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2128124 data_alloc: 234881024 data_used: 13115391
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9415000/0x0/0x4ffc00000, data 0x171872b/0x18d6000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9415000/0x0/0x4ffc00000, data 0x171872b/0x18d6000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2128124 data_alloc: 234881024 data_used: 13115391
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.599102020s of 18.728061676s, submitted: 15
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9415000/0x0/0x4ffc00000, data 0x171872b/0x18d6000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [0,0,0,0,5])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142139392 unmapped: 62824448 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f8ad7000/0x0/0x4ffc00000, data 0x205772b/0x2215000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142139392 unmapped: 62824448 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142360576 unmapped: 62603264 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142360576 unmapped: 62603264 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2189224 data_alloc: 234881024 data_used: 13430783
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142360576 unmapped: 62603264 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f8a58000/0x0/0x4ffc00000, data 0x20d672b/0x2294000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142360576 unmapped: 62603264 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f8a58000/0x0/0x4ffc00000, data 0x20d672b/0x2294000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f8a58000/0x0/0x4ffc00000, data 0x20d672b/0x2294000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142360576 unmapped: 62603264 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f8a58000/0x0/0x4ffc00000, data 0x20d672b/0x2294000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142360576 unmapped: 62603264 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141508608 unmapped: 63455232 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2186184 data_alloc: 234881024 data_used: 13430783
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141508608 unmapped: 63455232 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141508608 unmapped: 63455232 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141508608 unmapped: 63455232 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.351999283s of 12.158617020s, submitted: 76
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f8a36000/0x0/0x4ffc00000, data 0x20f872b/0x22b6000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141770752 unmapped: 63193088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 317 ms_handle_reset con 0x5611213e5000 session 0x56111b55f6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 317 ms_handle_reset con 0x5611213e4c00 session 0x56111b64fa40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141770752 unmapped: 63193088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 318 ms_handle_reset con 0x561118cfd400 session 0x56111b594fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2193276 data_alloc: 234881024 data_used: 13430783
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141787136 unmapped: 63176704 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 318 heartbeat osd_stat(store_statfs(0x4f8a27000/0x0/0x4ffc00000, data 0x2100e63/0x22c1000, compress 0x0/0x0/0x0, omap 0x348e5, meta 0x4edb71b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141787136 unmapped: 63176704 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 319 ms_handle_reset con 0x56111a4e8800 session 0x56111990fa40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 64585728 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 319 ms_handle_reset con 0x56111b334000 session 0x56111ab0b6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 64585728 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 319 handle_osd_map epochs [319,320], i have 319, src has [1,320]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 320 ms_handle_reset con 0x56111b334800 session 0x56111b194c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 320 ms_handle_reset con 0x56111b334800 session 0x56111f863a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 64577536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 320 handle_osd_map epochs [320,321], i have 320, src has [1,321]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 321 ms_handle_reset con 0x561118cfd400 session 0x56111b595dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 321 heartbeat osd_stat(store_statfs(0x4f8a23000/0x0/0x4ffc00000, data 0x21045ef/0x22c7000, compress 0x0/0x0/0x0, omap 0x34a8d, meta 0x4edb573), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2201294 data_alloc: 234881024 data_used: 13430881
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 64577536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 321 ms_handle_reset con 0x56111a4e8800 session 0x56111ae58000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 64569344 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 322 ms_handle_reset con 0x56111b334000 session 0x56111b55ea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 64520192 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 64520192 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 322 ms_handle_reset con 0x5611213e4400 session 0x56111a5db180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 322 ms_handle_reset con 0x5611213e4800 session 0x56111b64ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.330351830s of 11.612763405s, submitted: 56
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133832704 unmapped: 71131136 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 322 heartbeat osd_stat(store_statfs(0x4f8a1a000/0x0/0x4ffc00000, data 0x210addf/0x22d0000, compress 0x0/0x0/0x0, omap 0x34c35, meta 0x4edb3cb), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 322 ms_handle_reset con 0x561118cfd400 session 0x561118d89dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2053200 data_alloc: 218103808 data_used: 4722687
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133840896 unmapped: 71122944 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 323 ms_handle_reset con 0x56111a4e8800 session 0x56111990f6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 70819840 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 70819840 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 70819840 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 323 heartbeat osd_stat(store_statfs(0x4f9c28000/0x0/0x4ffc00000, data 0xefb85e/0x10c2000, compress 0x0/0x0/0x0, omap 0x350d1, meta 0x4edaf2f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 70819840 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2055272 data_alloc: 218103808 data_used: 4722785
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 323 ms_handle_reset con 0x56111b334800 session 0x56111ac5afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 323 heartbeat osd_stat(store_statfs(0x4f9c2a000/0x0/0x4ffc00000, data 0xefb85e/0x10c2000, compress 0x0/0x0/0x0, omap 0x350d1, meta 0x4edaf2f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 324 ms_handle_reset con 0x5611213e4400 session 0x561118ca6540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 324 ms_handle_reset con 0x561118cfd400 session 0x56111b55f180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 324 ms_handle_reset con 0x56111a4e8800 session 0x56111b7b01c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 325 ms_handle_reset con 0x56111b334800 session 0x56111a52ae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 325 ms_handle_reset con 0x5611213e4400 session 0x56111b195a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 325 handle_osd_map epochs [325,326], i have 326, src has [1,326]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.949024200s of 10.366166115s, submitted: 44
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 326 heartbeat osd_stat(store_statfs(0x4f9c1f000/0x0/0x4ffc00000, data 0xe7ef88/0x1047000, compress 0x0/0x0/0x0, omap 0x35279, meta 0x4edad87), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 326 ms_handle_reset con 0x56111b334000 session 0x56111ae59340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2061850 data_alloc: 218103808 data_used: 4724735
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 326 ms_handle_reset con 0x561118cfd400 session 0x56111b594000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 326 heartbeat osd_stat(store_statfs(0x4f9cc4000/0x0/0x4ffc00000, data 0xe5cb40/0x1026000, compress 0x0/0x0/0x0, omap 0x35421, meta 0x4edabdf), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 326 handle_osd_map epochs [326,327], i have 326, src has [1,327]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2062426 data_alloc: 218103808 data_used: 4726685
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x56111a4e8800 session 0x56111ae59c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x56111b334800 session 0x56111b39c540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x5611213e4400 session 0x56111b7b0e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x5611213e4800 session 0x56111b51ce00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x561118cfd400 session 0x56111f863340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x56111a4e8800 session 0x5611187a7c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 70967296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x56111b334800 session 0x5611187a7880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x5611213e4400 session 0x56111981b340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 70967296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f938c000/0x0/0x4ffc00000, data 0x1791631/0x195e000, compress 0x0/0x0/0x0, omap 0x34aa7, meta 0x4edb559), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x5611213e4c00 session 0x56111ab0bc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 70967296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f938c000/0x0/0x4ffc00000, data 0x1791631/0x195e000, compress 0x0/0x0/0x0, omap 0x34aa7, meta 0x4edb559), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 70959104 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x561118cfd400 session 0x56111b64f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2126122 data_alloc: 218103808 data_used: 4726685
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f938b000/0x0/0x4ffc00000, data 0x1791641/0x195f000, compress 0x0/0x0/0x0, omap 0x34aa7, meta 0x4edb559), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 70959104 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x56111b334800 session 0x561119549a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x56111a4e8800 session 0x56111b39c8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.922699928s of 11.104353905s, submitted: 59
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x5611213e4400 session 0x56111a5dae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x5611213e5400 session 0x56111b51c700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x561118cfd400 session 0x56111b7b0e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x56111a4e8800 session 0x56111ae59c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x56111b334800 session 0x56111ab3ea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 70959104 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134012928 unmapped: 70950912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f8e83000/0x0/0x4ffc00000, data 0x1c9b641/0x1e69000, compress 0x0/0x0/0x0, omap 0x3425f, meta 0x4edbda1), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x5611213e5c00 session 0x56111b51ca80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 68509696 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 68509696 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2210415 data_alloc: 234881024 data_used: 13160349
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 68509696 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 heartbeat osd_stat(store_statfs(0x4f8e7e000/0x0/0x4ffc00000, data 0x1c9d1dd/0x1e6c000, compress 0x0/0x0/0x0, omap 0x34407, meta 0x4edbbf9), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 136486912 unmapped: 68476928 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 136486912 unmapped: 68476928 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bc400 session 0x561118d88380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bc400 session 0x56111b39c000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 63905792 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x561118cfd400 session 0x56111ab3ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bc800 session 0x56111ac5b500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bd400 session 0x561118d88540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bd800 session 0x56111981b880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 138182656 unmapped: 66781184 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2473564 data_alloc: 234881024 data_used: 13160381
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 heartbeat osd_stat(store_statfs(0x4f5e3e000/0x0/0x4ffc00000, data 0x4cdd24f/0x4eae000, compress 0x0/0x0/0x0, omap 0x34407, meta 0x4edbbf9), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 138182656 unmapped: 66781184 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 138182656 unmapped: 66781184 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 heartbeat osd_stat(store_statfs(0x4f5e3e000/0x0/0x4ffc00000, data 0x4cdd24f/0x4eae000, compress 0x0/0x0/0x0, omap 0x34407, meta 0x4edbbf9), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.840569496s of 11.517079353s, submitted: 68
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141402112 unmapped: 63561728 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142598144 unmapped: 62365696 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 61251584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2537594 data_alloc: 234881024 data_used: 14315453
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x561118cfd400 session 0x56111b7b1500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 61251584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 61251584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 heartbeat osd_stat(store_statfs(0x4f550c000/0x0/0x4ffc00000, data 0x560e24f/0x57df000, compress 0x0/0x0/0x0, omap 0x34407, meta 0x4edbbf9), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 147087360 unmapped: 57876480 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 147087360 unmapped: 57876480 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bc400 session 0x56111b51c1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bc800 session 0x56111ae58c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bd400 session 0x56111f863c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 147087360 unmapped: 57876480 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2568826 data_alloc: 234881024 data_used: 19599309
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bd000 session 0x56111a52b340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 147218432 unmapped: 57745408 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x561118cfd400 session 0x56111b51a380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bc400 session 0x56111b39ddc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 146350080 unmapped: 58613760 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b334800 session 0x56111ac5b6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 146350080 unmapped: 58613760 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 heartbeat osd_stat(store_statfs(0x4f550d000/0x0/0x4ffc00000, data 0x560e262/0x57df000, compress 0x0/0x0/0x0, omap 0x34662, meta 0x4edb99e), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 328 handle_osd_map epochs [328,329], i have 329, src has [1,329]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.684011459s of 10.954308510s, submitted: 95
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 329 ms_handle_reset con 0x5611213e5c00 session 0x56111b194700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 146350080 unmapped: 58613760 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 55148544 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 330 ms_handle_reset con 0x5611197de800 session 0x561119510a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 330 ms_handle_reset con 0x561118cfd400 session 0x56111ae59500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2570698 data_alloc: 234881024 data_used: 24315389
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 330 heartbeat osd_stat(store_statfs(0x4f5a12000/0x0/0x4ffc00000, data 0x51079fa/0x52da000, compress 0x0/0x0/0x0, omap 0x3480a, meta 0x4edb7f6), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 330 handle_osd_map epochs [331,331], i have 331, src has [1,331]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152526848 unmapped: 52436992 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 331 ms_handle_reset con 0x5611197de800 session 0x56111ae58a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152526848 unmapped: 52436992 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152543232 unmapped: 52420608 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 332 ms_handle_reset con 0x56111b334800 session 0x56111b39d180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 332 ms_handle_reset con 0x56111b7bc400 session 0x56111b51c540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 332 ms_handle_reset con 0x5611213e5c00 session 0x561119549c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152567808 unmapped: 52396032 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 333 ms_handle_reset con 0x561118cfd400 session 0x56111b195500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152567808 unmapped: 52396032 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 333 heartbeat osd_stat(store_statfs(0x4f5a05000/0x0/0x4ffc00000, data 0x510cdb2/0x52e3000, compress 0x0/0x0/0x0, omap 0x349b2, meta 0x4edb64e), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 335 ms_handle_reset con 0x5611197de800 session 0x56111ab3efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 335 heartbeat osd_stat(store_statfs(0x4f5a05000/0x0/0x4ffc00000, data 0x510cdb2/0x52e3000, compress 0x0/0x0/0x0, omap 0x349b2, meta 0x4edb64e), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2588514 data_alloc: 234881024 data_used: 24315974
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152567808 unmapped: 52396032 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 335 ms_handle_reset con 0x5611213e4400 session 0x56111b195a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 335 ms_handle_reset con 0x5611213e5800 session 0x56111f863880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152633344 unmapped: 52330496 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 335 ms_handle_reset con 0x56111b334800 session 0x561118d896c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 335 ms_handle_reset con 0x561118cfd400 session 0x56111b7b1500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 335 ms_handle_reset con 0x5611197de800 session 0x56111b51c700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 148684800 unmapped: 56279040 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.563458443s of 10.001652718s, submitted: 179
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 158531584 unmapped: 46432256 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 44974080 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 335 handle_osd_map epochs [335,336], i have 335, src has [1,336]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2576999 data_alloc: 234881024 data_used: 16956388
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 336 heartbeat osd_stat(store_statfs(0x4f4601000/0x0/0x4ffc00000, data 0x5341e96/0x551b000, compress 0x0/0x0/0x0, omap 0x33b40, meta 0x607c4c0), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 336 ms_handle_reset con 0x5611213e4400 session 0x56111e5cf880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 45105152 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 45105152 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 45105152 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 336 heartbeat osd_stat(store_statfs(0x4f4601000/0x0/0x4ffc00000, data 0x5341e96/0x551b000, compress 0x0/0x0/0x0, omap 0x33b40, meta 0x607c4c0), peers [0,1] op hist [0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 336 handle_osd_map epochs [337,337], i have 337, src has [1,337]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 336 handle_osd_map epochs [337,337], i have 337, src has [1,337]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 45105152 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 337 ms_handle_reset con 0x5611213e5800 session 0x561118b86000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 45105152 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2579971 data_alloc: 234881024 data_used: 16956388
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 158941184 unmapped: 46022656 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 338 ms_handle_reset con 0x56111b7bc400 session 0x56111b1948c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 158941184 unmapped: 46022656 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 338 handle_osd_map epochs [338,339], i have 338, src has [1,339]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 158990336 unmapped: 45973504 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x561118cfd400 session 0x561119511340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x5611197de800 session 0x56111b595a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.417187691s of 10.007616043s, submitted: 192
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x56111b7bc800 session 0x56111f8636c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x56111b7bd400 session 0x56111f8621c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x56111b7bc400 session 0x561118b861c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 158752768 unmapped: 46211072 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x561118cfd400 session 0x56111990ea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 339 heartbeat osd_stat(store_statfs(0x4f4626000/0x0/0x4ffc00000, data 0x53470ed/0x5524000, compress 0x0/0x0/0x0, omap 0x33f7c, meta 0x607c084), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x5611197de800 session 0x56111b51c540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x56111b7bc800 session 0x56111ae59500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x56111b7bd400 session 0x56111b194700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 45817856 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2591829 data_alloc: 234881024 data_used: 16977453
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 45817856 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 339 heartbeat osd_stat(store_statfs(0x4f45fa000/0x0/0x4ffc00000, data 0x53710e9/0x554f000, compress 0x0/0x0/0x0, omap 0x3407b, meta 0x607bf85), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x5611213e5800 session 0x56111ae58fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 42377216 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 340 ms_handle_reset con 0x5611197de800 session 0x561118d88380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 340 ms_handle_reset con 0x561118cfd400 session 0x56111990f500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 42377216 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162603008 unmapped: 42360832 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 340 ms_handle_reset con 0x56111b7bc800 session 0x56111f862540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 340 heartbeat osd_stat(store_statfs(0x4f45f5000/0x0/0x4ffc00000, data 0x5372ca1/0x5552000, compress 0x0/0x0/0x0, omap 0x3407b, meta 0x607bf85), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162635776 unmapped: 42328064 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 341 ms_handle_reset con 0x56111b7bd400 session 0x56111b51c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2640081 data_alloc: 234881024 data_used: 24678426
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162643968 unmapped: 42319872 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 341 handle_osd_map epochs [341,342], i have 341, src has [1,342]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 342 ms_handle_reset con 0x5611197dfc00 session 0x56111a52afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 342 heartbeat osd_stat(store_statfs(0x4f45f5000/0x0/0x4ffc00000, data 0x5374720/0x5555000, compress 0x0/0x0/0x0, omap 0x34299, meta 0x607bd67), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 342 ms_handle_reset con 0x561118cfd400 session 0x56111a52ae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 342 ms_handle_reset con 0x5611197de800 session 0x56111b55efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162652160 unmapped: 42311680 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162652160 unmapped: 42311680 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162652160 unmapped: 42311680 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162652160 unmapped: 42311680 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.657952309s of 11.761885643s, submitted: 51
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 342 heartbeat osd_stat(store_statfs(0x4f4421000/0x0/0x4ffc00000, data 0x554a300/0x572b000, compress 0x0/0x0/0x0, omap 0x34299, meta 0x607bd67), peers [0,1] op hist [1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2700779 data_alloc: 234881024 data_used: 24791322
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166469632 unmapped: 38494208 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166526976 unmapped: 38436864 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165847040 unmapped: 39116800 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165847040 unmapped: 39116800 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 342 handle_osd_map epochs [342,343], i have 343, src has [1,343]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165847040 unmapped: 39116800 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 343 heartbeat osd_stat(store_statfs(0x4f3b50000/0x0/0x4ffc00000, data 0x5e1b300/0x5ffc000, compress 0x0/0x0/0x0, omap 0x34299, meta 0x607bd67), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2720333 data_alloc: 234881024 data_used: 25899290
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165871616 unmapped: 39092224 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165871616 unmapped: 39092224 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 343 ms_handle_reset con 0x56111b7bc800 session 0x56111a5dae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165871616 unmapped: 39092224 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 343 ms_handle_reset con 0x56111b7bd400 session 0x56111e5ce700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 343 ms_handle_reset con 0x5611197df800 session 0x56111b55f340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165249024 unmapped: 39714816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 343 ms_handle_reset con 0x561118cfd400 session 0x56111b7b0000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 343 ms_handle_reset con 0x56111ad4d400 session 0x56111ab3ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165249024 unmapped: 39714816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 343 heartbeat osd_stat(store_statfs(0x4f3b2b000/0x0/0x4ffc00000, data 0x5e3cdf1/0x6021000, compress 0x0/0x0/0x0, omap 0x3430f, meta 0x607bcf1), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2721481 data_alloc: 234881024 data_used: 25899290
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.289365768s of 10.548576355s, submitted: 113
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 343 ms_handle_reset con 0x56111b7ac400 session 0x56111b39c8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165249024 unmapped: 39714816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 343 handle_osd_map epochs [343,344], i have 343, src has [1,344]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165249024 unmapped: 39714816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 344 ms_handle_reset con 0x56111b7a6c00 session 0x56111b7b0e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 344 ms_handle_reset con 0x56111b7a7400 session 0x56111f863dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165421056 unmapped: 39542784 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 345 ms_handle_reset con 0x561118cfd400 session 0x56111e5ce540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 345 ms_handle_reset con 0x56111b7ac000 session 0x56111a52b340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165421056 unmapped: 39542784 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 345 handle_osd_map epochs [345,346], i have 345, src has [1,346]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 346 ms_handle_reset con 0x56111ad4d400 session 0x56111b51a700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 346 ms_handle_reset con 0x56111b7a6c00 session 0x561118b86c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 346 heartbeat osd_stat(store_statfs(0x4f3b15000/0x0/0x4ffc00000, data 0x5e4c17b/0x6035000, compress 0x0/0x0/0x0, omap 0x34767, meta 0x607b899), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165478400 unmapped: 39485440 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2734496 data_alloc: 234881024 data_used: 26034629
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165478400 unmapped: 39485440 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165478400 unmapped: 39485440 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 346 ms_handle_reset con 0x56111b7ac400 session 0x56111ab0afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 346 handle_osd_map epochs [346,347], i have 347, src has [1,347]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 347 ms_handle_reset con 0x561118cfd400 session 0x56111b64e1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165478400 unmapped: 39485440 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 348 ms_handle_reset con 0x56111ad4d400 session 0x56111b55f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165478400 unmapped: 39485440 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 348 heartbeat osd_stat(store_statfs(0x4f3b0c000/0x0/0x4ffc00000, data 0x5e4f94d/0x603c000, compress 0x0/0x0/0x0, omap 0x34af5, meta 0x607b50b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165478400 unmapped: 39485440 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2741367 data_alloc: 234881024 data_used: 26034727
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 348 heartbeat osd_stat(store_statfs(0x4f3b0d000/0x0/0x4ffc00000, data 0x5e5294d/0x603f000, compress 0x0/0x0/0x0, omap 0x34af5, meta 0x607b50b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165511168 unmapped: 39452672 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.281254768s of 10.366102219s, submitted: 43
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 349 ms_handle_reset con 0x56111b7a6c00 session 0x56111b7b0c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165740544 unmapped: 39223296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165896192 unmapped: 39067648 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 349 heartbeat osd_stat(store_statfs(0x4f3b09000/0x0/0x4ffc00000, data 0x5e544db/0x6041000, compress 0x0/0x0/0x0, omap 0x34b7b, meta 0x607b485), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 349 ms_handle_reset con 0x56111b7ac000 session 0x56111b1941c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 349 heartbeat osd_stat(store_statfs(0x4f3b08000/0x0/0x4ffc00000, data 0x5e544eb/0x6042000, compress 0x0/0x0/0x0, omap 0x34b7b, meta 0x607b485), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165896192 unmapped: 39067648 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 39059456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2757064 data_alloc: 251658240 data_used: 27389893
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 350 ms_handle_reset con 0x56111b7a0400 session 0x56111ae59340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 39059456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 351 ms_handle_reset con 0x56111b7a0400 session 0x56111b51a540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 39059456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 352 ms_handle_reset con 0x561118cfd400 session 0x56111a5daa80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 352 ms_handle_reset con 0x56111ad4d400 session 0x56111b39ddc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 352 ms_handle_reset con 0x56111b7a6c00 session 0x56111ae58c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 352 ms_handle_reset con 0x56111b7a1000 session 0x56111b7b01c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 39059456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 352 ms_handle_reset con 0x56111ad4d400 session 0x561119511880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 39059456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 352 heartbeat osd_stat(store_statfs(0x4f3afb000/0x0/0x4ffc00000, data 0x5e59d06/0x604d000, compress 0x0/0x0/0x0, omap 0x3504d, meta 0x607afb3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 352 handle_osd_map epochs [352,353], i have 352, src has [1,353]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 353 ms_handle_reset con 0x56111b7a0400 session 0x56111b39d180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165961728 unmapped: 39002112 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 353 handle_osd_map epochs [353,354], i have 353, src has [1,354]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 354 ms_handle_reset con 0x56111b7a6c00 session 0x561118d896c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 354 ms_handle_reset con 0x561118cfd400 session 0x561118b87a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2793907 data_alloc: 251658240 data_used: 27387207
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166035456 unmapped: 38928384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166035456 unmapped: 38928384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.075937271s of 11.210276604s, submitted: 70
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 354 ms_handle_reset con 0x56111b7a0c00 session 0x561119548700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166182912 unmapped: 38780928 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 355 ms_handle_reset con 0x561118cfd400 session 0x561118b86000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166191104 unmapped: 38772736 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 355 heartbeat osd_stat(store_statfs(0x4f3aed000/0x0/0x4ffc00000, data 0x632d512/0x605f000, compress 0x0/0x0/0x0, omap 0x35663, meta 0x607a99d), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 355 ms_handle_reset con 0x56111b7a0c00 session 0x56111ae59180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 355 ms_handle_reset con 0x56111b7ac000 session 0x56111b55ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166191104 unmapped: 38772736 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2828665 data_alloc: 251658240 data_used: 27388133
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166191104 unmapped: 38772736 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 356 ms_handle_reset con 0x56111b7a0400 session 0x56111e5cf6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 356 ms_handle_reset con 0x56111b7a6c00 session 0x56111e5ce1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 356 handle_osd_map epochs [356,357], i have 356, src has [1,357]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 357 ms_handle_reset con 0x56111b7a6c00 session 0x56111e5cefc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166215680 unmapped: 38748160 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 358 ms_handle_reset con 0x561118cfd400 session 0x56111b64ea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 358 ms_handle_reset con 0x56111ad4d400 session 0x56111b39d6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 358 heartbeat osd_stat(store_statfs(0x4f3ae1000/0x0/0x4ffc00000, data 0x6332848/0x6069000, compress 0x0/0x0/0x0, omap 0x3aadb, meta 0x6075525), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166215680 unmapped: 38748160 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 358 heartbeat osd_stat(store_statfs(0x4f3adc000/0x0/0x4ffc00000, data 0x63343e4/0x606c000, compress 0x0/0x0/0x0, omap 0x3aadb, meta 0x6075525), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166215680 unmapped: 38748160 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166215680 unmapped: 38748160 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 359 ms_handle_reset con 0x56111b7a0400 session 0x56111990e700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2839427 data_alloc: 251658240 data_used: 27389319
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 359 ms_handle_reset con 0x56111b7a0c00 session 0x56111f8628c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166223872 unmapped: 38739968 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 359 ms_handle_reset con 0x56111ad4d400 session 0x56111b64e700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 359 heartbeat osd_stat(store_statfs(0x4f3adb000/0x0/0x4ffc00000, data 0x6335fd4/0x606f000, compress 0x0/0x0/0x0, omap 0x3acc3, meta 0x607533d), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 360 ms_handle_reset con 0x561118cfd400 session 0x56111b7b0700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167370752 unmapped: 37593088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 360 ms_handle_reset con 0x56111b7a0400 session 0x56111f862000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.749612808s of 10.231152534s, submitted: 62
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 360 ms_handle_reset con 0x56111b7a6c00 session 0x56111b7b0a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167395328 unmapped: 37568512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 360 handle_osd_map epochs [360,361], i have 360, src has [1,361]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167985152 unmapped: 36978688 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 361 ms_handle_reset con 0x561119499c00 session 0x5611187a7c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168230912 unmapped: 36732928 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 362 ms_handle_reset con 0x561118cfd400 session 0x56111a5da000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 362 ms_handle_reset con 0x56111ad4d400 session 0x56111ae59180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2854296 data_alloc: 251658240 data_used: 28381050
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168230912 unmapped: 36732928 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 362 heartbeat osd_stat(store_statfs(0x4f3ad4000/0x0/0x4ffc00000, data 0x633ae12/0x6076000, compress 0x0/0x0/0x0, omap 0x3b131, meta 0x6074ecf), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 362 handle_osd_map epochs [363,363], i have 363, src has [1,363]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 36700160 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168910848 unmapped: 36052992 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 364 ms_handle_reset con 0x56111c997c00 session 0x56111ab3ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 364 ms_handle_reset con 0x56111b79cc00 session 0x56111a5da700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 365 ms_handle_reset con 0x56111b7a6c00 session 0x56111b195880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168968192 unmapped: 35995648 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 365 ms_handle_reset con 0x56111b7a0400 session 0x561119548700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 366 ms_handle_reset con 0x561118cfd400 session 0x56111b64f500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 366 heartbeat osd_stat(store_statfs(0x4f3ac8000/0x0/0x4ffc00000, data 0x6341e52/0x6080000, compress 0x0/0x0/0x0, omap 0x3b501, meta 0x6074aff), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169066496 unmapped: 35897344 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2873249 data_alloc: 251658240 data_used: 29939966
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169164800 unmapped: 35799040 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 367 heartbeat osd_stat(store_statfs(0x4f3ac7000/0x0/0x4ffc00000, data 0x6343919/0x6083000, compress 0x0/0x0/0x0, omap 0x3b53c, meta 0x6074ac4), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169172992 unmapped: 35790848 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 368 handle_osd_map epochs [368,369], i have 368, src has [1,369]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 369 ms_handle_reset con 0x56111ad4d400 session 0x56111a5dae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 369 ms_handle_reset con 0x56111b79cc00 session 0x561119511880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 35758080 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.730643272s of 11.094212532s, submitted: 130
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 369 heartbeat osd_stat(store_statfs(0x4f3abf000/0x0/0x4ffc00000, data 0x634708d/0x6087000, compress 0x0/0x0/0x0, omap 0x3b724, meta 0x60748dc), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 172507136 unmapped: 32456704 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 369 ms_handle_reset con 0x56111b7a6c00 session 0x56111b39d6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 369 heartbeat osd_stat(store_statfs(0x4f384f000/0x0/0x4ffc00000, data 0x65bd08d/0x62fd000, compress 0x0/0x0/0x0, omap 0x3b724, meta 0x60748dc), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 369 ms_handle_reset con 0x561118cfd400 session 0x56111f8628c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 369 handle_osd_map epochs [369,370], i have 370, src has [1,370]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163274752 unmapped: 41689088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2508781 data_alloc: 234881024 data_used: 19538694
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163274752 unmapped: 41689088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163274752 unmapped: 41689088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163274752 unmapped: 41689088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 370 heartbeat osd_stat(store_statfs(0x4f6386000/0x0/0x4ffc00000, data 0x288ab0e/0x25ca000, compress 0x0/0x0/0x0, omap 0x3b810, meta 0x60747f0), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163274752 unmapped: 41689088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 370 heartbeat osd_stat(store_statfs(0x4f7582000/0x0/0x4ffc00000, data 0x288ab0e/0x25ca000, compress 0x0/0x0/0x0, omap 0x3b810, meta 0x60747f0), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 370 ms_handle_reset con 0x56111ad4d400 session 0x56111ae58a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 370 handle_osd_map epochs [370,371], i have 370, src has [1,371]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b79cc00 session 0x56111e5cf880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163610624 unmapped: 41353216 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2516713 data_alloc: 234881024 data_used: 19542692
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 heartbeat osd_stat(store_statfs(0x4f757c000/0x0/0x4ffc00000, data 0x288c5ef/0x25ce000, compress 0x0/0x0/0x0, omap 0x3bdd6, meta 0x607422a), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163610624 unmapped: 41353216 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b7a0400 session 0x56111f863880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111c997c00 session 0x56111ab0afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163610624 unmapped: 41353216 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x561118cfd400 session 0x56111b55e700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111ad4d400 session 0x56111990f6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b79cc00 session 0x561118d89180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 heartbeat osd_stat(store_statfs(0x4f7579000/0x0/0x4ffc00000, data 0x288c61e/0x25d1000, compress 0x0/0x0/0x0, omap 0x3bdd6, meta 0x607422a), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163618816 unmapped: 41345024 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b7a0400 session 0x56111b51a700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.235778809s of 10.372652054s, submitted: 81
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163618816 unmapped: 41345024 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x561118d18800 session 0x56111a52aa80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x561118cfd400 session 0x56111ae58380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x561118d18800 session 0x56111990e000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163618816 unmapped: 41345024 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111ad4d400 session 0x56111ab3efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2518253 data_alloc: 234881024 data_used: 19546804
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163700736 unmapped: 41263104 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b79cc00 session 0x56111b5941c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163700736 unmapped: 41263104 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 heartbeat osd_stat(store_statfs(0x4f757e000/0x0/0x4ffc00000, data 0x288c5ef/0x25ce000, compress 0x0/0x0/0x0, omap 0x3bdd6, meta 0x607422a), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b7a0400 session 0x56111b55f180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163725312 unmapped: 41238528 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x561118cfd400 session 0x56111b594000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163708928 unmapped: 41254912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x561118d18800 session 0x56111b39c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111ad4d400 session 0x56111a52b880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163725312 unmapped: 41238528 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2515330 data_alloc: 234881024 data_used: 19546788
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163725312 unmapped: 41238528 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b79cc00 session 0x56111b5941c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 heartbeat osd_stat(store_statfs(0x4f757d000/0x0/0x4ffc00000, data 0x288c600/0x25cf000, compress 0x0/0x0/0x0, omap 0x3bf84, meta 0x607407c), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.940250397s of 10.053503990s, submitted: 50
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2519443 data_alloc: 234881024 data_used: 19542692
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b7a0400 session 0x56111b51a8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b7ac000 session 0x56111b5948c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b7a1400 session 0x5611187a6000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x561118cfd400 session 0x56111b39ddc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163741696 unmapped: 41222144 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 372 ms_handle_reset con 0x561118d18800 session 0x56111b194c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163741696 unmapped: 41222144 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 372 heartbeat osd_stat(store_statfs(0x4f7578000/0x0/0x4ffc00000, data 0x288e1cb/0x25d1000, compress 0x0/0x0/0x0, omap 0x3bf84, meta 0x607407c), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 372 ms_handle_reset con 0x56111ad4d400 session 0x56111990e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 372 ms_handle_reset con 0x561118cfd400 session 0x56111ab0b340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163790848 unmapped: 41172992 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 372 ms_handle_reset con 0x56111b7a1400 session 0x56111b64fc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 372 ms_handle_reset con 0x561118d18800 session 0x56111b55ea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2505874 data_alloc: 234881024 data_used: 19477156
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163790848 unmapped: 41172992 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 373 ms_handle_reset con 0x56111b7ac000 session 0x56111ae58fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 373 heartbeat osd_stat(store_statfs(0x4f77eb000/0x0/0x4ffc00000, data 0x2619d77/0x235f000, compress 0x0/0x0/0x0, omap 0x412fc, meta 0x606ed04), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163790848 unmapped: 41172992 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 373 heartbeat osd_stat(store_statfs(0x4f77eb000/0x0/0x4ffc00000, data 0x2619d77/0x235f000, compress 0x0/0x0/0x0, omap 0x412fc, meta 0x606ed04), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 373 ms_handle_reset con 0x56111b79cc00 session 0x56111990f180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163790848 unmapped: 41172992 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 373 handle_osd_map epochs [373,374], i have 373, src has [1,374]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 374 ms_handle_reset con 0x561118d18800 session 0x56111b595a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 374 ms_handle_reset con 0x561118cfd400 session 0x56111b64f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 160202752 unmapped: 44761088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 374 ms_handle_reset con 0x5611213e4400 session 0x561118d88540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 374 ms_handle_reset con 0x5611213e5c00 session 0x56111b64fdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.537055969s of 10.864589691s, submitted: 97
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 374 ms_handle_reset con 0x56111b7ac000 session 0x56111990ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 160202752 unmapped: 44761088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x56111b7a1400 session 0x56111b7b1180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2465158 data_alloc: 234881024 data_used: 14330595
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 160202752 unmapped: 44761088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x561118cfd400 session 0x56111f862000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x561118d18800 session 0x56111990e8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x56111b7ac000 session 0x561118d89180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153214976 unmapped: 51748864 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 375 heartbeat osd_stat(store_statfs(0x4f8044000/0x0/0x4ffc00000, data 0x18f851f/0x1b07000, compress 0x0/0x0/0x0, omap 0x417ca, meta 0x606e836), peers [0,1] op hist [0,0,0,0,0,1,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x5611213e5c00 session 0x56111981b340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x5611213e4400 session 0x56111e5cea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x561118cfd400 session 0x56111b7b01c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153313280 unmapped: 51650560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x561118d18800 session 0x56111ae58380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x56111b7a1400 session 0x56111f863880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153313280 unmapped: 51650560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x56111b7ac000 session 0x56111b7b0700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x561118cfd400 session 0x56111a5da700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153313280 unmapped: 51650560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2529656 data_alloc: 218103808 data_used: 4760992
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 375 heartbeat osd_stat(store_statfs(0x4f5e4c000/0x0/0x4ffc00000, data 0x3af151f/0x3d00000, compress 0x0/0x0/0x0, omap 0x417ca, meta 0x606e836), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153313280 unmapped: 51650560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x56111b7a1400 session 0x561118d896c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 376 ms_handle_reset con 0x5611213e4400 session 0x56111f862380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 376 ms_handle_reset con 0x561118d18800 session 0x56111ab3efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 376 ms_handle_reset con 0x56111b7a0400 session 0x56111ae58c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 51642368 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 376 ms_handle_reset con 0x561118cfd400 session 0x56111b55f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 376 ms_handle_reset con 0x561118d18800 session 0x56111b55e380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 51642368 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 376 ms_handle_reset con 0x56111b7a0400 session 0x56111b594000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 377 ms_handle_reset con 0x56111b7a1400 session 0x56111f863880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153329664 unmapped: 51634176 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.584274292s of 10.083294868s, submitted: 80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 378 ms_handle_reset con 0x5611213e4400 session 0x56111e5cea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 378 heartbeat osd_stat(store_statfs(0x4f5e48000/0x0/0x4ffc00000, data 0x3af4df2/0x3d02000, compress 0x0/0x0/0x0, omap 0x41a7a, meta 0x606e586), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154378240 unmapped: 50585600 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 378 heartbeat osd_stat(store_statfs(0x4f5e43000/0x0/0x4ffc00000, data 0x3af68c5/0x3d05000, compress 0x0/0x0/0x0, omap 0x41b66, meta 0x606e49a), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2536786 data_alloc: 218103808 data_used: 4760780
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 378 heartbeat osd_stat(store_statfs(0x4f5e43000/0x0/0x4ffc00000, data 0x3af68c5/0x3d05000, compress 0x0/0x0/0x0, omap 0x41b66, meta 0x606e49a), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154378240 unmapped: 50585600 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 378 ms_handle_reset con 0x561118cfd400 session 0x56111ab3efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154378240 unmapped: 50585600 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 379 ms_handle_reset con 0x561118d18800 session 0x56111f862000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154386432 unmapped: 50577408 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 379 ms_handle_reset con 0x56111b7a1400 session 0x561119548e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 379 ms_handle_reset con 0x56111b7a0400 session 0x561119548700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 379 ms_handle_reset con 0x56111a4e8800 session 0x56111b51a8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154386432 unmapped: 50577408 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 379 ms_handle_reset con 0x56111b7a1400 session 0x56111990ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 379 ms_handle_reset con 0x56111b7a0400 session 0x56111f862380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 379 heartbeat osd_stat(store_statfs(0x4f5e43000/0x0/0x4ffc00000, data 0x3af84df/0x3d09000, compress 0x0/0x0/0x0, omap 0x415f6, meta 0x606ea0a), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 380 heartbeat osd_stat(store_statfs(0x4f5e43000/0x0/0x4ffc00000, data 0x3af84df/0x3d09000, compress 0x0/0x0/0x0, omap 0x415f6, meta 0x606ea0a), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154411008 unmapped: 50552832 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 380 ms_handle_reset con 0x56111a4e8400 session 0x56111e5cf180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2554301 data_alloc: 218103808 data_used: 5017486
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111a4e9000 session 0x56111ab0afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154427392 unmapped: 50536448 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f5e38000/0x0/0x4ffc00000, data 0x3afbb94/0x3d10000, compress 0x0/0x0/0x0, omap 0x4c6c8, meta 0x6063938), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154427392 unmapped: 50536448 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111a4e9800 session 0x56111990e8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111a4e9400 session 0x56111a52ae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111a4e8400 session 0x56111990fa40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111a4e9000 session 0x561119511340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111a4e9800 session 0x56111b7b1180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154959872 unmapped: 50003968 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111b7a0400 session 0x56111f8628c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f59e5000/0x0/0x4ffc00000, data 0x3f50c58/0x4167000, compress 0x0/0x0/0x0, omap 0x4c923, meta 0x60636dd), peers [0,1] op hist [0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111b7a0400 session 0x5611187a6000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111a4e8400 session 0x561118b876c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f59e5000/0x0/0x4ffc00000, data 0x3f50c58/0x4167000, compress 0x0/0x0/0x0, omap 0x4c923, meta 0x60636dd), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 155000832 unmapped: 49963008 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 381 handle_osd_map epochs [381,382], i have 382, src has [1,382]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.685203552s of 10.008099556s, submitted: 109
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 382 ms_handle_reset con 0x56111a4e9000 session 0x56111990f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 382 ms_handle_reset con 0x56111a4e9400 session 0x56111b55f180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 155009024 unmapped: 49954816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2596007 data_alloc: 218103808 data_used: 6243897
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 155009024 unmapped: 49954816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 382 ms_handle_reset con 0x56111a4e9800 session 0x561118d88540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 382 ms_handle_reset con 0x56111a4e9000 session 0x56111e5cfdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 155009024 unmapped: 49954816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 383 ms_handle_reset con 0x56111a4e9400 session 0x56111b51c000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 383 ms_handle_reset con 0x56111a4e8400 session 0x56111b64fdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 383 heartbeat osd_stat(store_statfs(0x4f59e1000/0x0/0x4ffc00000, data 0x3f52675/0x4169000, compress 0x0/0x0/0x0, omap 0x4cea6, meta 0x606315a), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 156082176 unmapped: 48881664 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 383 ms_handle_reset con 0x56111b7a1400 session 0x56111a5da700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 156090368 unmapped: 48873472 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 384 ms_handle_reset con 0x56111a4e9c00 session 0x56111990e380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 384 ms_handle_reset con 0x56111b7a0400 session 0x56111b595180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 156139520 unmapped: 48824320 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 384 ms_handle_reset con 0x56111a4e8400 session 0x561119548a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2600976 data_alloc: 218103808 data_used: 6334426
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166731776 unmapped: 38232064 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 384 ms_handle_reset con 0x56111a4e9000 session 0x56111b55e000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 384 ms_handle_reset con 0x56111a4e9400 session 0x56111a5dae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168017920 unmapped: 36945920 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 384 ms_handle_reset con 0x56111b7a1400 session 0x56111b195880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168853504 unmapped: 36110336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 384 ms_handle_reset con 0x56111a4e8400 session 0x56111990e700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 384 ms_handle_reset con 0x56111a4e9000 session 0x56111b195a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f5306000/0x0/0x4ffc00000, data 0x429ed91/0x44b6000, compress 0x0/0x0/0x0, omap 0x4d7e8, meta 0x6062818), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168861696 unmapped: 36102144 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 384 handle_osd_map epochs [385,385], i have 385, src has [1,385]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 385 ms_handle_reset con 0x56111b7a1400 session 0x56111b64f500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.623124123s of 10.003363609s, submitted: 199
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163094528 unmapped: 41869312 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 386 ms_handle_reset con 0x56111b7bd000 session 0x56111e5cf880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 386 ms_handle_reset con 0x56111b7bd800 session 0x56111981ae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2675304 data_alloc: 234881024 data_used: 12639194
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163414016 unmapped: 41549824 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 386 ms_handle_reset con 0x56111a4e8400 session 0x56111a52a540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 386 ms_handle_reset con 0x56111a4e9000 session 0x56111f862540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163487744 unmapped: 41476096 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 386 heartbeat osd_stat(store_statfs(0x4f568a000/0x0/0x4ffc00000, data 0x42a23fb/0x44be000, compress 0x0/0x0/0x0, omap 0x4e275, meta 0x6061d8b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163487744 unmapped: 41476096 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 386 ms_handle_reset con 0x56111b7a1400 session 0x56111f8628c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 386 ms_handle_reset con 0x56111b7bd800 session 0x56111b51c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163684352 unmapped: 41279488 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 387 ms_handle_reset con 0x56111b7bc400 session 0x56111a5dbc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 387 ms_handle_reset con 0x56111b7bd000 session 0x56111b55e700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 387 ms_handle_reset con 0x56111b7bc400 session 0x56111b595dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2677018 data_alloc: 234881024 data_used: 12639194
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 387 heartbeat osd_stat(store_statfs(0x4f5689000/0x0/0x4ffc00000, data 0x42a3feb/0x44c1000, compress 0x0/0x0/0x0, omap 0x4e7e6, meta 0x606181a), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 388 ms_handle_reset con 0x56111a4e8400 session 0x56111a52ae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 388 ms_handle_reset con 0x561118cfd400 session 0x56111b194c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 388 ms_handle_reset con 0x561118d18800 session 0x561119832e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163635200 unmapped: 41328640 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 388 ms_handle_reset con 0x561118cfd400 session 0x56111b39c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 388 heartbeat osd_stat(store_statfs(0x4f5688000/0x0/0x4ffc00000, data 0x42a5ba3/0x44c4000, compress 0x0/0x0/0x0, omap 0x4ecec, meta 0x6061314), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 388 ms_handle_reset con 0x561118d18800 session 0x56111ab0afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166223872 unmapped: 38739968 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 388 ms_handle_reset con 0x56111a4e9000 session 0x56111e5ce8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 388 ms_handle_reset con 0x56111b7a1400 session 0x561119511340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.728258133s of 10.006414413s, submitted: 167
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167690240 unmapped: 37273600 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2719929 data_alloc: 234881024 data_used: 13741018
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 37216256 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 37216256 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 37216256 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f515f000/0x0/0x4ffc00000, data 0x47cb622/0x49eb000, compress 0x0/0x0/0x0, omap 0x4f464, meta 0x6060b9c), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 37216256 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 37216256 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2719929 data_alloc: 234881024 data_used: 13741018
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 37216256 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 37216256 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167755776 unmapped: 37208064 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f515f000/0x0/0x4ffc00000, data 0x47cb622/0x49eb000, compress 0x0/0x0/0x0, omap 0x4f464, meta 0x6060b9c), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x56111b7bd800 session 0x561119548e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x56111a4e9400 session 0x56111b594540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x56111b7a0400 session 0x561119510c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 37199872 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.809198380s of 10.028366089s, submitted: 49
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x561118cfd400 session 0x56111ac5ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 37199872 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x561118d18800 session 0x56111ae59c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x56111b7a1400 session 0x56111b64efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x56111a4e9000 session 0x561118ca61c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x561118cfd400 session 0x561118d88540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2721301 data_alloc: 234881024 data_used: 13745130
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 37199872 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x561118d18800 session 0x561118d88c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f515d000/0x0/0x4ffc00000, data 0x47cd1be/0x49ee000, compress 0x0/0x0/0x0, omap 0x4f65c, meta 0x60609a4), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 37199872 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 37199872 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x56111b7a0400 session 0x56111b7b0c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 391 ms_handle_reset con 0x56111a4e9400 session 0x56111b594000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167788544 unmapped: 37175296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 392 ms_handle_reset con 0x561118cfd400 session 0x56111a52aa80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f515a000/0x0/0x4ffc00000, data 0x47ced7b/0x49ef000, compress 0x0/0x0/0x0, omap 0x4fb06, meta 0x60604fa), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 392 ms_handle_reset con 0x561118d18800 session 0x56111990ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 392 ms_handle_reset con 0x56111a4e9000 session 0x56111990e000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167788544 unmapped: 37175296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 392 ms_handle_reset con 0x56111b7a0400 session 0x56111e5ce540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2730363 data_alloc: 234881024 data_used: 13749175
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167788544 unmapped: 37175296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 392 ms_handle_reset con 0x56111b7bcc00 session 0x561118d88540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 392 ms_handle_reset con 0x561118d18800 session 0x56111b64e700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 392 ms_handle_reset con 0x561118cfd400 session 0x561118ca61c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 392 ms_handle_reset con 0x56111a4e9000 session 0x56111b594540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167788544 unmapped: 37175296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167788544 unmapped: 37175296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f5153000/0x0/0x4ffc00000, data 0x47d0966/0x49f4000, compress 0x0/0x0/0x0, omap 0x4fcff, meta 0x6060301), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167788544 unmapped: 37175296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 393 ms_handle_reset con 0x56111b7a0400 session 0x561119511340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167796736 unmapped: 37167104 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2739773 data_alloc: 234881024 data_used: 13900180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 394 ms_handle_reset con 0x56111a6b2800 session 0x56111b195880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167796736 unmapped: 37167104 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 394 ms_handle_reset con 0x56111a6b2800 session 0x56111b55e000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.383215904s of 11.524516106s, submitted: 64
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 394 ms_handle_reset con 0x561118cfd400 session 0x56111a5dbc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 37158912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 394 ms_handle_reset con 0x56111a4e9000 session 0x56111b7b01c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 37158912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 395 ms_handle_reset con 0x56111a6b3c00 session 0x56111b7b0700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 37158912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 395 heartbeat osd_stat(store_statfs(0x4f5151000/0x0/0x4ffc00000, data 0x47d3fc0/0x49fb000, compress 0x0/0x0/0x0, omap 0x5071d, meta 0x605f8e3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 37158912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2747580 data_alloc: 234881024 data_used: 13973703
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 37158912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 395 ms_handle_reset con 0x56111a6b2c00 session 0x56111a5db180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 395 ms_handle_reset con 0x561118cfd400 session 0x56111b39d340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 395 ms_handle_reset con 0x56111a4e9000 session 0x561119511dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 37158912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 395 heartbeat osd_stat(store_statfs(0x4f514b000/0x0/0x4ffc00000, data 0x47d5b7f/0x49ff000, compress 0x0/0x0/0x0, omap 0x50891, meta 0x605f76f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 37158912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 396 ms_handle_reset con 0x56111a6b2800 session 0x56111b7b1340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 396 ms_handle_reset con 0x56111a6b3000 session 0x56111f863500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 396 ms_handle_reset con 0x56111a6b3c00 session 0x56111e5cfdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f5147000/0x0/0x4ffc00000, data 0x47d775d/0x4a03000, compress 0x0/0x0/0x0, omap 0x50f80, meta 0x605f080), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2768378 data_alloc: 234881024 data_used: 15890154
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 396 ms_handle_reset con 0x561118cfd400 session 0x56111ab0b6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 396 ms_handle_reset con 0x56111a4e9000 session 0x56111ae58000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.716024399s of 10.781334877s, submitted: 42
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168050688 unmapped: 36913152 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 396 ms_handle_reset con 0x56111a6b2800 session 0x56111ae58c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 397 ms_handle_reset con 0x56111a6b3000 session 0x56111a52a1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168214528 unmapped: 36749312 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 397 heartbeat osd_stat(store_statfs(0x4f5149000/0x0/0x4ffc00000, data 0x47d775d/0x4a03000, compress 0x0/0x0/0x0, omap 0x50f80, meta 0x605f080), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 397 ms_handle_reset con 0x56111b7a5c00 session 0x56111e5ce8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168255488 unmapped: 36708352 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 397 ms_handle_reset con 0x561118cfd400 session 0x56111ab3ea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 36700160 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 398 ms_handle_reset con 0x56111a4e9000 session 0x56111b51d6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2779814 data_alloc: 234881024 data_used: 16459479
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 398 ms_handle_reset con 0x56111a6b2800 session 0x56111ab0afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168329216 unmapped: 36634624 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 398 ms_handle_reset con 0x56111a6b3000 session 0x56111981b340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168329216 unmapped: 36634624 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 398 heartbeat osd_stat(store_statfs(0x4f5146000/0x0/0x4ffc00000, data 0x47daee4/0x4a06000, compress 0x0/0x0/0x0, omap 0x51851, meta 0x605e7af), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168329216 unmapped: 36634624 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168329216 unmapped: 36634624 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 398 handle_osd_map epochs [398,399], i have 399, src has [1,399]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168329216 unmapped: 36634624 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f5141000/0x0/0x4ffc00000, data 0x47dc99b/0x4a09000, compress 0x0/0x0/0x0, omap 0x51a3b, meta 0x605e5c5), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2780659 data_alloc: 234881024 data_used: 16460689
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168329216 unmapped: 36634624 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 399 ms_handle_reset con 0x56111b7a1000 session 0x56111b51d340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168329216 unmapped: 36634624 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.611611366s of 10.825484276s, submitted: 107
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 399 ms_handle_reset con 0x561118cfd400 session 0x56111b55fc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168345600 unmapped: 36618240 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168345600 unmapped: 36618240 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168353792 unmapped: 36610048 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 400 ms_handle_reset con 0x56111a4e9000 session 0x56111aaac000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 400 ms_handle_reset con 0x56111a6b2800 session 0x56111e156e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2790394 data_alloc: 234881024 data_used: 16456691
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168427520 unmapped: 36536320 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 401 ms_handle_reset con 0x56111a6b3000 session 0x561118b87a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f513d000/0x0/0x4ffc00000, data 0x47de5a9/0x4a0e000, compress 0x0/0x0/0x0, omap 0x5249f, meta 0x605db61), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 401 ms_handle_reset con 0x561118d18000 session 0x561119511340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168460288 unmapped: 36503552 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 401 ms_handle_reset con 0x561118cfd400 session 0x56111b39cc40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f5138000/0x0/0x4ffc00000, data 0x47e0199/0x4a11000, compress 0x0/0x0/0x0, omap 0x5299a, meta 0x605d666), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168460288 unmapped: 36503552 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 401 ms_handle_reset con 0x56111a4e9000 session 0x56111b7b1dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 401 ms_handle_reset con 0x56111a6b2800 session 0x56111b55f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168558592 unmapped: 36405248 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168583168 unmapped: 36380672 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2792968 data_alloc: 234881024 data_used: 16457178
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168583168 unmapped: 36380672 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168583168 unmapped: 36380672 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f5138000/0x0/0x4ffc00000, data 0x47e1ba6/0x4a12000, compress 0x0/0x0/0x0, omap 0x52dec, meta 0x605d214), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 402 ms_handle_reset con 0x56111a6b3000 session 0x561118b86540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168689664 unmapped: 36274176 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 402 ms_handle_reset con 0x56111b7bd400 session 0x561118b86000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 402 ms_handle_reset con 0x56111b7bc800 session 0x56111b594a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 402 ms_handle_reset con 0x561118cfd400 session 0x56111f863880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168804352 unmapped: 36159488 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.948968887s of 12.101161003s, submitted: 98
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 402 ms_handle_reset con 0x56111a4e9000 session 0x56111b194700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168804352 unmapped: 36159488 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 403 ms_handle_reset con 0x56111a6b2800 session 0x561118d88540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f5134000/0x0/0x4ffc00000, data 0x47e3752/0x4a16000, compress 0x0/0x0/0x0, omap 0x52fe7, meta 0x605d019), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2802574 data_alloc: 234881024 data_used: 17395162
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168828928 unmapped: 36134912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 404 ms_handle_reset con 0x56111a6b3000 session 0x56111990ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168845312 unmapped: 36118528 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168853504 unmapped: 36110336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 404 ms_handle_reset con 0x561118cfd400 session 0x56111b51cfc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f512e000/0x0/0x4ffc00000, data 0x47e5350/0x4a1a000, compress 0x0/0x0/0x0, omap 0x5341d, meta 0x605cbe3), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 404 ms_handle_reset con 0x56111a6b2800 session 0x56111b51c700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 404 ms_handle_reset con 0x56111a4e9000 session 0x56111b7b1500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166928384 unmapped: 38035456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 405 ms_handle_reset con 0x56111a4ed800 session 0x561118d89180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 405 ms_handle_reset con 0x56111b79a000 session 0x56111b51b6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166936576 unmapped: 38027264 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 405 handle_osd_map epochs [405,406], i have 406, src has [1,406]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 406 ms_handle_reset con 0x56111b79bc00 session 0x56111b55e000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 406 ms_handle_reset con 0x56111b7bc800 session 0x56111990f500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2529523 data_alloc: 234881024 data_used: 10969052
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 406 heartbeat osd_stat(store_statfs(0x4f80b2000/0x0/0x4ffc00000, data 0x185fb3c/0x1a98000, compress 0x0/0x0/0x0, omap 0x54040, meta 0x605bfc0), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166944768 unmapped: 38019072 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 406 ms_handle_reset con 0x56111a4e9000 session 0x56111b1948c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 407 ms_handle_reset con 0x561118cfd400 session 0x56111f862a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166944768 unmapped: 38019072 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166961152 unmapped: 38002688 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 407 ms_handle_reset con 0x56111a6b2800 session 0x561119549340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 408 heartbeat osd_stat(store_statfs(0x4f80ad000/0x0/0x4ffc00000, data 0x18616da/0x1a9b000, compress 0x0/0x0/0x0, omap 0x545e9, meta 0x605ba17), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 408 ms_handle_reset con 0x561118cfd400 session 0x56111e5ce380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166969344 unmapped: 37994496 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 408 ms_handle_reset con 0x56111a4e9000 session 0x56111b51d6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 409 ms_handle_reset con 0x56111a4ed800 session 0x56111f862000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.935689926s of 10.119892120s, submitted: 98
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 410 ms_handle_reset con 0x56111b7bc800 session 0x56111b51c540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 410 ms_handle_reset con 0x56111b79bc00 session 0x56111a52b340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 410 ms_handle_reset con 0x56111c913000 session 0x561118b868c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2542153 data_alloc: 234881024 data_used: 10969539
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168198144 unmapped: 36765696 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 410 ms_handle_reset con 0x561118cfd400 session 0x56111981ac40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168206336 unmapped: 36757504 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168206336 unmapped: 36757504 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 410 ms_handle_reset con 0x56111a4ed800 session 0x56111b5941c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 410 heartbeat osd_stat(store_statfs(0x4f80a3000/0x0/0x4ffc00000, data 0x1866bd3/0x1aa3000, compress 0x0/0x0/0x0, omap 0x55711, meta 0x605a8ef), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168230912 unmapped: 36732928 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 411 ms_handle_reset con 0x56111b7bc800 session 0x56111ac5b6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 412 ms_handle_reset con 0x56111af3d000 session 0x561119510a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 412 ms_handle_reset con 0x56111a4e9000 session 0x56111ae59500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f809e000/0x0/0x4ffc00000, data 0x186a415/0x1aaa000, compress 0x0/0x0/0x0, omap 0x560b1, meta 0x6059f4f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 36700160 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 412 ms_handle_reset con 0x561118cfd400 session 0x5611187a6a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 412 ms_handle_reset con 0x56111a4ed800 session 0x561119833a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2550467 data_alloc: 234881024 data_used: 10973733
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 36700160 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 412 ms_handle_reset con 0x56111b7bc800 session 0x561118ca6540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f809e000/0x0/0x4ffc00000, data 0x186a415/0x1aaa000, compress 0x0/0x0/0x0, omap 0x560b1, meta 0x6059f4f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168230912 unmapped: 36732928 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 413 ms_handle_reset con 0x56111c913000 session 0x56111ab0bc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 413 ms_handle_reset con 0x561118cfd400 session 0x56111b55e8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168288256 unmapped: 36675584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 413 ms_handle_reset con 0x56111a4e9000 session 0x56111990f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168288256 unmapped: 36675584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 413 heartbeat osd_stat(store_statfs(0x4f809c000/0x0/0x4ffc00000, data 0x186c067/0x1aae000, compress 0x0/0x0/0x0, omap 0x56319, meta 0x6059ce7), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168288256 unmapped: 36675584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 413 heartbeat osd_stat(store_statfs(0x4f809c000/0x0/0x4ffc00000, data 0x186c067/0x1aae000, compress 0x0/0x0/0x0, omap 0x56319, meta 0x6059ce7), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.585222244s of 10.777623177s, submitted: 108
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 413 ms_handle_reset con 0x56111a4ed800 session 0x56111e5cf180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2556916 data_alloc: 234881024 data_used: 10973831
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168321024 unmapped: 36642816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 414 ms_handle_reset con 0x56111b335800 session 0x56111b55e380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 415 ms_handle_reset con 0x561119816c00 session 0x56111b39d880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 415 ms_handle_reset con 0x56111b7bc800 session 0x56111981b340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168337408 unmapped: 36626432 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 416 ms_handle_reset con 0x561118cfd400 session 0x561118b861c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 416 ms_handle_reset con 0x56111a4e9000 session 0x56111b39ddc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168345600 unmapped: 36618240 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 416 ms_handle_reset con 0x56111a4ed800 session 0x56111b7b1880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168345600 unmapped: 36618240 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 416 heartbeat osd_stat(store_statfs(0x4f808d000/0x0/0x4ffc00000, data 0x1871439/0x1ab9000, compress 0x0/0x0/0x0, omap 0x56da4, meta 0x605925c), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168353792 unmapped: 36610048 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 417 ms_handle_reset con 0x56111b7ae400 session 0x56111ab0ac40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 417 ms_handle_reset con 0x56111b79e000 session 0x56111b7b0700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2573195 data_alloc: 234881024 data_used: 10974362
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 417 ms_handle_reset con 0x561118cfd400 session 0x56111b55efc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168615936 unmapped: 36347904 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 418 ms_handle_reset con 0x56111a4e9000 session 0x56111e156fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 418 ms_handle_reset con 0x56111a4ed800 session 0x561118b86380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 418 ms_handle_reset con 0x56111b335800 session 0x56111ab3ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168624128 unmapped: 36339712 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 419 ms_handle_reset con 0x561118cfd400 session 0x561118d88540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 419 ms_handle_reset con 0x56111a4e9000 session 0x56111b55ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 173244416 unmapped: 31719424 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 419 heartbeat osd_stat(store_statfs(0x4f7bba000/0x0/0x4ffc00000, data 0x1d43afd/0x1f90000, compress 0x0/0x0/0x0, omap 0x57112, meta 0x6058eee), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 173301760 unmapped: 31662080 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 420 ms_handle_reset con 0x56111a4ed800 session 0x561119511180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 34856960 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2613030 data_alloc: 234881024 data_used: 12158890
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.196111679s of 10.401690483s, submitted: 139
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170115072 unmapped: 34848768 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 421 ms_handle_reset con 0x56111b79e000 session 0x56111e5cfa40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170123264 unmapped: 34840576 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 421 heartbeat osd_stat(store_statfs(0x4f7bb8000/0x0/0x4ffc00000, data 0x1d48cfc/0x1f92000, compress 0x0/0x0/0x0, omap 0x57961, meta 0x605869f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170139648 unmapped: 34824192 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f7bb8000/0x0/0x4ffc00000, data 0x1d48cfc/0x1f92000, compress 0x0/0x0/0x0, omap 0x57961, meta 0x605869f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 422 ms_handle_reset con 0x56111b7bc800 session 0x56111a52bdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 35659776 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 424 ms_handle_reset con 0x561118cfd400 session 0x5611187a7180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 424 ms_handle_reset con 0x56111a4e9000 session 0x561119511dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169312256 unmapped: 35651584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2623198 data_alloc: 234881024 data_used: 12160489
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169312256 unmapped: 35651584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 35643392 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 35643392 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 424 heartbeat osd_stat(store_statfs(0x4f7bad000/0x0/0x4ffc00000, data 0x1d4e14e/0x1f99000, compress 0x0/0x0/0x0, omap 0x5877b, meta 0x6057885), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169361408 unmapped: 35602432 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169361408 unmapped: 35602432 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2649216 data_alloc: 234881024 data_used: 12135913
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169361408 unmapped: 35602432 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169361408 unmapped: 35602432 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.967669487s of 12.112925529s, submitted: 80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169361408 unmapped: 35602432 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 424 ms_handle_reset con 0x56111a4ed800 session 0x56111981a540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 424 heartbeat osd_stat(store_statfs(0x4f7bae000/0x0/0x4ffc00000, data 0x205914e/0x1f9e000, compress 0x0/0x0/0x0, omap 0x58aa4, meta 0x605755c), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169361408 unmapped: 35602432 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 425 ms_handle_reset con 0x56111af3dc00 session 0x56111b7b1dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 426 ms_handle_reset con 0x56111b79e000 session 0x56111b194e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170450944 unmapped: 34512896 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2657200 data_alloc: 234881024 data_used: 12140009
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 34480128 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 427 ms_handle_reset con 0x561118cfd400 session 0x561118d88380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170516480 unmapped: 34447360 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f7b9f000/0x0/0x4ffc00000, data 0x205e51e/0x1fa7000, compress 0x0/0x0/0x0, omap 0x597cc, meta 0x6056834), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171057152 unmapped: 33906688 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x56111a4e9000 session 0x56111aaac000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x56111a4ed800 session 0x56111f863180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x56111af3dc00 session 0x56111b195dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x56111af3c800 session 0x56111b7b0c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x561118cfd400 session 0x56111b39da40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171065344 unmapped: 33898496 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f7800000/0x0/0x4ffc00000, data 0x240012a/0x234a000, compress 0x0/0x0/0x0, omap 0x59be4, meta 0x605641c), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171065344 unmapped: 33898496 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f7800000/0x0/0x4ffc00000, data 0x240012a/0x234a000, compress 0x0/0x0/0x0, omap 0x59be4, meta 0x605641c), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2697314 data_alloc: 234881024 data_used: 12140009
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171065344 unmapped: 33898496 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f7800000/0x0/0x4ffc00000, data 0x240012a/0x234a000, compress 0x0/0x0/0x0, omap 0x59be4, meta 0x605641c), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x56111a4e9000 session 0x56111aaad180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171065344 unmapped: 33898496 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x56111a4ed800 session 0x561118d89dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171065344 unmapped: 33898496 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x56111af3dc00 session 0x56111b51c000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.563882828s of 10.821042061s, submitted: 94
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x56111984dc00 session 0x5611187a7500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170975232 unmapped: 33988608 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 429 heartbeat osd_stat(store_statfs(0x4f77d1000/0x0/0x4ffc00000, data 0x242bc14/0x2379000, compress 0x0/0x0/0x0, omap 0x59e42, meta 0x60561be), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170999808 unmapped: 33964032 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 429 ms_handle_reset con 0x56111b334800 session 0x56111b51d6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 429 ms_handle_reset con 0x56111b7aec00 session 0x561119511c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2724221 data_alloc: 234881024 data_used: 15364585
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 429 ms_handle_reset con 0x56111a4ed800 session 0x56111981ac40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171319296 unmapped: 33644544 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171319296 unmapped: 33644544 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171319296 unmapped: 33644544 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 429 ms_handle_reset con 0x56111af3dc00 session 0x56111b195c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 429 ms_handle_reset con 0x56111a6b2400 session 0x56111e5cfdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171376640 unmapped: 33587200 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 429 heartbeat osd_stat(store_statfs(0x4f7d11000/0x0/0x4ffc00000, data 0x1eedc14/0x1e3b000, compress 0x0/0x0/0x0, omap 0x59e6f, meta 0x6056191), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 429 ms_handle_reset con 0x56111a4ed800 session 0x56111981ae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171376640 unmapped: 33587200 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2676901 data_alloc: 234881024 data_used: 15323625
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171393024 unmapped: 33570816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171393024 unmapped: 33570816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 430 ms_handle_reset con 0x561118d18800 session 0x56111b64e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 430 ms_handle_reset con 0x56111b7a0400 session 0x56111f863a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 430 ms_handle_reset con 0x56111a6b2400 session 0x56111b7b1a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171417600 unmapped: 33546240 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.976639748s of 10.126955986s, submitted: 81
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 430 ms_handle_reset con 0x56111af3dc00 session 0x56111a52a8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171417600 unmapped: 33546240 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171433984 unmapped: 33529856 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f7891000/0x0/0x4ffc00000, data 0x205c2e5/0x22b3000, compress 0x0/0x0/0x0, omap 0x5a799, meta 0x6055867), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 432 ms_handle_reset con 0x561118d18800 session 0x56111a52a540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2720136 data_alloc: 234881024 data_used: 15517357
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 172007424 unmapped: 32956416 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 432 ms_handle_reset con 0x56111a4ed800 session 0x56111b39c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 432 ms_handle_reset con 0x56111a6b2400 session 0x56111b194700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167165952 unmapped: 37797888 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 433 ms_handle_reset con 0x56111b7a0400 session 0x561119548380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f7898000/0x0/0x4ffc00000, data 0x16e4e1f/0x193c000, compress 0x0/0x0/0x0, omap 0x5a910, meta 0x60556f0), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167231488 unmapped: 37732352 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 433 ms_handle_reset con 0x56111b334800 session 0x561118ca6e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166567936 unmapped: 38395904 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f820a000/0x0/0x4ffc00000, data 0x16ea9ad/0x1942000, compress 0x0/0x0/0x0, omap 0x5ad7d, meta 0x6055283), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166567936 unmapped: 38395904 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 433 ms_handle_reset con 0x561118d18800 session 0x56111b55e700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2613050 data_alloc: 218103808 data_used: 8288745
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166567936 unmapped: 38395904 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 434 ms_handle_reset con 0x56111a4ed800 session 0x56111b51c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167624704 unmapped: 37339136 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 434 ms_handle_reset con 0x56111a6b2400 session 0x56111b64fa40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167624704 unmapped: 37339136 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 435 ms_handle_reset con 0x56111b7a0400 session 0x56111a52b880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.713693619s of 10.039623260s, submitted: 111
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167632896 unmapped: 37330944 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 436 ms_handle_reset con 0x56111b7aec00 session 0x56111b195180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 436 ms_handle_reset con 0x561118d18800 session 0x56111ac5ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f81fe000/0x0/0x4ffc00000, data 0x16efc52/0x194c000, compress 0x0/0x0/0x0, omap 0x5b552, meta 0x6054aae), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167649280 unmapped: 37314560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2623231 data_alloc: 218103808 data_used: 8289358
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167649280 unmapped: 37314560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167649280 unmapped: 37314560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167649280 unmapped: 37314560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 437 ms_handle_reset con 0x56111a6b2400 session 0x561118d881c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 438 ms_handle_reset con 0x56111b7a0400 session 0x56111e5cfc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167649280 unmapped: 37314560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 438 heartbeat osd_stat(store_statfs(0x4f81f5000/0x0/0x4ffc00000, data 0x16f33b8/0x1953000, compress 0x0/0x0/0x0, omap 0x5b844, meta 0x60547bc), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167649280 unmapped: 37314560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 438 ms_handle_reset con 0x56111b4a7c00 session 0x56111981ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 438 handle_osd_map epochs [438,439], i have 438, src has [1,439]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2635617 data_alloc: 218103808 data_used: 8289943
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 439 ms_handle_reset con 0x56111b7aec00 session 0x561118d88c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167862272 unmapped: 37101568 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 439 ms_handle_reset con 0x56111a6b2400 session 0x56111ac5ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 439 ms_handle_reset con 0x56111b4a7c00 session 0x56111b51c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 439 ms_handle_reset con 0x56111b7a0400 session 0x56111b194700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 439 ms_handle_reset con 0x56111ac2a400 session 0x56111b39c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 439 handle_osd_map epochs [439,440], i have 439, src has [1,440]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 440 ms_handle_reset con 0x561118d18800 session 0x56111b39c8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 440 ms_handle_reset con 0x561118d18800 session 0x56111a5dba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167878656 unmapped: 37085184 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 440 ms_handle_reset con 0x56111a6b2400 session 0x56111f8621c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 441 ms_handle_reset con 0x56111ac2a400 session 0x56111f863a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167895040 unmapped: 37068800 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 441 heartbeat osd_stat(store_statfs(0x4f81ce000/0x0/0x4ffc00000, data 0x1711cd0/0x197a000, compress 0x0/0x0/0x0, omap 0x5bf65, meta 0x605409b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 441 ms_handle_reset con 0x56111b7a0400 session 0x56111ae58fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.896823883s of 10.002335548s, submitted: 70
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 442 ms_handle_reset con 0x56111b4a7c00 session 0x561118b868c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 442 ms_handle_reset con 0x5611197c0000 session 0x56111b64f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167919616 unmapped: 37044224 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 442 ms_handle_reset con 0x5611197c0400 session 0x561118d88380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 443 ms_handle_reset con 0x561118d18800 session 0x56111e5ce8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167936000 unmapped: 37027840 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 443 ms_handle_reset con 0x56111ac2a400 session 0x56111b195c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 443 ms_handle_reset con 0x56111b4a7c00 session 0x56111b64e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 443 handle_osd_map epochs [443,444], i have 443, src has [1,444]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2658607 data_alloc: 218103808 data_used: 8291714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 444 ms_handle_reset con 0x56111a6b2400 session 0x56111b7b1500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167976960 unmapped: 36986880 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 444 ms_handle_reset con 0x561118d18800 session 0x56111a52afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167976960 unmapped: 36986880 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 444 ms_handle_reset con 0x56111a4ed800 session 0x56111a5da700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 444 ms_handle_reset con 0x5611197c0000 session 0x56111e157180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 444 ms_handle_reset con 0x5611197c0400 session 0x56111990e380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f81c2000/0x0/0x4ffc00000, data 0x1718bae/0x1984000, compress 0x0/0x0/0x0, omap 0x5cf57, meta 0x60530a9), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167976960 unmapped: 36986880 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 444 ms_handle_reset con 0x5611197c0000 session 0x561119549c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 444 ms_handle_reset con 0x56111a4ed800 session 0x56111990e8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 445 ms_handle_reset con 0x561118d18800 session 0x56111e5cea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167985152 unmapped: 36978688 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 445 ms_handle_reset con 0x56111a6b2400 session 0x56111b51ce00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167985152 unmapped: 36978688 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 446 ms_handle_reset con 0x56111ac2a400 session 0x56111e5ce540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2664043 data_alloc: 218103808 data_used: 8291926
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 446 handle_osd_map epochs [446,447], i have 446, src has [1,447]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 447 ms_handle_reset con 0x561118d18800 session 0x56111a5dae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 2401.1 total, 600.0 interval
                                              Cumulative writes: 19K writes, 75K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                              Cumulative WAL: 19K writes, 6579 syncs, 2.95 writes per sync, written: 0.06 GB, 0.03 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 9480 writes, 34K keys, 9480 commit groups, 1.0 writes per commit group, ingest: 32.94 MB, 0.05 MB/s
                                              Interval WAL: 9480 writes, 3925 syncs, 2.42 writes per sync, written: 0.03 GB, 0.05 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168001536 unmapped: 36962304 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 447 ms_handle_reset con 0x5611197c0000 session 0x56111b39da40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 447 ms_handle_reset con 0x56111a4ed800 session 0x56111b51c000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 447 ms_handle_reset con 0x56111b7a0400 session 0x56111a5da000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 36937728 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f81bf000/0x0/0x4ffc00000, data 0x171de92/0x1989000, compress 0x0/0x0/0x0, omap 0x5d6c9, meta 0x6052937), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 36937728 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.854352951s of 10.016167641s, submitted: 96
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 448 ms_handle_reset con 0x56111a6b2400 session 0x56111b64fa40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2670935 data_alloc: 218103808 data_used: 8394212
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 448 ms_handle_reset con 0x5611197c0000 session 0x56111b64ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 448 ms_handle_reset con 0x561118d18800 session 0x56111ae59880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f81b1000/0x0/0x4ffc00000, data 0x172b9b3/0x1999000, compress 0x0/0x0/0x0, omap 0x5d92a, meta 0x60526d6), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168042496 unmapped: 36921344 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f81b1000/0x0/0x4ffc00000, data 0x172b9b3/0x1999000, compress 0x0/0x0/0x0, omap 0x5d92a, meta 0x60526d6), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168042496 unmapped: 36921344 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f81b1000/0x0/0x4ffc00000, data 0x172b9b3/0x1999000, compress 0x0/0x0/0x0, omap 0x5d92a, meta 0x60526d6), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2672641 data_alloc: 218103808 data_used: 8394212
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168042496 unmapped: 36921344 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168050688 unmapped: 36913152 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168050688 unmapped: 36913152 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 448 ms_handle_reset con 0x56111a6b2400 session 0x56111aaac000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 448 ms_handle_reset con 0x56111a4ed800 session 0x56111b7b0c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168058880 unmapped: 36904960 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168058880 unmapped: 36904960 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f81b2000/0x0/0x4ffc00000, data 0x172b9c2/0x199a000, compress 0x0/0x0/0x0, omap 0x5d92a, meta 0x60526d6), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2673064 data_alloc: 218103808 data_used: 8394212
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 448 ms_handle_reset con 0x56111b7a0400 session 0x56111990fc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168058880 unmapped: 36904960 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.842747688s of 12.961668015s, submitted: 21
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 448 ms_handle_reset con 0x5611197c0000 session 0x56111a5db6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168075264 unmapped: 36888576 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 449 ms_handle_reset con 0x56111a4ed800 session 0x56111b64fc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 449 ms_handle_reset con 0x56111b79d800 session 0x56111b39c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168083456 unmapped: 36880384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x56111a6b2400 session 0x56111b7b1a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x56111b7a9400 session 0x56111a5dba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x561118d18800 session 0x56111e5cfdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168099840 unmapped: 36864000 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x5611197c0000 session 0x56111990f500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168099840 unmapped: 36864000 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2685888 data_alloc: 218103808 data_used: 8398421
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x56111b79d800 session 0x56111a52b880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x56111a6b2400 session 0x56111ae58fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168108032 unmapped: 36855808 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 450 heartbeat osd_stat(store_statfs(0x4f81a4000/0x0/0x4ffc00000, data 0x172f6af/0x19a3000, compress 0x0/0x0/0x0, omap 0x5dfb1, meta 0x605204f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168140800 unmapped: 41025536 heap: 209166336 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 172466176 unmapped: 40902656 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 173572096 unmapped: 39796736 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169443328 unmapped: 43925504 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4162797 data_alloc: 218103808 data_used: 8398421
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 174800896 unmapped: 38567936 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x56111a6b3400 session 0x56111e5ce8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x561118d18800 session 0x56111e157180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x56111a4ed800 session 0x56111e5ce380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x5611197c0000 session 0x56111f862a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 450 heartbeat osd_stat(store_statfs(0x4e61a9000/0x0/0x4ffc00000, data 0x1372f6af/0x139a3000, compress 0x0/0x0/0x0, omap 0x5dfec, meta 0x6052014), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.223875999s of 10.006100655s, submitted: 99
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170663936 unmapped: 42704896 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x56111b79d800 session 0x561119548a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x56111a4ec800 session 0x56111a52a8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 451 ms_handle_reset con 0x561118d18800 session 0x56111b51c000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 451 ms_handle_reset con 0x56111a6b2400 session 0x56111b51d6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 451 ms_handle_reset con 0x5611197c0000 session 0x56111990ea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170745856 unmapped: 42622976 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 451 ms_handle_reset con 0x56111a4ed800 session 0x56111b195a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170745856 unmapped: 42622976 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 451 handle_osd_map epochs [451,452], i have 451, src has [1,452]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 452 ms_handle_reset con 0x56111b79d800 session 0x56111b194c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170770432 unmapped: 42598400 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 452 ms_handle_reset con 0x561118d18800 session 0x56111990e700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 452 ms_handle_reset con 0x5611197c0000 session 0x56111b594540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231869 data_alloc: 218103808 data_used: 8398225
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170786816 unmapped: 42582016 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 453 ms_handle_reset con 0x56111a4ed800 session 0x56111b39dc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 454 heartbeat osd_stat(store_statfs(0x4e61ab000/0x0/0x4ffc00000, data 0x13728502/0x1399d000, compress 0x0/0x0/0x0, omap 0x5e7ee, meta 0x6051812), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 454 ms_handle_reset con 0x56111a6b2400 session 0x561118ca61c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 454 ms_handle_reset con 0x5611213e5000 session 0x56111a52aa80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170975232 unmapped: 42393600 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 454 handle_osd_map epochs [454,455], i have 454, src has [1,455]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 455 ms_handle_reset con 0x56111b4a6000 session 0x56111f8621c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 455 ms_handle_reset con 0x56111b7a4400 session 0x561119549340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 455 ms_handle_reset con 0x561118d18800 session 0x56111a5dae00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 455 heartbeat osd_stat(store_statfs(0x4e61a7000/0x0/0x4ffc00000, data 0x1372a074/0x1399f000, compress 0x0/0x0/0x0, omap 0x5ecae, meta 0x6051352), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171008000 unmapped: 42360832 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 455 ms_handle_reset con 0x5611197c0000 session 0x56111f862380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 455 ms_handle_reset con 0x56111a4ed800 session 0x56111ac5ba40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171008000 unmapped: 42360832 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 456 handle_osd_map epochs [455,456], i have 456, src has [1,456]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 456 ms_handle_reset con 0x561118cfd400 session 0x56111a5dbc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 456 ms_handle_reset con 0x56111a4e9000 session 0x56111b55e000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 456 ms_handle_reset con 0x561118d18800 session 0x561119548700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171057152 unmapped: 42311680 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 456 heartbeat osd_stat(store_statfs(0x4e61b3000/0x0/0x4ffc00000, data 0x137112c7/0x13987000, compress 0x0/0x0/0x0, omap 0x5f143, meta 0x6060ebd), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 456 ms_handle_reset con 0x5611197c0000 session 0x56111ae58000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 457 ms_handle_reset con 0x56111b7a4400 session 0x561119510c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4166254 data_alloc: 218103808 data_used: 4783976
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168443904 unmapped: 44924928 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 457 handle_osd_map epochs [457,458], i have 457, src has [1,458]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 458 ms_handle_reset con 0x56111b4a6000 session 0x561118b86c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.774546623s of 10.111104012s, submitted: 152
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 458 ms_handle_reset con 0x561118cfd400 session 0x561119510700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168452096 unmapped: 44916736 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168452096 unmapped: 44916736 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 458 heartbeat osd_stat(store_statfs(0x4e57ed000/0x0/0x4ffc00000, data 0x12f419f9/0x131ba000, compress 0x0/0x0/0x0, omap 0x5f8d9, meta 0x71f0727), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168452096 unmapped: 44916736 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 458 ms_handle_reset con 0x5611197c0000 session 0x56111990e380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 458 ms_handle_reset con 0x561118d18800 session 0x56111b595dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 458 handle_osd_map epochs [458,459], i have 458, src has [1,459]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168501248 unmapped: 53272576 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4452218 data_alloc: 218103808 data_used: 4788037
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168632320 unmapped: 53141504 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 173006848 unmapped: 48766976 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 459 heartbeat osd_stat(store_statfs(0x4ddfef000/0x0/0x4ffc00000, data 0x1a743494/0x1a9bd000, compress 0x0/0x0/0x0, omap 0x5fac6, meta 0x71f053a), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 173416448 unmapped: 48357376 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 173760512 unmapped: 48013312 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 460 heartbeat osd_stat(store_statfs(0x4d87ef000/0x0/0x4ffc00000, data 0x1ff43494/0x201bd000, compress 0x0/0x0/0x0, omap 0x5fac6, meta 0x71f053a), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 178241536 unmapped: 43532288 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5549172 data_alloc: 218103808 data_used: 4788622
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170033152 unmapped: 51740672 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 460 ms_handle_reset con 0x56111a4e9000 session 0x56111981ac40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 460 ms_handle_reset con 0x561118cfd400 session 0x56111b64fdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 460 ms_handle_reset con 0x56111a6b2400 session 0x56111e5ce1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170057728 unmapped: 51716096 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 460 handle_osd_map epochs [460,461], i have 460, src has [1,461]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.319390774s of 10.310555458s, submitted: 88
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 461 ms_handle_reset con 0x561118d18800 session 0x56111b7b0000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 461 ms_handle_reset con 0x5611197c0000 session 0x56111b64ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170098688 unmapped: 51675136 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 51666944 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 461 handle_osd_map epochs [461,462], i have 461, src has [1,462]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 462 ms_handle_reset con 0x56111b4a6000 session 0x561118b86fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 462 ms_handle_reset con 0x561118cfd400 session 0x56111b51b340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170123264 unmapped: 51650560 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 462 heartbeat osd_stat(store_statfs(0x4d4be3000/0x0/0x4ffc00000, data 0x23b48201/0x23dc5000, compress 0x0/0x0/0x0, omap 0x60350, meta 0x71efcb0), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5618304 data_alloc: 218103808 data_used: 4788622
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170131456 unmapped: 51642368 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170131456 unmapped: 51642368 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 462 ms_handle_reset con 0x561118d18800 session 0x56111f863a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 462 heartbeat osd_stat(store_statfs(0x4d4be4000/0x0/0x4ffc00000, data 0x23b47d0f/0x23dc4000, compress 0x0/0x0/0x0, omap 0x60350, meta 0x71efcb0), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170131456 unmapped: 51642368 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 462 ms_handle_reset con 0x5611197c0000 session 0x56111b7b0a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 462 ms_handle_reset con 0x56111a6b2400 session 0x561118d896c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170131456 unmapped: 51642368 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170139648 unmapped: 51634176 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 464 ms_handle_reset con 0x5611197c1800 session 0x56111b64e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5625386 data_alloc: 218103808 data_used: 4788622
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170164224 unmapped: 51609600 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 464 ms_handle_reset con 0x561118cfd400 session 0x56111f862000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170180608 unmapped: 51593216 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 464 heartbeat osd_stat(store_statfs(0x4d4bde000/0x0/0x4ffc00000, data 0x23b4b346/0x23dca000, compress 0x0/0x0/0x0, omap 0x6096a, meta 0x71ef696), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.685620308s of 10.746030807s, submitted: 36
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 464 ms_handle_reset con 0x561118d18800 session 0x56111b594540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 464 ms_handle_reset con 0x5611197c0000 session 0x56111b51c380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170213376 unmapped: 51560448 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 464 ms_handle_reset con 0x56111cfd0000 session 0x56111b51c000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 464 ms_handle_reset con 0x56111a6b2400 session 0x56111b51b6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 464 ms_handle_reset con 0x561118cfd400 session 0x56111b64f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170885120 unmapped: 50888704 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170885120 unmapped: 50888704 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 465 ms_handle_reset con 0x5611197c0000 session 0x56111b39cfc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5677570 data_alloc: 218103808 data_used: 4788622
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170917888 unmapped: 50855936 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 465 handle_osd_map epochs [465,466], i have 465, src has [1,466]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 466 ms_handle_reset con 0x56111cfd0000 session 0x561118d89dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 172040192 unmapped: 49733632 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 466 handle_osd_map epochs [466,467], i have 466, src has [1,467]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 467 ms_handle_reset con 0x5611213e4c00 session 0x561119548a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169680896 unmapped: 52092928 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 467 heartbeat osd_stat(store_statfs(0x4d4216000/0x0/0x4ffc00000, data 0x24511b50/0x24794000, compress 0x0/0x0/0x0, omap 0x610d3, meta 0x71eef2d), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 467 ms_handle_reset con 0x561118d18800 session 0x56111a5db6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169680896 unmapped: 52092928 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169746432 unmapped: 52027392 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5707447 data_alloc: 218103808 data_used: 4789820
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 heartbeat osd_stat(store_statfs(0x4d4212000/0x0/0x4ffc00000, data 0x24515187/0x2479a000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 51896320 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 51896320 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 51896320 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 51896320 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 51896320 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5707447 data_alloc: 218103808 data_used: 4789820
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 51896320 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 heartbeat osd_stat(store_statfs(0x4d4212000/0x0/0x4ffc00000, data 0x24515187/0x2479a000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 51896320 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 51896320 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.674652100s of 15.477749825s, submitted: 186
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x561118cfd400 session 0x56111e5cea80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x5611197c0000 session 0x56111aaacfc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x56111cfd0000 session 0x56111b39c8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 heartbeat osd_stat(store_statfs(0x4d4212000/0x0/0x4ffc00000, data 0x24515187/0x2479a000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x5611213e4c00 session 0x56111b64f500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x56111af97400 session 0x56111b51d6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169951232 unmapped: 51822592 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169951232 unmapped: 51822592 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5756096 data_alloc: 218103808 data_used: 4789820
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169951232 unmapped: 51822592 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x561118cfd400 session 0x56111b7b0c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 heartbeat osd_stat(store_statfs(0x4d3aaf000/0x0/0x4ffc00000, data 0x24c771e9/0x24efd000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169951232 unmapped: 51822592 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x5611197c0000 session 0x56111b64e700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169951232 unmapped: 51822592 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x56111af97400 session 0x561119549c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x56111cfd0000 session 0x56111b595a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169984000 unmapped: 51789824 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 heartbeat osd_stat(store_statfs(0x4d3aad000/0x0/0x4ffc00000, data 0x24c7721c/0x24eff000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 heartbeat osd_stat(store_statfs(0x4d3aad000/0x0/0x4ffc00000, data 0x24c7721c/0x24eff000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169992192 unmapped: 51781632 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5803759 data_alloc: 234881024 data_used: 12329532
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170663936 unmapped: 51109888 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170663936 unmapped: 51109888 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 heartbeat osd_stat(store_statfs(0x4d3aad000/0x0/0x4ffc00000, data 0x24c7721c/0x24eff000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170663936 unmapped: 51109888 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 heartbeat osd_stat(store_statfs(0x4d3aad000/0x0/0x4ffc00000, data 0x24c7721c/0x24eff000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.975400925s of 10.155759811s, submitted: 36
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x56111ac2a000 session 0x56111e157340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d3aad000/0x0/0x4ffc00000, data 0x24c7721c/0x24eff000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170803200 unmapped: 50970624 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170803200 unmapped: 50970624 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x561118cfd400 session 0x56111b7b1340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5810534 data_alloc: 234881024 data_used: 12333628
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170819584 unmapped: 50954240 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170819584 unmapped: 50954240 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170819584 unmapped: 50954240 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d3aa6000/0x0/0x4ffc00000, data 0x24c78e2b/0x24f04000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170827776 unmapped: 50946048 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170827776 unmapped: 50946048 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5885364 data_alloc: 234881024 data_used: 12386364
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 173604864 unmapped: 48168960 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x5611197c0000 session 0x561119833a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x56111af97400 session 0x561118b86000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 175882240 unmapped: 45891584 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x56111cfd0000 session 0x56111b39d880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x56111b7a1800 session 0x56111b64fc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x561118cfd400 session 0x56111a52b340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 174153728 unmapped: 47620096 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d2833000/0x0/0x4ffc00000, data 0x265d5e2b/0x26179000, compress 0x0/0x0/0x0, omap 0x613bb, meta 0x71eec45), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 174153728 unmapped: 47620096 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 174153728 unmapped: 47620096 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6010721 data_alloc: 234881024 data_used: 13717564
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 174227456 unmapped: 47546368 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 174227456 unmapped: 47546368 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.767096519s of 14.260155678s, submitted: 101
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x5611197c0000 session 0x56111ae58fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d27f9000/0x0/0x4ffc00000, data 0x2660fe2b/0x261b3000, compress 0x0/0x0/0x0, omap 0x61093, meta 0x71eef6d), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 174235648 unmapped: 47538176 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47374336 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182763520 unmapped: 39010304 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6074030 data_alloc: 234881024 data_used: 23831812
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182763520 unmapped: 39010304 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d27d9000/0x0/0x4ffc00000, data 0x2662fe2b/0x261d3000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x71eed63), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182796288 unmapped: 38977536 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d27d9000/0x0/0x4ffc00000, data 0x2662fe2b/0x261d3000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x71eed63), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182419456 unmapped: 39354368 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182419456 unmapped: 39354368 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182542336 unmapped: 39231488 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d27cf000/0x0/0x4ffc00000, data 0x26639e2b/0x261dd000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x71eed63), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6074430 data_alloc: 234881024 data_used: 23831812
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182542336 unmapped: 39231488 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182542336 unmapped: 39231488 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d27cf000/0x0/0x4ffc00000, data 0x26639e2b/0x261dd000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x71eed63), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182575104 unmapped: 39198720 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.970912933s of 10.996688843s, submitted: 11
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d27cf000/0x0/0x4ffc00000, data 0x26639e2b/0x261dd000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x71eed63), peers [0,1] op hist [2,0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 32940032 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 188956672 unmapped: 32817152 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6150228 data_alloc: 234881024 data_used: 24642308
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189841408 unmapped: 31932416 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189841408 unmapped: 31932416 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189841408 unmapped: 31932416 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x5611213e4c00 session 0x56111b64e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x5611197df000 session 0x56111f863500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x56111a4ed800 session 0x56111e5ce000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d0acb000/0x0/0x4ffc00000, data 0x2719de2b/0x26d41000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x838ed63), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189865984 unmapped: 31907840 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x56111a4ed800 session 0x56111b1941c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 186572800 unmapped: 35201024 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d11ae000/0x0/0x4ffc00000, data 0x26a3bd96/0x265dc000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x838ed63), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6060219 data_alloc: 234881024 data_used: 17334020
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 186572800 unmapped: 35201024 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d11ae000/0x0/0x4ffc00000, data 0x26a3bd96/0x265dc000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x838ed63), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d11ae000/0x0/0x4ffc00000, data 0x26a3bd96/0x265dc000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x838ed63), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 35061760 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x5611197c0000 session 0x56111ab3ec40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 469 handle_osd_map epochs [469,470], i have 470, src has [1,470]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 192995328 unmapped: 28778496 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 470 ms_handle_reset con 0x5611197df000 session 0x561118ca6a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 470 handle_osd_map epochs [470,471], i have 470, src has [1,471]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.291953087s of 10.047514915s, submitted: 217
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 471 ms_handle_reset con 0x5611213e4c00 session 0x561118b86380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 471 ms_handle_reset con 0x561118cfd400 session 0x561119511dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189669376 unmapped: 32104448 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189415424 unmapped: 32358400 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 471 handle_osd_map epochs [471,472], i have 471, src has [1,472]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 472 ms_handle_reset con 0x5611197c0000 session 0x561118ca6e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6152533 data_alloc: 234881024 data_used: 17342095
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189415424 unmapped: 32358400 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189431808 unmapped: 32342016 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 472 heartbeat osd_stat(store_statfs(0x4d091e000/0x0/0x4ffc00000, data 0x276220be/0x26eec000, compress 0x0/0x0/0x0, omap 0x61ddb, meta 0x838e225), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 473 heartbeat osd_stat(store_statfs(0x4d0920000/0x0/0x4ffc00000, data 0x276220be/0x26eec000, compress 0x0/0x0/0x0, omap 0x61ddb, meta 0x838e225), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 33267712 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 473 heartbeat osd_stat(store_statfs(0x4d0920000/0x0/0x4ffc00000, data 0x276220be/0x26eec000, compress 0x0/0x0/0x0, omap 0x61ddb, meta 0x838e225), peers [0,1] op hist [0,0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 473 ms_handle_reset con 0x56111a4ed800 session 0x56111b39ddc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 188571648 unmapped: 33202176 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 473 ms_handle_reset con 0x5611197df000 session 0x56111e5ce8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 188571648 unmapped: 33202176 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 473 ms_handle_reset con 0x56111af97400 session 0x56111f862c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 473 ms_handle_reset con 0x56111cfd0000 session 0x56111b55f880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 473 ms_handle_reset con 0x5611213e4c00 session 0x56111e5ce700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5908357 data_alloc: 218103808 data_used: 7529517
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 473 ms_handle_reset con 0x5611197c0000 session 0x561118d88000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 181927936 unmapped: 39845888 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 181927936 unmapped: 39845888 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 473 handle_osd_map epochs [474,474], i have 473, src has [1,474]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 474 ms_handle_reset con 0x5611197df000 session 0x5611187a7500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 181927936 unmapped: 39845888 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.231451988s of 10.000329971s, submitted: 103
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 474 ms_handle_reset con 0x56111af97400 session 0x56111b64fa40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 474 ms_handle_reset con 0x56111a4ed800 session 0x561118d88380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 474 heartbeat osd_stat(store_statfs(0x4d2a89000/0x0/0x4ffc00000, data 0x251d9824/0x24d81000, compress 0x0/0x0/0x0, omap 0x6189d, meta 0x838e763), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 181927936 unmapped: 39845888 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 474 ms_handle_reset con 0x56111af97400 session 0x56111b39c700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 181927936 unmapped: 39845888 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5880052 data_alloc: 218103808 data_used: 6124688
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 211329024 unmapped: 23044096 heap: 234373120 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 236560384 unmapped: 27205632 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 474 heartbeat osd_stat(store_statfs(0x4cf68b000/0x0/0x4ffc00000, data 0x285d9876/0x28181000, compress 0x0/0x0/0x0, omap 0x61929, meta 0x838e6d7), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 186286080 unmapped: 77479936 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 474 heartbeat osd_stat(store_statfs(0x4cf68b000/0x0/0x4ffc00000, data 0x285d9876/0x28181000, compress 0x0/0x0/0x0, omap 0x61929, meta 0x838e6d7), peers [0,1] op hist [0,0,1,0,1,0,0,3])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 474 ms_handle_reset con 0x5611213e4c00 session 0x56111aaac000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183361536 unmapped: 80404480 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 474 ms_handle_reset con 0x56111984d000 session 0x56111b195dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 474 handle_osd_map epochs [474,475], i have 475, src has [1,475]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 184655872 unmapped: 79110144 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6693238 data_alloc: 218103808 data_used: 6125158
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189194240 unmapped: 74571776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 475 heartbeat osd_stat(store_statfs(0x4c9e87000/0x0/0x4ffc00000, data 0x2d9db292/0x2d583000, compress 0x0/0x0/0x0, omap 0x61cc5, meta 0x838e33b), peers [0,1] op hist [0,0,0,2,0,0,0,0,2])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189497344 unmapped: 74268672 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189743104 unmapped: 74022912 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.498979330s of 10.035424232s, submitted: 270
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 475 heartbeat osd_stat(store_statfs(0x4c5289000/0x0/0x4ffc00000, data 0x329db292/0x32583000, compress 0x0/0x0/0x0, omap 0x61cc5, meta 0x838e33b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 186875904 unmapped: 76890112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191512576 unmapped: 72253440 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x5611197df000 session 0x561118d89a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x5611197be800 session 0x56111ae58fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x5611197c0000 session 0x56111990f180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x5611197df000 session 0x56111b39da40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x56111984d000 session 0x56111b51c700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7576446 data_alloc: 218103808 data_used: 6125158
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 187465728 unmapped: 76300288 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 475 heartbeat osd_stat(store_statfs(0x4bee88000/0x0/0x4ffc00000, data 0x38ddb77c/0x38984000, compress 0x0/0x0/0x0, omap 0x61cc5, meta 0x838e33b), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x56111af97400 session 0x56111b7b0700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x5611213e4c00 session 0x56111b51ce00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 187613184 unmapped: 76152832 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x5611197c0000 session 0x56111a52a1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x5611197df000 session 0x56111ae58c40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x56111984d000 session 0x56111a52a8c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 187621376 unmapped: 76144640 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 475 handle_osd_map epochs [475,476], i have 475, src has [1,476]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 476 ms_handle_reset con 0x56111af97400 session 0x56111990e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 476 heartbeat osd_stat(store_statfs(0x4d1289000/0x0/0x4ffc00000, data 0x251db220/0x24d81000, compress 0x0/0x0/0x0, omap 0x61d51, meta 0x838e2af), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 187637760 unmapped: 76128256 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 476 ms_handle_reset con 0x56111984c400 session 0x56111e5cf500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 476 ms_handle_reset con 0x5611197c0000 session 0x5611187a6000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 187637760 unmapped: 76128256 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5972731 data_alloc: 218103808 data_used: 6129156
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 476 heartbeat osd_stat(store_statfs(0x4d2a89000/0x0/0x4ffc00000, data 0x251dce00/0x24d83000, compress 0x0/0x0/0x0, omap 0x61e23, meta 0x838e1dd), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 187637760 unmapped: 76128256 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 476 handle_osd_map epochs [476,477], i have 477, src has [1,477]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 477 heartbeat osd_stat(store_statfs(0x4d2a89000/0x0/0x4ffc00000, data 0x251dce00/0x24d83000, compress 0x0/0x0/0x0, omap 0x61e23, meta 0x838e1dd), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 186859520 unmapped: 76906496 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 477 handle_osd_map epochs [478,478], i have 477, src has [1,478]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 478 ms_handle_reset con 0x5611197df000 session 0x561118ca61c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 478 ms_handle_reset con 0x56111984d000 session 0x56111b7b01c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 184631296 unmapped: 79134720 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 478 heartbeat osd_stat(store_statfs(0x4d3a16000/0x0/0x4ffc00000, data 0x23b63600/0x23df4000, compress 0x0/0x0/0x0, omap 0x62ca7, meta 0x838d359), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 184631296 unmapped: 79134720 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 478 handle_osd_map epochs [478,479], i have 478, src has [1,479]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.502179146s of 11.026341438s, submitted: 248
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 184631296 unmapped: 79134720 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 480 ms_handle_reset con 0x56111af97400 session 0x56111a52bdc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4459256 data_alloc: 218103808 data_used: 4740612
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 480 heartbeat osd_stat(store_statfs(0x4d5e10000/0x0/0x4ffc00000, data 0x21366e20/0x215fa000, compress 0x0/0x0/0x0, omap 0x62ca7, meta 0x838d359), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 184115200 unmapped: 79650816 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 480 ms_handle_reset con 0x5611197c0800 session 0x56111e5cf6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 480 heartbeat osd_stat(store_statfs(0x4d5e10000/0x0/0x4ffc00000, data 0x21366e20/0x215fa000, compress 0x0/0x0/0x0, omap 0x62ca7, meta 0x838d359), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 480 handle_osd_map epochs [481,481], i have 481, src has [1,481]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 481 ms_handle_reset con 0x5611197c0000 session 0x56111b7b0540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f660c000/0x0/0x4ffc00000, data 0xf6a4aa/0x11fe000, compress 0x0/0x0/0x0, omap 0x630b7, meta 0x838cf49), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3002299 data_alloc: 218103808 data_used: 4742436
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f660c000/0x0/0x4ffc00000, data 0xf6a4aa/0x11fe000, compress 0x0/0x0/0x0, omap 0x630b7, meta 0x838cf49), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 482 handle_osd_map epochs [482,483], i have 482, src has [1,483]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 483 ms_handle_reset con 0x5611197df000 session 0x56111b7b1880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.828944206s of 11.260063171s, submitted: 204
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006853 data_alloc: 218103808 data_used: 4742436
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182255616 unmapped: 81510400 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 484 ms_handle_reset con 0x56111984d000 session 0x56111ab0afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f6609000/0x0/0x4ffc00000, data 0xf6bf49/0x1201000, compress 0x0/0x0/0x0, omap 0x62827, meta 0x838d7d9), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 484 ms_handle_reset con 0x56111af97400 session 0x56111aaac1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182272000 unmapped: 81494016 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182272000 unmapped: 81494016 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 484 ms_handle_reset con 0x5611197bec00 session 0x56111b7b1c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 484 ms_handle_reset con 0x5611197c0000 session 0x56111b39c540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f6604000/0x0/0x4ffc00000, data 0xf6dae5/0x1204000, compress 0x0/0x0/0x0, omap 0x624ff, meta 0x838db01), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 190660608 unmapped: 73105408 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f6604000/0x0/0x4ffc00000, data 0xf6dae5/0x1204000, compress 0x0/0x0/0x0, omap 0x624ff, meta 0x838db01), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 484 ms_handle_reset con 0x5611197df000 session 0x56111ac5afc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 484 ms_handle_reset con 0x56111984d000 session 0x56111f862fc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182288384 unmapped: 81477632 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 485 ms_handle_reset con 0x56111af97400 session 0x561118d88540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3208904 data_alloc: 218103808 data_used: 4742436
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182296576 unmapped: 81469440 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 485 ms_handle_reset con 0x56111ad52c00 session 0x56111ae59c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182296576 unmapped: 81469440 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 485 ms_handle_reset con 0x5611197c0000 session 0x56111b39d340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182321152 unmapped: 81444864 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 485 handle_osd_map epochs [485,486], i have 485, src has [1,486]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 486 ms_handle_reset con 0x5611197df000 session 0x56111f863c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 486 ms_handle_reset con 0x56111984d000 session 0x5611195496c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 486 heartbeat osd_stat(store_statfs(0x4f4200000/0x0/0x4ffc00000, data 0x3371271/0x360a000, compress 0x0/0x0/0x0, omap 0x611a3, meta 0x838ee5d), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 486 ms_handle_reset con 0x56111af97400 session 0x56111b51c540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182329344 unmapped: 81436672 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182329344 unmapped: 81436672 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.667217255s of 10.084074020s, submitted: 81
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 486 ms_handle_reset con 0x5611197df400 session 0x56111ae59dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3211305 data_alloc: 218103808 data_used: 4742436
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182337536 unmapped: 81428480 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 486 handle_osd_map epochs [487,487], i have 486, src has [1,487]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 487 ms_handle_reset con 0x5611197c0000 session 0x561119511500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182337536 unmapped: 81428480 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 487 ms_handle_reset con 0x5611197df000 session 0x56111b51a700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 487 ms_handle_reset con 0x56111984d000 session 0x561118b876c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 487 ms_handle_reset con 0x56111af97400 session 0x56111990e380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 487 heartbeat osd_stat(store_statfs(0x4f41fd000/0x0/0x4ffc00000, data 0x3372e61/0x360d000, compress 0x0/0x0/0x0, omap 0x5feb3, meta 0x839014d), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182345728 unmapped: 81420288 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 487 ms_handle_reset con 0x56111b7a2c00 session 0x56111aaac1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182345728 unmapped: 81420288 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 487 handle_osd_map epochs [487,488], i have 488, src has [1,488]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 488 ms_handle_reset con 0x5611197c0000 session 0x56111e156e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 488 ms_handle_reset con 0x5611197df000 session 0x56111e157180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182673408 unmapped: 81092608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3222400 data_alloc: 218103808 data_used: 4743049
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182673408 unmapped: 81092608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f41d5000/0x0/0x4ffc00000, data 0x339891f/0x3635000, compress 0x0/0x0/0x0, omap 0x65467, meta 0x838ab99), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 488 ms_handle_reset con 0x56111af3d000 session 0x56111b7b0540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182673408 unmapped: 81092608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182673408 unmapped: 81092608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183164928 unmapped: 80601088 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f41d4000/0x0/0x4ffc00000, data 0x3398981/0x3636000, compress 0x0/0x0/0x0, omap 0x65467, meta 0x838ab99), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 489 ms_handle_reset con 0x56111b193800 session 0x56111b51b180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183173120 unmapped: 80592896 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 490 ms_handle_reset con 0x56111b109000 session 0x56111b51b500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291466 data_alloc: 234881024 data_used: 13779755
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183205888 unmapped: 80560128 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 490 handle_osd_map epochs [491,491], i have 490, src has [1,491]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.358857155s of 10.488816261s, submitted: 78
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 491 ms_handle_reset con 0x561119817000 session 0x56111990e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 491 ms_handle_reset con 0x5611197c0000 session 0x56111990fc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 491 ms_handle_reset con 0x56111b79bc00 session 0x561119832e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183238656 unmapped: 80527360 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 491 handle_osd_map epochs [492,492], i have 491, src has [1,492]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 492 ms_handle_reset con 0x5611197df000 session 0x561119548e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 492 ms_handle_reset con 0x56111af3d000 session 0x56111990f6c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 492 heartbeat osd_stat(store_statfs(0x4f41c0000/0x0/0x4ffc00000, data 0x339f78a/0x3643000, compress 0x0/0x0/0x0, omap 0x65c91, meta 0x838a36f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 80494592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 80494592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 492 handle_osd_map epochs [493,493], i have 492, src has [1,493]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 492 handle_osd_map epochs [492,493], i have 493, src has [1,493]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 493 ms_handle_reset con 0x56111af3d000 session 0x56111b51d880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 493 ms_handle_reset con 0x5611197c0000 session 0x561119511880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183296000 unmapped: 80470016 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296870 data_alloc: 234881024 data_used: 13780741
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183296000 unmapped: 80470016 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f41c6000/0x0/0x4ffc00000, data 0x33a12d2/0x3644000, compress 0x0/0x0/0x0, omap 0x65c91, meta 0x838a36f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183296000 unmapped: 80470016 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 69648384 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196657152 unmapped: 67108864 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191553536 unmapped: 72212480 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363414 data_alloc: 234881024 data_used: 13851397
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3800000/0x0/0x4ffc00000, data 0x3d392d2/0x3fdc000, compress 0x0/0x0/0x0, omap 0x65c91, meta 0x838a36f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191553536 unmapped: 72212480 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191553536 unmapped: 72212480 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3800000/0x0/0x4ffc00000, data 0x3d392d2/0x3fdc000, compress 0x0/0x0/0x0, omap 0x65c91, meta 0x838a36f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.044831276s of 11.408596992s, submitted: 134
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 493 ms_handle_reset con 0x5611197df000 session 0x5611187a6a80
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191578112 unmapped: 72187904 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 493 handle_osd_map epochs [493,494], i have 493, src has [1,494]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f3800000/0x0/0x4ffc00000, data 0x3d392d2/0x3fdc000, compress 0x0/0x0/0x0, omap 0x65c91, meta 0x838a36f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 494 ms_handle_reset con 0x56111b79bc00 session 0x561118d88380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 494 handle_osd_map epochs [495,495], i have 494, src has [1,495]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191578112 unmapped: 72187904 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 495 ms_handle_reset con 0x561119817000 session 0x56111b594e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191578112 unmapped: 72187904 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 495 handle_osd_map epochs [495,496], i have 495, src has [1,496]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 496 ms_handle_reset con 0x561119817000 session 0x56111b595dc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370888 data_alloc: 234881024 data_used: 13851397
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191660032 unmapped: 72105984 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 496 handle_osd_map epochs [497,497], i have 496, src has [1,497]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191668224 unmapped: 72097792 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f3825000/0x0/0x4ffc00000, data 0x3d3e6a2/0x3fe5000, compress 0x0/0x0/0x0, omap 0x661fd, meta 0x8389e03), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191791104 unmapped: 71974912 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 497 ms_handle_reset con 0x56111984d000 session 0x5611187a61c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 497 ms_handle_reset con 0x56111af97400 session 0x56111b7b1880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 497 ms_handle_reset con 0x5611197c0000 session 0x56111a52a1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191127552 unmapped: 72638464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f3848000/0x0/0x4ffc00000, data 0x3d1c28b/0x3fc3000, compress 0x0/0x0/0x0, omap 0x66407, meta 0x8389bf9), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191127552 unmapped: 72638464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363146 data_alloc: 234881024 data_used: 13742853
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191127552 unmapped: 72638464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191127552 unmapped: 72638464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f3848000/0x0/0x4ffc00000, data 0x3d1c28b/0x3fc3000, compress 0x0/0x0/0x0, omap 0x66407, meta 0x8389bf9), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.998230934s of 10.173355103s, submitted: 115
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191135744 unmapped: 72630272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 497 handle_osd_map epochs [498,498], i have 497, src has [1,498]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 498 ms_handle_reset con 0x5611197df000 session 0x561118b87a40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 498 handle_osd_map epochs [498,499], i have 498, src has [1,499]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191168512 unmapped: 72597504 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 499 heartbeat osd_stat(store_statfs(0x4f3849000/0x0/0x4ffc00000, data 0x3d1c28b/0x3fc3000, compress 0x0/0x0/0x0, omap 0x66407, meta 0x8389bf9), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 499 ms_handle_reset con 0x5611197c0000 session 0x5611195496c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 499 ms_handle_reset con 0x561119817000 session 0x56111990e540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191168512 unmapped: 72597504 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 499 heartbeat osd_stat(store_statfs(0x4f3840000/0x0/0x4ffc00000, data 0x3d1f998/0x3fcc000, compress 0x0/0x0/0x0, omap 0x66ce0, meta 0x8389320), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 499 handle_osd_map epochs [499,500], i have 499, src has [1,500]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 500 ms_handle_reset con 0x56111af97400 session 0x56111b39c1c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 500 heartbeat osd_stat(store_statfs(0x4f3840000/0x0/0x4ffc00000, data 0x3d1f998/0x3fcc000, compress 0x0/0x0/0x0, omap 0x66ce0, meta 0x8389320), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379076 data_alloc: 234881024 data_used: 13747499
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191176704 unmapped: 72589312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 500 handle_osd_map epochs [501,501], i have 500, src has [1,501]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 501 ms_handle_reset con 0x56111984d000 session 0x56111b594e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 501 ms_handle_reset con 0x56111af3d000 session 0x56111f862700
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191176704 unmapped: 72589312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191184896 unmapped: 72581120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 501 heartbeat osd_stat(store_statfs(0x4f3835000/0x0/0x4ffc00000, data 0x3d235c4/0x3fd3000, compress 0x0/0x0/0x0, omap 0x670bc, meta 0x8388f44), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191184896 unmapped: 72581120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 501 ms_handle_reset con 0x5611197c0000 session 0x561119832380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191184896 unmapped: 72581120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 501 handle_osd_map epochs [502,502], i have 501, src has [1,502]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 502 ms_handle_reset con 0x561119817000 session 0x56111b51a380
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387813 data_alloc: 234881024 data_used: 13748100
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191201280 unmapped: 72564736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 502 ms_handle_reset con 0x56111984d000 session 0x56111f862e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191201280 unmapped: 72564736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 502 handle_osd_map epochs [503,503], i have 502, src has [1,503]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f3833000/0x0/0x4ffc00000, data 0x3d25170/0x3fd7000, compress 0x0/0x0/0x0, omap 0x6714a, meta 0x8388eb6), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191201280 unmapped: 72564736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 503 handle_osd_map epochs [504,504], i have 503, src has [1,504]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.963652611s of 11.022150993s, submitted: 45
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191217664 unmapped: 72548352 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 504 ms_handle_reset con 0x56111af97400 session 0x56111f863880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191225856 unmapped: 72540160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 504 heartbeat osd_stat(store_statfs(0x4f382d000/0x0/0x4ffc00000, data 0x3d288fc/0x3fdd000, compress 0x0/0x0/0x0, omap 0x66a7a, meta 0x8389586), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3405418 data_alloc: 234881024 data_used: 13748799
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191225856 unmapped: 72540160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191225856 unmapped: 72540160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 504 ms_handle_reset con 0x56111b193800 session 0x56111b7b1c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191250432 unmapped: 72515584 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 504 handle_osd_map epochs [504,505], i have 504, src has [1,505]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 505 ms_handle_reset con 0x5611197c0000 session 0x56111b594540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191275008 unmapped: 72491008 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 505 ms_handle_reset con 0x56111b193800 session 0x56111ab3ee00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 505 handle_osd_map epochs [506,506], i have 505, src has [1,506]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 506 ms_handle_reset con 0x561119817000 session 0x56111a52a000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 506 heartbeat osd_stat(store_statfs(0x4f382a000/0x0/0x4ffc00000, data 0x3d2a4b4/0x3fe0000, compress 0x0/0x0/0x0, omap 0x66b08, meta 0x83894f8), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191275008 unmapped: 72491008 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412970 data_alloc: 234881024 data_used: 14301320
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191275008 unmapped: 72491008 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 506 handle_osd_map epochs [507,507], i have 506, src has [1,507]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191283200 unmapped: 72482816 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 507 handle_osd_map epochs [507,508], i have 507, src has [1,508]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 508 ms_handle_reset con 0x56111984d000 session 0x56111b64f180
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 508 ms_handle_reset con 0x56111af97400 session 0x56111b64fc00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 192069632 unmapped: 71696384 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f3821000/0x0/0x4ffc00000, data 0x3d2f868/0x3fe9000, compress 0x0/0x0/0x0, omap 0x66efe, meta 0x8389102), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f3821000/0x0/0x4ffc00000, data 0x3d2f868/0x3fe9000, compress 0x0/0x0/0x0, omap 0x66efe, meta 0x8389102), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 508 handle_osd_map epochs [509,509], i have 508, src has [1,509]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 508 handle_osd_map epochs [508,509], i have 509, src has [1,509]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.261071205s of 10.357838631s, submitted: 43
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 192086016 unmapped: 71680000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 509 ms_handle_reset con 0x5611197c0000 session 0x56111b7b0e00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 192086016 unmapped: 71680000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420553 data_alloc: 234881024 data_used: 14301320
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 509 handle_osd_map epochs [509,510], i have 509, src has [1,510]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 510 heartbeat osd_stat(store_statfs(0x4f381d000/0x0/0x4ffc00000, data 0x3d3149c/0x3feb000, compress 0x0/0x0/0x0, omap 0x673bf, meta 0x8388c41), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 192094208 unmapped: 71671808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 510 ms_handle_reset con 0x561119817000 session 0x56111b51da40
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 510 ms_handle_reset con 0x56111984d000 session 0x56111b51b880
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 192102400 unmapped: 71663616 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 510 handle_osd_map epochs [511,511], i have 510, src has [1,511]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 194969600 unmapped: 68796416 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 195166208 unmapped: 68599808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 511 handle_osd_map epochs [512,512], i have 511, src has [1,512]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 68542464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471954 data_alloc: 234881024 data_used: 19795778
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 68542464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f3418000/0x0/0x4ffc00000, data 0x4136277/0x43f2000, compress 0x0/0x0/0x0, omap 0x67b2b, meta 0x83884d5), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 195231744 unmapped: 68534272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 195231744 unmapped: 68534272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 512 handle_osd_map epochs [513,513], i have 512, src has [1,513]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.824200630s of 10.004947662s, submitted: 92
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196263936 unmapped: 67502080 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3015000/0x0/0x4ffc00000, data 0x4537d2e/0x47f5000, compress 0x0/0x0/0x0, omap 0x67bd9, meta 0x8388427), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196263936 unmapped: 67502080 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3015000/0x0/0x4ffc00000, data 0x4537d2e/0x47f5000, compress 0x0/0x0/0x0, omap 0x67bd9, meta 0x8388427), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497044 data_alloc: 234881024 data_used: 19795778
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196272128 unmapped: 67493888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3015000/0x0/0x4ffc00000, data 0x4537d2e/0x47f5000, compress 0x0/0x0/0x0, omap 0x67bd9, meta 0x8388427), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196272128 unmapped: 67493888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196272128 unmapped: 67493888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196272128 unmapped: 67493888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196272128 unmapped: 67493888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497188 data_alloc: 234881024 data_used: 19795778
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196272128 unmapped: 67493888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3015000/0x0/0x4ffc00000, data 0x4537d2e/0x47f5000, compress 0x0/0x0/0x0, omap 0x67bd9, meta 0x8388427), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196272128 unmapped: 67493888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196272128 unmapped: 67493888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3015000/0x0/0x4ffc00000, data 0x4537d2e/0x47f5000, compress 0x0/0x0/0x0, omap 0x67bd9, meta 0x8388427), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200187904 unmapped: 63578112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200187904 unmapped: 63578112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532516 data_alloc: 234881024 data_used: 23859010
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 63569920 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f2b10000/0x0/0x4ffc00000, data 0x4a3ed2e/0x4cfc000, compress 0x0/0x0/0x0, omap 0x67bd9, meta 0x8388427), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 63569920 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 63569920 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f2b10000/0x0/0x4ffc00000, data 0x4a3ed2e/0x4cfc000, compress 0x0/0x0/0x0, omap 0x67bd9, meta 0x8388427), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532516 data_alloc: 234881024 data_used: 23859010
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f2b10000/0x0/0x4ffc00000, data 0x4a3ed2e/0x4cfc000, compress 0x0/0x0/0x0, omap 0x67bd9, meta 0x8388427), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200409088 unmapped: 63356928 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.329088211s of 20.369134903s, submitted: 28
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 ms_handle_reset con 0x56111b79bc00 session 0x56111b51ddc0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 ms_handle_reset con 0x56111b109000 session 0x5611187a61c0
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 ms_handle_reset con 0x5611197c0000 session 0x561119511340
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f2b10000/0x0/0x4ffc00000, data 0x4a3ed2e/0x4cfc000, compress 0x0/0x0/0x0, omap 0x671a1, meta 0x8388e5f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536172 data_alloc: 234881024 data_used: 25030466
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 ms_handle_reset con 0x561119817000 session 0x561118a3d500
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 ms_handle_reset con 0x56111984d000 session 0x56111e156540
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3818000/0x0/0x4ffc00000, data 0x3d37d1e/0x3ff4000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455560 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 ms_handle_reset con 0x56111b79bc00 session 0x56111e156000
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 ms_handle_reset con 0x56111b193800 session 0x5611187a7c00
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200187904 unmapped: 63578112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200187904 unmapped: 63578112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200187904 unmapped: 63578112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200187904 unmapped: 63578112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200187904 unmapped: 63578112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200187904 unmapped: 63578112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 63569920 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 63569920 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 63569920 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200212480 unmapped: 63553536 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200212480 unmapped: 63553536 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200212480 unmapped: 63553536 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: mgrc ms_handle_reset ms_handle_reset con 0x56111aedf400
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2811058765
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2811058765,v1:192.168.122.100:6801/2811058765]
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: mgrc handle_mgr_configure stats_period=5
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200417280 unmapped: 63348736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200417280 unmapped: 63348736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200417280 unmapped: 63348736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200417280 unmapped: 63348736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200417280 unmapped: 63348736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200433664 unmapped: 63332352 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200433664 unmapped: 63332352 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 63324160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 63324160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 63324160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 63324160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 63324160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 63324160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 63324160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 63324160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200450048 unmapped: 63315968 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200450048 unmapped: 63315968 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200458240 unmapped: 63307776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200458240 unmapped: 63307776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200458240 unmapped: 63307776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200458240 unmapped: 63307776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200458240 unmapped: 63307776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200458240 unmapped: 63307776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: do_command 'config diff' '{prefix=config diff}'
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: do_command 'config show' '{prefix=config show}'
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200728576 unmapped: 63037440 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200876032 unmapped: 62889984 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:51 np0005603435 ceph-osd[87920]: do_command 'log dump' '{prefix=log dump}'
Jan 31 00:04:52 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19156 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:04:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 31 00:04:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2055313949' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 31 00:04:52 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19158 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:04:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.zvcgqa", "name": "rgw_frontends"} v 0)
Jan 31 00:04:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.zvcgqa", "name": "rgw_frontends"} : dispatch
Jan 31 00:04:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 31 00:04:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4203488097' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 00:04:52 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19162 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:04:52 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.zvcgqa", "name": "rgw_frontends"} v 0)
Jan 31 00:04:52 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.zvcgqa", "name": "rgw_frontends"} : dispatch
Jan 31 00:04:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 31 00:04:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4171739620' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 00:04:53 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19166 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:04:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 31 00:04:53 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/706580538' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 00:04:53 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:53 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19170 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:04:53 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:04:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 00:04:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3642147071' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 00:04:54 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19174 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:04:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 31 00:04:54 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/376492193' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 00:04:54 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19178 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 00:04:55 np0005603435 nova_compute[239938]: 2026-01-31 05:04:55.025 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 00:04:55 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 31 00:04:55 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2282365232' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 31 00:04:55 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19182 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 00:04:55 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19186 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 00:04:55 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:04:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:04:55.928 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 00:04:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:04:55.928 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 00:04:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:04:55.928 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91054080 unmapped: 19046400 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 157 ms_handle_reset con 0x55b81e0a5c00 session 0x55b81d230a80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.696501732s of 10.001713753s, submitted: 38
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 157 ms_handle_reset con 0x55b81b7d2800 session 0x55b81ac2b880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228316 data_alloc: 218103808 data_used: 13575
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 157 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0x1163cc8/0x1256000, compress 0x0/0x0/0x0, omap 0x1c68c, meta 0x2bb3974), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228316 data_alloc: 218103808 data_used: 13575
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 157 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0x1163cc8/0x1256000, compress 0x0/0x0/0x0, omap 0x1c68c, meta 0x2bb3974), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 157 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0x1163cc8/0x1256000, compress 0x0/0x0/0x0, omap 0x1c68c, meta 0x2bb3974), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228316 data_alloc: 218103808 data_used: 13575
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 157 heartbeat osd_stat(store_statfs(0x4fbdd6000/0x0/0x4ffc00000, data 0x1163cc8/0x1256000, compress 0x0/0x0/0x0, omap 0x1c68c, meta 0x2bb3974), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228316 data_alloc: 218103808 data_used: 13575
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.386837006s of 18.514137268s, submitted: 10
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 157 ms_handle_reset con 0x55b81ceb5800 session 0x55b81db07a40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90890240 unmapped: 19210240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fbdd5000/0x0/0x4ffc00000, data 0x1163d2a/0x1257000, compress 0x0/0x0/0x0, omap 0x1c68c, meta 0x2bb3974), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90898432 unmapped: 19202048 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 158 ms_handle_reset con 0x55b81de8b000 session 0x55b81ced48c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90898432 unmapped: 19202048 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233515 data_alloc: 218103808 data_used: 13673
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90898432 unmapped: 19202048 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 158 heartbeat osd_stat(store_statfs(0x4fbdd0000/0x0/0x4ffc00000, data 0x11658c6/0x125a000, compress 0x0/0x0/0x0, omap 0x1c911, meta 0x2bb36ef), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 158 ms_handle_reset con 0x55b81de8b800 session 0x55b81b887180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90898432 unmapped: 19202048 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 159 ms_handle_reset con 0x55b81d42c800 session 0x55b81df05880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90898432 unmapped: 19202048 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 159 heartbeat osd_stat(store_statfs(0x4fbdcd000/0x0/0x4ffc00000, data 0x11674b6/0x125d000, compress 0x0/0x0/0x0, omap 0x1cb98, meta 0x2bb3468), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 90898432 unmapped: 19202048 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 159 ms_handle_reset con 0x55b81b7d2800 session 0x55b81db06700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 18153472 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 159 heartbeat osd_stat(store_statfs(0x4fbdcf000/0x0/0x4ffc00000, data 0x11674b6/0x125d000, compress 0x0/0x0/0x0, omap 0x1cb98, meta 0x2bb3468), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 159 heartbeat osd_stat(store_statfs(0x4fbdcf000/0x0/0x4ffc00000, data 0x11674b6/0x125d000, compress 0x0/0x0/0x0, omap 0x1cb98, meta 0x2bb3468), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234829 data_alloc: 218103808 data_used: 13575
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 159 ms_handle_reset con 0x55b81ceb5800 session 0x55b81df2ddc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 18153472 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 18153472 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 159 heartbeat osd_stat(store_statfs(0x4fbdd0000/0x0/0x4ffc00000, data 0x1167454/0x125c000, compress 0x0/0x0/0x0, omap 0x1cb98, meta 0x2bb3468), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 18153472 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 18153472 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 18153472 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234829 data_alloc: 218103808 data_used: 13575
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 18153472 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.912381172s of 15.010829926s, submitted: 29
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 18153472 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 18153472 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 160 ms_handle_reset con 0x55b81de8b000 session 0x55b81dc3e380
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0x1168ed3/0x125f000, compress 0x0/0x0/0x0, omap 0x1ce21, meta 0x2bb31df), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 18153472 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 160 ms_handle_reset con 0x55b81de8b800 session 0x55b81df04540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 160 ms_handle_reset con 0x55b81b94d800 session 0x55b81dc068c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 18145280 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 160 ms_handle_reset con 0x55b81b7d2800 session 0x55b81d231500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 160 ms_handle_reset con 0x55b81ceb5800 session 0x55b81d4f21c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242808 data_alloc: 218103808 data_used: 13575
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91971584 unmapped: 18128896 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 160 ms_handle_reset con 0x55b81de8b000 session 0x55b81b880540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91971584 unmapped: 18128896 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0x1168f97/0x1261000, compress 0x0/0x0/0x0, omap 0x1cea9, meta 0x2bb3157), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 160 ms_handle_reset con 0x55b81de8b800 session 0x55b81d4f3340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 160 ms_handle_reset con 0x55b81de88000 session 0x55b81ced4540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91971584 unmapped: 18128896 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91971584 unmapped: 18128896 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 160 ms_handle_reset con 0x55b81de8b400 session 0x55b81d4f2c40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91971584 unmapped: 18128896 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248895 data_alloc: 218103808 data_used: 13575
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 161 ms_handle_reset con 0x55b81de8b800 session 0x55b81b886540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92258304 unmapped: 17842176 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 161 ms_handle_reset con 0x55b81de88800 session 0x55b81df2d340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 161 ms_handle_reset con 0x55b81d49d000 session 0x55b81b38a8c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 162 ms_handle_reset con 0x55b81ceb5800 session 0x55b81cb936c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.888108253s of 10.001209259s, submitted: 72
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 162 ms_handle_reset con 0x55b81de8b000 session 0x55b81dc3f880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92692480 unmapped: 17408000 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 162 ms_handle_reset con 0x55b81de88400 session 0x55b81df63c00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 162 ms_handle_reset con 0x55b81d49d800 session 0x55b81b8861c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 163 ms_handle_reset con 0x55b81d49d000 session 0x55b81db07500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 17448960 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 163 heartbeat osd_stat(store_statfs(0x4fbdbe000/0x0/0x4ffc00000, data 0x116e34d/0x126c000, compress 0x0/0x0/0x0, omap 0x1dc60, meta 0x2bb23a0), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 164 ms_handle_reset con 0x55b81de88800 session 0x55b81df2c1c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 164 ms_handle_reset con 0x55b81ceb5800 session 0x55b81b8816c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 17416192 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 164 ms_handle_reset con 0x55b81b7d2800 session 0x55b81df69dc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 164 ms_handle_reset con 0x55b81d49d000 session 0x55b81cb93dc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92667904 unmapped: 17432576 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 164 ms_handle_reset con 0x55b81d49d800 session 0x55b81d714fc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262323 data_alloc: 218103808 data_used: 13689
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92667904 unmapped: 17432576 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92667904 unmapped: 17432576 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 164 ms_handle_reset con 0x55b81de8b000 session 0x55b81dc06540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 17711104 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 164 ms_handle_reset con 0x55b81de88400 session 0x55b81d094e00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 17711104 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 164 heartbeat osd_stat(store_statfs(0x4fbdbd000/0x0/0x4ffc00000, data 0x116ff05/0x126f000, compress 0x0/0x0/0x0, omap 0x1def9, meta 0x2bb2107), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 164 ms_handle_reset con 0x55b81ceb5800 session 0x55b81ac04700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 164 ms_handle_reset con 0x55b81d49d000 session 0x55b81dc061c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92389376 unmapped: 17711104 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266211 data_alloc: 218103808 data_used: 13787
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 165 ms_handle_reset con 0x55b81d49d800 session 0x55b81db06000
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 165 ms_handle_reset con 0x55b81de8b400 session 0x55b81d1d9dc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 17694720 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 165 ms_handle_reset con 0x55b81b7d2800 session 0x55b81d2308c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.251860619s of 10.234457016s, submitted: 40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92405760 unmapped: 17694720 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 166 ms_handle_reset con 0x55b81d49d000 session 0x55b81d4f2540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 166 ms_handle_reset con 0x55b81d49d800 session 0x55b81cb93a40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 166 ms_handle_reset con 0x55b81de8b400 session 0x55b81d4f3180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 17678336 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92422144 unmapped: 17678336 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 167 ms_handle_reset con 0x55b81ceb5800 session 0x55b8190f3180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 167 ms_handle_reset con 0x55b81b7d2800 session 0x55b81d3716c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 167 heartbeat osd_stat(store_statfs(0x4fbdb5000/0x0/0x4ffc00000, data 0x11750ca/0x1277000, compress 0x0/0x0/0x0, omap 0x1e758, meta 0x2bb18a8), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92307456 unmapped: 17793024 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273513 data_alloc: 218103808 data_used: 14574
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 168 ms_handle_reset con 0x55b81ceb5800 session 0x55b81d095dc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92307456 unmapped: 17793024 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 168 ms_handle_reset con 0x55b81d49d000 session 0x55b81d094fc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 168 ms_handle_reset con 0x55b81de88400 session 0x55b81d371500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92307456 unmapped: 17793024 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92307456 unmapped: 17793024 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 169 ms_handle_reset con 0x55b81d49d800 session 0x55b81df2c000
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 169 ms_handle_reset con 0x55b81b7d2800 session 0x55b81ced5340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92160000 unmapped: 17940480 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92160000 unmapped: 17940480 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 169 ms_handle_reset con 0x55b81de8b800 session 0x55b81db06fc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 169 ms_handle_reset con 0x55b81b7d5400 session 0x55b81d4f2fc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274308 data_alloc: 218103808 data_used: 14460
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 169 heartbeat osd_stat(store_statfs(0x4fbdb0000/0x0/0x4ffc00000, data 0x117880e/0x127a000, compress 0x0/0x0/0x0, omap 0x1fd0c, meta 0x2bb02f4), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92160000 unmapped: 17940480 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92160000 unmapped: 17940480 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.906607628s of 10.566824913s, submitted: 146
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 170 ms_handle_reset con 0x55b81d242c00 session 0x55b81d2316c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92160000 unmapped: 17940480 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 171 ms_handle_reset con 0x55b81b889800 session 0x55b81dc3e000
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 171 ms_handle_reset con 0x55b81b7d5400 session 0x55b81dc06a80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 171 heartbeat osd_stat(store_statfs(0x4fbda7000/0x0/0x4ffc00000, data 0x117bf6d/0x1283000, compress 0x0/0x0/0x0, omap 0x200e3, meta 0x2baff1d), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91865088 unmapped: 18235392 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 171 ms_handle_reset con 0x55b81b7d2800 session 0x55b81dddc1c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 172 ms_handle_reset con 0x55b81d243800 session 0x55b81df62380
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 91865088 unmapped: 18235392 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 172 ms_handle_reset con 0x55b81d242c00 session 0x55b81dddd500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287575 data_alloc: 218103808 data_used: 14874
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 18087936 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 172 ms_handle_reset con 0x55b81d34e400 session 0x55b81da2e380
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 173 ms_handle_reset con 0x55b81e0e0800 session 0x55b81b4f21c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 173 handle_osd_map epochs [173,174], i have 173, src has [1,174]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 174 ms_handle_reset con 0x55b81de8b800 session 0x55b81d3921c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92143616 unmapped: 17956864 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 174 ms_handle_reset con 0x55b81b7d2800 session 0x55b81b887dc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 174 ms_handle_reset con 0x55b81d243800 session 0x55b81d714540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 174 ms_handle_reset con 0x55b81b7d5400 session 0x55b81df696c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 174 ms_handle_reset con 0x55b81d34e400 session 0x55b81da39340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92143616 unmapped: 17956864 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 175 ms_handle_reset con 0x55b81d242c00 session 0x55b81d4f28c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 175 ms_handle_reset con 0x55b81b7d2800 session 0x55b81df05a40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 175 heartbeat osd_stat(store_statfs(0x4fbd9e000/0x0/0x4ffc00000, data 0x11812b1/0x1289000, compress 0x0/0x0/0x0, omap 0x208e1, meta 0x2baf71f), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92143616 unmapped: 17956864 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92176384 unmapped: 17924096 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 175 heartbeat osd_stat(store_statfs(0x4fbd99000/0x0/0x4ffc00000, data 0x1182ebd/0x128c000, compress 0x0/0x0/0x0, omap 0x20b8f, meta 0x2baf471), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295004 data_alloc: 218103808 data_used: 14760
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92176384 unmapped: 17924096 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92176384 unmapped: 17924096 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92176384 unmapped: 17924096 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92176384 unmapped: 17924096 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92176384 unmapped: 17924096 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295004 data_alloc: 218103808 data_used: 14760
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.197317123s of 13.465264320s, submitted: 108
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 175 ms_handle_reset con 0x55b81d243800 session 0x55b8190f3c00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 175 ms_handle_reset con 0x55b81de8b800 session 0x55b81b4f2700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 17129472 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 175 handle_osd_map epochs [175,176], i have 175, src has [1,176]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 heartbeat osd_stat(store_statfs(0x4fabff000/0x0/0x4ffc00000, data 0x1182f1f/0x128d000, compress 0x0/0x0/0x0, omap 0x20b8f, meta 0x3d4f471), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 94019584 unmapped: 16080896 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 heartbeat osd_stat(store_statfs(0x4fabff000/0x0/0x4ffc00000, data 0x1182f1f/0x128d000, compress 0x0/0x0/0x0, omap 0x20b8f, meta 0x3d4f471), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 ms_handle_reset con 0x55b81b7d2800 session 0x55b81d4a3dc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 ms_handle_reset con 0x55b81d243800 session 0x55b81b7ab340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 ms_handle_reset con 0x55b81d242c00 session 0x55b81d4a2000
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 ms_handle_reset con 0x55b81d34e400 session 0x55b81d392540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 ms_handle_reset con 0x55b81e0e0800 session 0x55b81db06540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102039552 unmapped: 8060928 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 ms_handle_reset con 0x55b81b7d2800 session 0x55b81da2f6c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102039552 unmapped: 8060928 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102039552 unmapped: 8060928 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 heartbeat osd_stat(store_statfs(0x4fabfa000/0x0/0x4ffc00000, data 0x1184974/0x128f000, compress 0x0/0x0/0x0, omap 0x20d33, meta 0x3d4f2cd), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314300 data_alloc: 218103808 data_used: 6830391
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102039552 unmapped: 8060928 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 ms_handle_reset con 0x55b81d242c00 session 0x55b81df2cc40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 101908480 unmapped: 8192000 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 heartbeat osd_stat(store_statfs(0x4fabfc000/0x0/0x4ffc00000, data 0x11849d6/0x1290000, compress 0x0/0x0/0x0, omap 0x20d33, meta 0x3d4f2cd), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 101908480 unmapped: 8192000 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 ms_handle_reset con 0x55b81d243800 session 0x55b81d715500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 ms_handle_reset con 0x55b81d34e000 session 0x55b81cef0700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 ms_handle_reset con 0x55b81d34e400 session 0x55b81cef1880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 105684992 unmapped: 4415488 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 ms_handle_reset con 0x55b81b7d2800 session 0x55b81df04c40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 ms_handle_reset con 0x55b81d242c00 session 0x55b81caed180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 7561216 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 ms_handle_reset con 0x55b81d34e000 session 0x55b81d4f2a80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.663596153s of 10.003048897s, submitted: 106
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 handle_osd_map epochs [176,177], i have 176, src has [1,177]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 176 ms_handle_reset con 0x55b81d34e400 session 0x55b81dddd340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320382 data_alloc: 218103808 data_used: 6764874
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 177 ms_handle_reset con 0x55b81d243800 session 0x55b81de1a540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 7602176 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 177 heartbeat osd_stat(store_statfs(0x4fabf7000/0x0/0x4ffc00000, data 0x1186510/0x1292000, compress 0x0/0x0/0x0, omap 0x20fe5, meta 0x3d4f01b), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 177 handle_osd_map epochs [178,178], i have 177, src has [1,178]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 178 ms_handle_reset con 0x55b81b7d2800 session 0x55b81d4f3a40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 7593984 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 178 ms_handle_reset con 0x55b81d242c00 session 0x55b81da2fa40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102522880 unmapped: 7577600 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 178 handle_osd_map epochs [178,179], i have 178, src has [1,179]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 179 ms_handle_reset con 0x55b81d243800 session 0x55b81ac4e000
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 7593984 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 179 ms_handle_reset con 0x55b81d34e000 session 0x55b81d1d9880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 179 heartbeat osd_stat(store_statfs(0x4fabf2000/0x0/0x4ffc00000, data 0x1189c9c/0x1298000, compress 0x0/0x0/0x0, omap 0x2154f, meta 0x3d4eab1), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 179 heartbeat osd_stat(store_statfs(0x4fabf2000/0x0/0x4ffc00000, data 0x1189c9c/0x1298000, compress 0x0/0x0/0x0, omap 0x2154f, meta 0x3d4eab1), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 179 ms_handle_reset con 0x55b81d34e400 session 0x55b81aa70fc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 7593984 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 180 ms_handle_reset con 0x55b81b7d2800 session 0x55b81b881a40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 180 ms_handle_reset con 0x55b81d242c00 session 0x55b81d4f2700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327596 data_alloc: 218103808 data_used: 6765459
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 7593984 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 7593984 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 180 ms_handle_reset con 0x55b81d243800 session 0x55b81d370540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 180 ms_handle_reset con 0x55b81d34e000 session 0x55b81d136c40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102424576 unmapped: 7675904 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 180 heartbeat osd_stat(store_statfs(0x4fabef000/0x0/0x4ffc00000, data 0x118b88c/0x129b000, compress 0x0/0x0/0x0, omap 0x21807, meta 0x3d4e7f9), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 7544832 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 181 ms_handle_reset con 0x55b81ceb1000 session 0x55b81df04700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 181 ms_handle_reset con 0x55b81b7d2800 session 0x55b81df62a80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 181 ms_handle_reset con 0x55b81d242c00 session 0x55b81da2e700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 7544832 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 181 ms_handle_reset con 0x55b81d34e000 session 0x55b81d230540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 181 ms_handle_reset con 0x55b81d243800 session 0x55b81dc3e700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333633 data_alloc: 218103808 data_used: 6765459
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 7561216 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.332550049s of 10.684814453s, submitted: 43
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 181 handle_osd_map epochs [182,182], i have 181, src has [1,182]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 182 ms_handle_reset con 0x55b81e0e6c00 session 0x55b81ced5dc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 182 ms_handle_reset con 0x55b81e0e7000 session 0x55b81d0948c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 182 ms_handle_reset con 0x55b81b7d2800 session 0x55b81d1d8700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 182 ms_handle_reset con 0x55b81d242c00 session 0x55b81d715340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102531072 unmapped: 7569408 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 182 ms_handle_reset con 0x55b81d34e000 session 0x55b81de1a380
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102539264 unmapped: 7561216 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 182 ms_handle_reset con 0x55b81e102000 session 0x55b81de1afc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 182 heartbeat osd_stat(store_statfs(0x4fabe7000/0x0/0x4ffc00000, data 0x118ef87/0x12a3000, compress 0x0/0x0/0x0, omap 0x23b57, meta 0x3d4c4a9), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 183 ms_handle_reset con 0x55b81d49d400 session 0x55b81ac2a700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 183 ms_handle_reset con 0x55b81b7d2800 session 0x55b81de1b180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 7536640 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 183 ms_handle_reset con 0x55b81d243800 session 0x55b81b886e00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 7528448 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 183 ms_handle_reset con 0x55b81d242c00 session 0x55b81d392a80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337992 data_alloc: 218103808 data_used: 6765829
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 7528448 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 183 heartbeat osd_stat(store_statfs(0x4fabe7000/0x0/0x4ffc00000, data 0x1190b15/0x12a5000, compress 0x0/0x0/0x0, omap 0x23fc5, meta 0x3d4c03b), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 7528448 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 183 ms_handle_reset con 0x55b81d34e000 session 0x55b81d4a2700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 7528448 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 7528448 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 183 heartbeat osd_stat(store_statfs(0x4fabe8000/0x0/0x4ffc00000, data 0x1190ab3/0x12a4000, compress 0x0/0x0/0x0, omap 0x24051, meta 0x3d4bfaf), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 183 ms_handle_reset con 0x55b81b7d2800 session 0x55b81d715880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 7593984 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 183 ms_handle_reset con 0x55b81d242c00 session 0x55b81aa71500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 183 heartbeat osd_stat(store_statfs(0x4fabe8000/0x0/0x4ffc00000, data 0x1190ab3/0x12a4000, compress 0x0/0x0/0x0, omap 0x24051, meta 0x3d4bfaf), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337252 data_alloc: 218103808 data_used: 6765731
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 7593984 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 7593984 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102506496 unmapped: 7593984 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.464176178s of 12.290282249s, submitted: 87
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 184 heartbeat osd_stat(store_statfs(0x4fabe8000/0x0/0x4ffc00000, data 0x1190ab3/0x12a4000, compress 0x0/0x0/0x0, omap 0x24051, meta 0x3d4bfaf), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 7544832 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 185 ms_handle_reset con 0x55b81d243800 session 0x55b81d1361c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 102555648 unmapped: 7544832 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 186 ms_handle_reset con 0x55b81d49d400 session 0x55b81b887340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1428782 data_alloc: 218103808 data_used: 6765731
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19685376 heap: 123346944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 186 ms_handle_reset con 0x55b81e0e7000 session 0x55b81cb928c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 186 ms_handle_reset con 0x55b81d242c00 session 0x55b81d1d81c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 103677952 unmapped: 19668992 heap: 123346944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 186 ms_handle_reset con 0x55b81b7d2800 session 0x55b81d392c40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 103661568 unmapped: 19685376 heap: 123346944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 186 ms_handle_reset con 0x55b81d243800 session 0x55b81de1b880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 186 heartbeat osd_stat(store_statfs(0x4f9f74000/0x0/0x4ffc00000, data 0x1e00cbe/0x1f18000, compress 0x0/0x0/0x0, omap 0x24dcc, meta 0x3d4b234), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 103677952 unmapped: 19668992 heap: 123346944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 186 ms_handle_reset con 0x55b81e102000 session 0x55b81df041c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 186 ms_handle_reset con 0x55b81d49d400 session 0x55b81b443a40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 186 ms_handle_reset con 0x55b81b7d2800 session 0x55b81caec8c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 103677952 unmapped: 19668992 heap: 123346944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 186 ms_handle_reset con 0x55b81d243800 session 0x55b81d29d340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 186 ms_handle_reset con 0x55b81d242c00 session 0x55b81b4f28c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 186 heartbeat osd_stat(store_statfs(0x4f9f72000/0x0/0x4ffc00000, data 0x1e00d30/0x1f1a000, compress 0x0/0x0/0x0, omap 0x24ee0, meta 0x3d4b120), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 186 ms_handle_reset con 0x55b81e102000 session 0x55b81caed880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1433351 data_alloc: 218103808 data_used: 6765731
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 103292928 unmapped: 20054016 heap: 123346944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 186 handle_osd_map epochs [186,187], i have 186, src has [1,187]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 12148736 heap: 123346944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 7905280 heap: 123346944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 7905280 heap: 123346944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 heartbeat osd_stat(store_statfs(0x4f9f48000/0x0/0x4ffc00000, data 0x1e267d2/0x1f42000, compress 0x0/0x0/0x0, omap 0x25274, meta 0x3d4ad8c), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 7872512 heap: 123346944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.283868790s of 11.647780418s, submitted: 102
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 ms_handle_reset con 0x55b81e102c00 session 0x55b81cef1180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1507511 data_alloc: 234881024 data_used: 18144419
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 115630080 unmapped: 7716864 heap: 123346944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 ms_handle_reset con 0x55b81b7d2800 session 0x55b81d392380
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 ms_handle_reset con 0x55b81d242c00 session 0x55b81d1d8fc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 ms_handle_reset con 0x55b81d243800 session 0x55b81dc3e8c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 ms_handle_reset con 0x55b81e102000 session 0x55b81d3928c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 7692288 heap: 123346944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 ms_handle_reset con 0x55b81e103000 session 0x55b81d231dc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 ms_handle_reset con 0x55b81b7d2800 session 0x55b81da39dc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 ms_handle_reset con 0x55b81d242c00 session 0x55b81b8876c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 ms_handle_reset con 0x55b81d243800 session 0x55b81d4a2c40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 ms_handle_reset con 0x55b81e102000 session 0x55b81caecfc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 18456576 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 ms_handle_reset con 0x55b81e103400 session 0x55b81dc3f500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 heartbeat osd_stat(store_statfs(0x4f93a3000/0x0/0x4ffc00000, data 0x29cb844/0x2ae9000, compress 0x0/0x0/0x0, omap 0x25388, meta 0x3d4ac78), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 18710528 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 ms_handle_reset con 0x55b81d242c00 session 0x55b81df04a80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 ms_handle_reset con 0x55b81b7d2800 session 0x55b81dddc540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 heartbeat osd_stat(store_statfs(0x4f93a4000/0x0/0x4ffc00000, data 0x29cb7e2/0x2ae8000, compress 0x0/0x0/0x0, omap 0x25412, meta 0x3d4abee), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 18694144 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576880 data_alloc: 234881024 data_used: 18144419
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 ms_handle_reset con 0x55b81e102400 session 0x55b81d715c00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 ms_handle_reset con 0x55b81e102800 session 0x55b81d715180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 18735104 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 ms_handle_reset con 0x55b81e103800 session 0x55b81dfabc00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 109043712 unmapped: 25853952 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 188 ms_handle_reset con 0x55b81d242c00 session 0x55b81d1d8c40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 26181632 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 188 handle_osd_map epochs [188,189], i have 188, src has [1,189]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 188 handle_osd_map epochs [189,189], i have 189, src has [1,189]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 26181632 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 189 ms_handle_reset con 0x55b81d243800 session 0x55b81d136380
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 189 heartbeat osd_stat(store_statfs(0x4fa02a000/0x0/0x4ffc00000, data 0x1d3fee7/0x1e5d000, compress 0x0/0x0/0x0, omap 0x25f0b, meta 0x3d4a0f5), peers [0,2] op hist [0,0,0,0,0,1])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 189 ms_handle_reset con 0x55b81e102400 session 0x55b81dfaa380
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 189 ms_handle_reset con 0x55b81b7d2800 session 0x55b81df04380
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 26738688 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 189 ms_handle_reset con 0x55b81e102000 session 0x55b81b443340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1447443 data_alloc: 218103808 data_used: 6769808
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 27557888 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.174820900s of 11.157638550s, submitted: 68
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 189 heartbeat osd_stat(store_statfs(0x4fa02e000/0x0/0x4ffc00000, data 0x1d3ff49/0x1e5e000, compress 0x0/0x0/0x0, omap 0x26122, meta 0x3d49ede), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 110764032 unmapped: 24133632 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 189 ms_handle_reset con 0x55b81e102800 session 0x55b81aa62e00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 189 ms_handle_reset con 0x55b81b7d2800 session 0x55b81df69500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 110764032 unmapped: 24133632 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 189 handle_osd_map epochs [189,190], i have 189, src has [1,190]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 189 handle_osd_map epochs [190,190], i have 190, src has [1,190]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 24125440 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 24109056 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 190 ms_handle_reset con 0x55b81e102400 session 0x55b81cef0fc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384325 data_alloc: 218103808 data_used: 6765731
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107798528 unmapped: 27099136 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 27082752 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 190 heartbeat osd_stat(store_statfs(0x4fabd2000/0x0/0x4ffc00000, data 0x119cac7/0x12ba000, compress 0x0/0x0/0x0, omap 0x26823, meta 0x3d497dd), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 190 ms_handle_reset con 0x55b81d242c00 session 0x55b81d4f36c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 190 ms_handle_reset con 0x55b81e103c00 session 0x55b81caeddc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 27656192 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 27656192 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 191 ms_handle_reset con 0x55b81d242c00 session 0x55b81df62700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 191 ms_handle_reset con 0x55b81d243800 session 0x55b81aa63a40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 27648000 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 191 heartbeat osd_stat(store_statfs(0x4fabcd000/0x0/0x4ffc00000, data 0x119e6d3/0x12bd000, compress 0x0/0x0/0x0, omap 0x26f1c, meta 0x3d490e4), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 191 ms_handle_reset con 0x55b81b7d2800 session 0x55b81d4a3c00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386627 data_alloc: 218103808 data_used: 6765731
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 27648000 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.218285084s of 10.014035225s, submitted: 80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 27648000 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 192 heartbeat osd_stat(store_statfs(0x4fabcd000/0x0/0x4ffc00000, data 0x119e6d3/0x12bd000, compress 0x0/0x0/0x0, omap 0x26fa6, meta 0x3d4905a), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 192 ms_handle_reset con 0x55b81e102800 session 0x55b81df68540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 192 ms_handle_reset con 0x55b81e102400 session 0x55b81dc3ec40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 27648000 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 192 ms_handle_reset con 0x55b81b7d2800 session 0x55b81de1bc00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 27631616 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 27631616 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390366 data_alloc: 218103808 data_used: 6766003
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 27615232 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 192 ms_handle_reset con 0x55b81d242c00 session 0x55b81b886540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 192 handle_osd_map epochs [192,193], i have 192, src has [1,193]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 27615232 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81d243800 session 0x55b81de1aa80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 27557888 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81e102800 session 0x55b81df2c700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81b2b4800 session 0x55b81ab5d180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fa2b4000/0x0/0x4ffc00000, data 0x1ab6bed/0x1bd8000, compress 0x0/0x0/0x0, omap 0x27930, meta 0x3d486d0), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 27557888 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81b7d2800 session 0x55b81de1b500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81d242c00 session 0x55b81aa63880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107347968 unmapped: 27549696 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fa2b3000/0x0/0x4ffc00000, data 0x1ab6c4f/0x1bd9000, compress 0x0/0x0/0x0, omap 0x27e8f, meta 0x3d48171), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452424 data_alloc: 218103808 data_used: 6766275
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107347968 unmapped: 27549696 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81d243800 session 0x55b81b38aa80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107347968 unmapped: 27549696 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fa2b3000/0x0/0x4ffc00000, data 0x1ab6bed/0x1bd8000, compress 0x0/0x0/0x0, omap 0x2802c, meta 0x3d47fd4), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fa2b3000/0x0/0x4ffc00000, data 0x1ab6bed/0x1bd8000, compress 0x0/0x0/0x0, omap 0x2802c, meta 0x3d47fd4), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81e102800 session 0x55b81d4a2fc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.364843369s of 11.629864693s, submitted: 83
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81b2b5c00 session 0x55b81d392e00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107347968 unmapped: 27549696 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107347968 unmapped: 27549696 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81b7d2800 session 0x55b81d1d8380
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81d242c00 session 0x55b81de1b340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107347968 unmapped: 27549696 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81d243800 session 0x55b81d7156c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452773 data_alloc: 218103808 data_used: 6766275
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107347968 unmapped: 27549696 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 27123712 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81ceb5800 session 0x55b81dddca80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81e102800 session 0x55b81de1a700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 112541696 unmapped: 22355968 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 heartbeat osd_stat(store_statfs(0x4f9e66000/0x0/0x4ffc00000, data 0x1f03bfd/0x2026000, compress 0x0/0x0/0x0, omap 0x2813c, meta 0x3d47ec4), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 112541696 unmapped: 22355968 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81de88c00 session 0x55b81b442e00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81aa7f400 session 0x55b81ced41c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81b7d2800 session 0x55b81da38fc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 112541696 unmapped: 22355968 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81d242c00 session 0x55b81da2f340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1542300 data_alloc: 234881024 data_used: 16277187
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 27910144 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 28008448 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81de89000 session 0x55b81df62540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81ceb5800 session 0x55b81da2f180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 26943488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81aa7f400 session 0x55b81db07880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fa77b000/0x0/0x4ffc00000, data 0x15eec10/0x1711000, compress 0x0/0x0/0x0, omap 0x284a4, meta 0x3d47b5c), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 26943488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81d243800 session 0x55b81d095180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 26943488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.661386490s of 12.823336601s, submitted: 37
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1460375 data_alloc: 234881024 data_used: 11065539
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107110400 unmapped: 27787264 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107110400 unmapped: 27787264 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81b7d2800 session 0x55b81d1d9340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107110400 unmapped: 27787264 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107110400 unmapped: 27787264 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 ms_handle_reset con 0x55b81d242c00 session 0x55b81caeca80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fabc8000/0x0/0x4ffc00000, data 0x11a1bed/0x12c3000, compress 0x0/0x0/0x0, omap 0x28884, meta 0x3d4777c), peers [0,2] op hist [0,0,0,0,1])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 28033024 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411804 data_alloc: 218103808 data_used: 6770355
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 194 ms_handle_reset con 0x55b81aa7f400 session 0x55b81db061c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 28033024 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 28221440 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 194 heartbeat osd_stat(store_statfs(0x4f9bc5000/0x0/0x4ffc00000, data 0x21a37ad/0x22c7000, compress 0x0/0x0/0x0, omap 0x28b71, meta 0x3d4748f), peers [0,2] op hist [0,0,0,1])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 106725376 unmapped: 28172288 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 106717184 unmapped: 28180480 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107741184 unmapped: 27156480 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.607764244s of 10.048481941s, submitted: 53
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 195 heartbeat osd_stat(store_statfs(0x4f8bc0000/0x0/0x4ffc00000, data 0x31a539f/0x32ca000, compress 0x0/0x0/0x0, omap 0x28e60, meta 0x3d471a0), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1580978 data_alloc: 218103808 data_used: 6770355
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107741184 unmapped: 27156480 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 18767872 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 195 heartbeat osd_stat(store_statfs(0x4f7bc2000/0x0/0x4ffc00000, data 0x41a53a3/0x42ca000, compress 0x0/0x0/0x0, omap 0x28e60, meta 0x3d471a0), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107741184 unmapped: 27156480 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 195 ms_handle_reset con 0x55b81ceb5800 session 0x55b81ddddc00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107741184 unmapped: 27156480 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107741184 unmapped: 27156480 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1708401 data_alloc: 218103808 data_used: 6770355
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107741184 unmapped: 27156480 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 195 heartbeat osd_stat(store_statfs(0x4f73c0000/0x0/0x4ffc00000, data 0x49a53d6/0x4acc000, compress 0x0/0x0/0x0, omap 0x28e60, meta 0x3d471a0), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 27107328 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 196 ms_handle_reset con 0x55b81de88c00 session 0x55b81cef1500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 27107328 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 196 ms_handle_reset con 0x55b81d243800 session 0x55b81cb93880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107790336 unmapped: 27107328 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 196 ms_handle_reset con 0x55b81de89000 session 0x55b81df63a40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107798528 unmapped: 27099136 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 196 heartbeat osd_stat(store_statfs(0x4f6bbb000/0x0/0x4ffc00000, data 0x51a701b/0x52d1000, compress 0x0/0x0/0x0, omap 0x291a3, meta 0x3d46e5d), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.899371624s of 10.147893906s, submitted: 51
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1756096 data_alloc: 218103808 data_used: 6770469
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 196 heartbeat osd_stat(store_statfs(0x4f6bbb000/0x0/0x4ffc00000, data 0x51a701b/0x52d1000, compress 0x0/0x0/0x0, omap 0x291a3, meta 0x3d46e5d), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 27090944 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 196 handle_osd_map epochs [196,197], i have 196, src has [1,197]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 27090944 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18677760 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 27058176 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 198 ms_handle_reset con 0x55b81aa7f400 session 0x55b81da2e000
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 27058176 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 198 heartbeat osd_stat(store_statfs(0x4f5bb7000/0x0/0x4ffc00000, data 0x61aa605/0x62d5000, compress 0x0/0x0/0x0, omap 0x29858, meta 0x3d467a8), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 198 heartbeat osd_stat(store_statfs(0x4f5bb7000/0x0/0x4ffc00000, data 0x61aa605/0x62d5000, compress 0x0/0x0/0x0, omap 0x29858, meta 0x3d467a8), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1843626 data_alloc: 218103808 data_used: 6771019
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 27058176 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 27041792 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 198 ms_handle_reset con 0x55b81ceb5800 session 0x55b81caec000
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107905024 unmapped: 26992640 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107905024 unmapped: 26992640 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 198 heartbeat osd_stat(store_statfs(0x4f43b8000/0x0/0x4ffc00000, data 0x79aa5d2/0x7ad3000, compress 0x0/0x0/0x0, omap 0x29858, meta 0x3d467a8), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 26984448 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.663762569s of 10.010786057s, submitted: 71
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1967029 data_alloc: 218103808 data_used: 6775029
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 198 ms_handle_reset con 0x55b81d243800 session 0x55b81dddd180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 107913216 unmapped: 26984448 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 198 ms_handle_reset con 0x55b81de88c00 session 0x55b81cef1a40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 198 handle_osd_map epochs [198,199], i have 198, src has [1,199]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 199 heartbeat osd_stat(store_statfs(0x4f33b9000/0x0/0x4ffc00000, data 0x89aa5d2/0x8ad3000, compress 0x0/0x0/0x0, omap 0x298e2, meta 0x3d4671e), peers [0,2] op hist [0,0,0,0,0,0,0,1,1])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 26886144 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 26886144 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 199 ms_handle_reset con 0x55b81de89800 session 0x55b81da38c40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 18448384 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 108101632 unmapped: 26796032 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2178002 data_alloc: 218103808 data_used: 6775029
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 18317312 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 199 heartbeat osd_stat(store_statfs(0x4f13b6000/0x0/0x4ffc00000, data 0xa9ac061/0xaad6000, compress 0x0/0x0/0x0, omap 0x29eb4, meta 0x3d4614c), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 199 ms_handle_reset con 0x55b81de89400 session 0x55b81a801c00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 26705920 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 199 ms_handle_reset con 0x55b81aa7f400 session 0x55b81b38ba40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 18194432 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 199 heartbeat osd_stat(store_statfs(0x4f03b6000/0x0/0x4ffc00000, data 0xb9ac061/0xbad6000, compress 0x0/0x0/0x0, omap 0x29eb4, meta 0x3d4614c), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 26583040 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 199 ms_handle_reset con 0x55b81ceb5800 session 0x55b81b8be8c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 108347392 unmapped: 26550272 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.378383160s of 10.097388268s, submitted: 66
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2388776 data_alloc: 218103808 data_used: 6775029
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 26435584 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 199 ms_handle_reset con 0x55b81d243800 session 0x55b81df048c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 199 ms_handle_reset con 0x55b81de89c00 session 0x55b81de1b6c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 199 ms_handle_reset con 0x55b81de88c00 session 0x55b81d230380
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 109576192 unmapped: 25321472 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 25214976 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 200 ms_handle_reset con 0x55b81aa7f400 session 0x55b81da39c00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 109748224 unmapped: 25149440 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 200 heartbeat osd_stat(store_statfs(0x4ec3af000/0x0/0x4ffc00000, data 0xf9adcc1/0xfadb000, compress 0x0/0x0/0x0, omap 0x2a49c, meta 0x3d45b64), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 200 handle_osd_map epochs [200,201], i have 200, src has [1,201]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 25018368 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 201 ms_handle_reset con 0x55b81ceb5800 session 0x55b81aa62fc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2645892 data_alloc: 218103808 data_used: 6775127
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 33374208 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 33267712 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 118497280 unmapped: 24797184 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 202 ms_handle_reset con 0x55b81d243800 session 0x55b81df2d6c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111263744 unmapped: 32030720 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 202 heartbeat osd_stat(store_statfs(0x4e93a8000/0x0/0x4ffc00000, data 0x129b145b/0x12ae2000, compress 0x0/0x0/0x0, omap 0x2ad4e, meta 0x3d452b2), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 202 ms_handle_reset con 0x55b81de89400 session 0x55b81de1ac40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 23601152 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 202 heartbeat osd_stat(store_statfs(0x4e83a8000/0x0/0x4ffc00000, data 0x139b145b/0x13ae2000, compress 0x0/0x0/0x0, omap 0x2aebf, meta 0x3d45141), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.041048527s of 10.038273811s, submitted: 109
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985224 data_alloc: 218103808 data_used: 6775127
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 202 ms_handle_reset con 0x55b81aa7f400 session 0x55b81de1ba40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111378432 unmapped: 31916032 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 202 ms_handle_reset con 0x55b81d243800 session 0x55b81df2ce00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 202 handle_osd_map epochs [202,203], i have 202, src has [1,203]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 202 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 203 ms_handle_reset con 0x55b81ceb5800 session 0x55b81dfaa700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 203 ms_handle_reset con 0x55b81de88c00 session 0x55b81dc06000
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 203 ms_handle_reset con 0x55b81de88000 session 0x55b81b4f2380
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111484928 unmapped: 31809536 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 204 ms_handle_reset con 0x55b81aa7f400 session 0x55b81dfab880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 204 ms_handle_reset con 0x55b81b7d2800 session 0x55b81df62000
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 204 heartbeat osd_stat(store_statfs(0x4e83a0000/0x0/0x4ffc00000, data 0x139b4d70/0x13ae8000, compress 0x0/0x0/0x0, omap 0x2bbb1, meta 0x3d4444f), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111501312 unmapped: 31793152 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 204 ms_handle_reset con 0x55b81ceb5800 session 0x55b81aa62c40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111525888 unmapped: 31768576 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111550464 unmapped: 31744000 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 205 ms_handle_reset con 0x55b81d243800 session 0x55b81aa636c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501185 data_alloc: 218103808 data_used: 6775127
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 205 ms_handle_reset con 0x55b81de88c00 session 0x55b81dc3efc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 205 ms_handle_reset con 0x55b81de88c00 session 0x55b81dfabdc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 205 ms_handle_reset con 0x55b81aa7f400 session 0x55b81dddc000
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 31563776 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 205 ms_handle_reset con 0x55b81b7d2800 session 0x55b81df2c8c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 206 ms_handle_reset con 0x55b81ceb5800 session 0x55b81da2f500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 31555584 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 31555584 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 206 heartbeat osd_stat(store_statfs(0x4f9b9e000/0x0/0x4ffc00000, data 0x11b8416/0x12e9000, compress 0x0/0x0/0x0, omap 0x2c7e2, meta 0x3d4381e), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 31555584 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 31539200 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1501807 data_alloc: 218103808 data_used: 6775029
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 31539200 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.372041702s of 10.494384766s, submitted: 214
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 206 ms_handle_reset con 0x55b81d243800 session 0x55b81b38bdc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 31531008 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 31531008 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 208 heartbeat osd_stat(store_statfs(0x4fab9d000/0x0/0x4ffc00000, data 0x11b9f6f/0x12ed000, compress 0x0/0x0/0x0, omap 0x2cbc2, meta 0x3d4343e), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 31522816 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 208 ms_handle_reset con 0x55b81aa7f400 session 0x55b81aa62a80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 31522816 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 208 ms_handle_reset con 0x55b81b7d2800 session 0x55b81aa63180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 209 ms_handle_reset con 0x55b81ceb5800 session 0x55b81cef1c00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1512880 data_alloc: 218103808 data_used: 6779493
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 31490048 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 210 ms_handle_reset con 0x55b81de88c00 session 0x55b81b38b340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 210 ms_handle_reset con 0x55b81de88400 session 0x55b81cef01c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 31858688 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 31858688 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 31858688 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 210 heartbeat osd_stat(store_statfs(0x4fab95000/0x0/0x4ffc00000, data 0x11bf190/0x12f5000, compress 0x0/0x0/0x0, omap 0x2d4ab, meta 0x3d42b55), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 31858688 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1513452 data_alloc: 218103808 data_used: 6779331
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 31858688 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.434180260s of 10.041487694s, submitted: 111
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 210 ms_handle_reset con 0x55b81aa7f400 session 0x55b81b8bee00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 210 handle_osd_map epochs [210,211], i have 210, src has [1,211]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 heartbeat osd_stat(store_statfs(0x4fab95000/0x0/0x4ffc00000, data 0x11bf190/0x12f5000, compress 0x0/0x0/0x0, omap 0x2d535, meta 0x3d42acb), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 31858688 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 31858688 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81b7d2800 session 0x55b81d1d9880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 31858688 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 31858688 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518702 data_alloc: 218103808 data_used: 6779603
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 31858688 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 heartbeat osd_stat(store_statfs(0x4fab94000/0x0/0x4ffc00000, data 0x11c0c0f/0x12f8000, compress 0x0/0x0/0x0, omap 0x2daee, meta 0x3d42512), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 31858688 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 31858688 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81ceb5800 session 0x55b81b4f3340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81de88c00 session 0x55b81b886a80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81b559c00 session 0x55b81df63500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 31858688 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81aa7f400 session 0x55b81b38a1c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81b7d2800 session 0x55b81d715a40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81ceb5800 session 0x55b81b38b6c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 heartbeat osd_stat(store_statfs(0x4fa640000/0x0/0x4ffc00000, data 0x1713c38/0x184c000, compress 0x0/0x0/0x0, omap 0x2daee, meta 0x3d42512), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 30203904 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81de88c00 session 0x55b81ac05500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81b559800 session 0x55b81da39a40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1601190 data_alloc: 218103808 data_used: 6779603
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 30416896 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 heartbeat osd_stat(store_statfs(0x4fa1c3000/0x0/0x4ffc00000, data 0x1b8fcd3/0x1cc9000, compress 0x0/0x0/0x0, omap 0x2daee, meta 0x3d42512), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 heartbeat osd_stat(store_statfs(0x4fa1c3000/0x0/0x4ffc00000, data 0x1b8fcd3/0x1cc9000, compress 0x0/0x0/0x0, omap 0x2daee, meta 0x3d42512), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 30384128 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81aa7f400 session 0x55b81cef0000
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 30384128 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81b559800 session 0x55b81caec700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 30384128 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 30384128 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81b7d2800 session 0x55b81dfaa1c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.772394180s of 14.129716873s, submitted: 84
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81ceb5800 session 0x55b81da2f880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 heartbeat osd_stat(store_statfs(0x4fa1c3000/0x0/0x4ffc00000, data 0x1b8fcd3/0x1cc9000, compress 0x0/0x0/0x0, omap 0x2daee, meta 0x3d42512), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1599567 data_alloc: 218103808 data_used: 6779603
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81de88c00 session 0x55b81b8bfdc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 30351360 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 29925376 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 29114368 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 heartbeat osd_stat(store_statfs(0x4fa1c1000/0x0/0x4ffc00000, data 0x1b8fd19/0x1ccb000, compress 0x0/0x0/0x0, omap 0x2dd5b, meta 0x3d422a5), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 27516928 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 25608192 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1657025 data_alloc: 234881024 data_used: 15625427
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 25608192 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 25608192 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 25608192 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81ceb5800 session 0x55b81a770fc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 118931456 unmapped: 24363008 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 heartbeat osd_stat(store_statfs(0x4fa1c1000/0x0/0x4ffc00000, data 0x1b8fd19/0x1ccb000, compress 0x0/0x0/0x0, omap 0x2dd5b, meta 0x3d422a5), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81b559400 session 0x55b81b8861c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 118931456 unmapped: 24363008 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.722727776s of 10.262310028s, submitted: 19
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1659065 data_alloc: 234881024 data_used: 16170195
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81b559000 session 0x55b81caecc40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 heartbeat osd_stat(store_statfs(0x4fa1c1000/0x0/0x4ffc00000, data 0x1b8fd19/0x1ccb000, compress 0x0/0x0/0x0, omap 0x2de7a, meta 0x3d42186), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 24231936 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 20561920 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 123158528 unmapped: 20135936 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 heartbeat osd_stat(store_statfs(0x4f9d49000/0x0/0x4ffc00000, data 0x2006d29/0x2143000, compress 0x0/0x0/0x0, omap 0x2df04, meta 0x3d420fc), peers [0,2] op hist [0,0,0,0,0,0,0,0,1,8])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 11452416 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81b558c00 session 0x55b81b7ab880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 heartbeat osd_stat(store_statfs(0x4f9404000/0x0/0x4ffc00000, data 0x294bd29/0x2a88000, compress 0x0/0x0/0x0, omap 0x2df04, meta 0x3d420fc), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,15,3])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 128851968 unmapped: 14442496 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 heartbeat osd_stat(store_statfs(0x4f8d89000/0x0/0x4ffc00000, data 0x2fc6d29/0x3103000, compress 0x0/0x0/0x0, omap 0x2df04, meta 0x3d420fc), peers [0,2] op hist [0,0,0,0,0,0,4,0,3,10])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1806439 data_alloc: 234881024 data_used: 16194787
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 130392064 unmapped: 12902400 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 heartbeat osd_stat(store_statfs(0x4f891b000/0x0/0x4ffc00000, data 0x342ed29/0x356b000, compress 0x0/0x0/0x0, omap 0x2df04, meta 0x3d420fc), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,4])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129744896 unmapped: 13549568 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 128720896 unmapped: 14573568 heap: 143294464 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81de8a800 session 0x55b81dc3fc00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 9183232 heap: 146972672 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81b558c00 session 0x55b81ac4f180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 ms_handle_reset con 0x55b81b559000 session 0x55b81a801180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 131547136 unmapped: 15425536 heap: 146972672 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 heartbeat osd_stat(store_statfs(0x4f7af0000/0x0/0x4ffc00000, data 0x4259d29/0x4396000, compress 0x0/0x0/0x0, omap 0x2e018, meta 0x3d41fe8), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1914371 data_alloc: 234881024 data_used: 18266835
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 131547136 unmapped: 15425536 heap: 146972672 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 211 handle_osd_map epochs [211,212], i have 211, src has [1,212]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.345577955s of 10.907817841s, submitted: 276
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 212 ms_handle_reset con 0x55b81ceb5800 session 0x55b81dfaba40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 132866048 unmapped: 14106624 heap: 146972672 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 212 handle_osd_map epochs [212,213], i have 212, src has [1,213]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 213 ms_handle_reset con 0x55b81ae88800 session 0x55b81d393c00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 213 ms_handle_reset con 0x55b81b8bc000 session 0x55b81ac05dc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 213 ms_handle_reset con 0x55b81b559400 session 0x55b81a801500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 132890624 unmapped: 14082048 heap: 146972672 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 213 ms_handle_reset con 0x55b81b8bc000 session 0x55b81ac04e00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 132194304 unmapped: 14778368 heap: 146972672 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 213 heartbeat osd_stat(store_statfs(0x4f7430000/0x0/0x4ffc00000, data 0x4912aa6/0x4a52000, compress 0x0/0x0/0x0, omap 0x2e4e1, meta 0x3d41b1f), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 132349952 unmapped: 14622720 heap: 146972672 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1966573 data_alloc: 234881024 data_used: 18279139
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 213 ms_handle_reset con 0x55b81b558c00 session 0x55b81dfab500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 132268032 unmapped: 23101440 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 213 ms_handle_reset con 0x55b81b559000 session 0x55b81caec1c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 214 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81ac2bdc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 214 ms_handle_reset con 0x55b81ae88800 session 0x55b81a8008c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 22872064 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 214 heartbeat osd_stat(store_statfs(0x4f6af1000/0x0/0x4ffc00000, data 0x5258642/0x5399000, compress 0x0/0x0/0x0, omap 0x2e803, meta 0x3d417fd), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 132530176 unmapped: 22839296 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 214 ms_handle_reset con 0x55b81b558c00 session 0x55b81dddddc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 215 ms_handle_reset con 0x55b81b559400 session 0x55b81dfab340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 133595136 unmapped: 21774336 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 216 ms_handle_reset con 0x55b81b559000 session 0x55b81d095880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 216 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d136fc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 133611520 unmapped: 21757952 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2037537 data_alloc: 234881024 data_used: 18280744
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 21741568 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 216 handle_osd_map epochs [216,217], i have 216, src has [1,217]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.641528130s of 10.264565468s, submitted: 60
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 217 ms_handle_reset con 0x55b81ae88800 session 0x55b81ac2ae00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 133521408 unmapped: 21848064 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 217 heartbeat osd_stat(store_statfs(0x4f6aea000/0x0/0x4ffc00000, data 0x525bd7a/0x539f000, compress 0x0/0x0/0x0, omap 0x2f114, meta 0x3d40eec), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 133554176 unmapped: 21815296 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 217 ms_handle_reset con 0x55b81ceb5800 session 0x55b81a770540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 133554176 unmapped: 21815296 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 133554176 unmapped: 21815296 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 217 ms_handle_reset con 0x55b81aa7f400 session 0x55b81db07340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 217 ms_handle_reset con 0x55b81b559800 session 0x55b81de1a8c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 217 ms_handle_reset con 0x55b81de8b800 session 0x55b81dddc700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1885025 data_alloc: 234881024 data_used: 12899112
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 26992640 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 217 heartbeat osd_stat(store_statfs(0x4f7c0b000/0x0/0x4ffc00000, data 0x3df18e5/0x3f34000, compress 0x0/0x0/0x0, omap 0x2fae3, meta 0x3d4051d), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 26992640 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 26992640 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 217 ms_handle_reset con 0x55b81aa7f400 session 0x55b81df04000
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 217 heartbeat osd_stat(store_statfs(0x4f7c08000/0x0/0x4ffc00000, data 0x3df48e5/0x3f37000, compress 0x0/0x0/0x0, omap 0x2fae3, meta 0x3d4051d), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 27000832 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 217 ms_handle_reset con 0x55b81d34e000 session 0x55b81b4f3dc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 217 ms_handle_reset con 0x55b81d34fc00 session 0x55b81b38ac40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 27000832 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 217 ms_handle_reset con 0x55b81d42c800 session 0x55b81a770a80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 217 ms_handle_reset con 0x55b81e0a5c00 session 0x55b81ac2b6c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1887995 data_alloc: 234881024 data_used: 12907171
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 217 handle_osd_map epochs [217,218], i have 217, src has [1,218]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 218 ms_handle_reset con 0x55b81d34ec00 session 0x55b81d094c40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 128057344 unmapped: 27312128 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 218 ms_handle_reset con 0x55b81d34fc00 session 0x55b81d4f3500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 218 handle_osd_map epochs [218,219], i have 218, src has [1,219]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 132169728 unmapped: 23199744 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.604388237s of 10.897364616s, submitted: 142
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 ms_handle_reset con 0x55b81ae88800 session 0x55b81da2ec40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 22429696 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 ms_handle_reset con 0x55b81b7d2800 session 0x55b81da2e540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 ms_handle_reset con 0x55b81d42c800 session 0x55b81b8bf6c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 130580480 unmapped: 24788992 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 ms_handle_reset con 0x55b81ae88800 session 0x55b81ac04c40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 heartbeat osd_stat(store_statfs(0x4f9105000/0x0/0x4ffc00000, data 0x2c43f07/0x2d87000, compress 0x0/0x0/0x0, omap 0x30709, meta 0x3d3f8f7), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 ms_handle_reset con 0x55b81b7d2800 session 0x55b81ac2b500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 25255936 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805880 data_alloc: 234881024 data_used: 15749112
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 ms_handle_reset con 0x55b81d34ec00 session 0x55b81dfaaa80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 25255936 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 130129920 unmapped: 25239552 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 heartbeat osd_stat(store_statfs(0x4f9103000/0x0/0x4ffc00000, data 0x2c43f79/0x2d89000, compress 0x0/0x0/0x0, omap 0x30929, meta 0x3d3f6d7), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 ms_handle_reset con 0x55b81d34fc00 session 0x55b81b7aac40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 130129920 unmapped: 25239552 heap: 155369472 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 ms_handle_reset con 0x55b81e0a5c00 session 0x55b81d29c1c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 ms_handle_reset con 0x55b81b7d2800 session 0x55b81ac4ee00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 ms_handle_reset con 0x55b81ae88800 session 0x55b81b443dc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 ms_handle_reset con 0x55b81d34ec00 session 0x55b81caed340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 ms_handle_reset con 0x55b81d34fc00 session 0x55b81a771180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 heartbeat osd_stat(store_statfs(0x4f9105000/0x0/0x4ffc00000, data 0x2c43f07/0x2d87000, compress 0x0/0x0/0x0, omap 0x30b37, meta 0x3d3f4c9), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 130039808 unmapped: 29007872 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 heartbeat osd_stat(store_statfs(0x4f8294000/0x0/0x4ffc00000, data 0x3ab3f17/0x3bf8000, compress 0x0/0x0/0x0, omap 0x30dd1, meta 0x3d3f22f), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 130170880 unmapped: 28876800 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 ms_handle_reset con 0x55b81e0e7800 session 0x55b81dc06e00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 220 ms_handle_reset con 0x55b81ae88800 session 0x55b81aa70c40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 220 heartbeat osd_stat(store_statfs(0x4f8293000/0x0/0x4ffc00000, data 0x3ab3f79/0x3bf9000, compress 0x0/0x0/0x0, omap 0x30dd1, meta 0x3d3f22f), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1938093 data_alloc: 234881024 data_used: 15761435
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 131973120 unmapped: 27074560 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 221 ms_handle_reset con 0x55b81b7d2800 session 0x55b81d392700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 221 ms_handle_reset con 0x55b81e0e7c00 session 0x55b81ac056c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 133660672 unmapped: 25387008 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.204244614s of 10.002515793s, submitted: 227
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 221 ms_handle_reset con 0x55b81d34ec00 session 0x55b81aa62380
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 24682496 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 24682496 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 141099008 unmapped: 17948672 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 221 ms_handle_reset con 0x55b81d0adc00 session 0x55b81dfaa8c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 221 heartbeat osd_stat(store_statfs(0x4f7005000/0x0/0x4ffc00000, data 0x5256715/0x4e87000, compress 0x0/0x0/0x0, omap 0x31cdc, meta 0x3d3e324), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 222 ms_handle_reset con 0x55b81e0e7400 session 0x55b81ac4f500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 222 ms_handle_reset con 0x55b81d34fc00 session 0x55b81a771500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2099300 data_alloc: 234881024 data_used: 15987328
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 222 heartbeat osd_stat(store_statfs(0x4f7005000/0x0/0x4ffc00000, data 0x5256715/0x4e87000, compress 0x0/0x0/0x0, omap 0x31cdc, meta 0x3d3e324), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 24231936 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 24231936 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 223 ms_handle_reset con 0x55b81d0adc00 session 0x55b81cb92000
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 223 ms_handle_reset con 0x55b81ae88800 session 0x55b81b442540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 24141824 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 224 ms_handle_reset con 0x55b81b7d2800 session 0x55b81b4428c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 134955008 unmapped: 24092672 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 224 ms_handle_reset con 0x55b81aa7f400 session 0x55b81ac4fa40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 224 ms_handle_reset con 0x55b81d34e000 session 0x55b81b38b880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 224 ms_handle_reset con 0x55b81d0adc00 session 0x55b81b86e700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 224 ms_handle_reset con 0x55b81d34fc00 session 0x55b81d714380
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 134955008 unmapped: 24092672 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 225 ms_handle_reset con 0x55b81ae88800 session 0x55b81b86f6c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2022690 data_alloc: 234881024 data_used: 15886878
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 225 ms_handle_reset con 0x55b81aa7f400 session 0x55b81ac4f6c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 24199168 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 225 heartbeat osd_stat(store_statfs(0x4f7d39000/0x0/0x4ffc00000, data 0x451fa4b/0x4153000, compress 0x0/0x0/0x0, omap 0x32ebb, meta 0x3d3d145), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 225 ms_handle_reset con 0x55b81e0e7c00 session 0x55b81ac4f880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 24199168 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 225 ms_handle_reset con 0x55b81dfeb800 session 0x55b81a7701c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.039920807s of 10.490316391s, submitted: 145
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 225 ms_handle_reset con 0x55b81dfea800 session 0x55b81ac2afc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 24199168 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 225 heartbeat osd_stat(store_statfs(0x4f8ba6000/0x0/0x4ffc00000, data 0x36b16a9/0x32e6000, compress 0x0/0x0/0x0, omap 0x335b0, meta 0x3d3ca50), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 226 ms_handle_reset con 0x55b81dfe7400 session 0x55b81caeda40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 226 ms_handle_reset con 0x55b81dfe6800 session 0x55b81b38a000
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 226 ms_handle_reset con 0x55b81dfeb800 session 0x55b81d1d8540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 226 ms_handle_reset con 0x55b81e0e7c00 session 0x55b81de1a540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 226 ms_handle_reset con 0x55b81dfea800 session 0x55b81ac04fc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 226 ms_handle_reset con 0x55b81aa7f400 session 0x55b81b442fc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 29843456 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 226 ms_handle_reset con 0x55b81dfea800 session 0x55b81aa63340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 227 ms_handle_reset con 0x55b81dfeb800 session 0x55b81aa636c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129286144 unmapped: 29761536 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 228 ms_handle_reset con 0x55b81e0e7c00 session 0x55b81a771c00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1667241 data_alloc: 218103808 data_used: 7324699
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 228 ms_handle_reset con 0x55b81dfe6800 session 0x55b81da2e380
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129343488 unmapped: 29704192 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129343488 unmapped: 29704192 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 228 heartbeat osd_stat(store_statfs(0x4fab5c000/0x0/0x4ffc00000, data 0x11de657/0x132e000, compress 0x0/0x0/0x0, omap 0x342d1, meta 0x3d3bd2f), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 229 ms_handle_reset con 0x55b81aca0000 session 0x55b81dfaafc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 229 ms_handle_reset con 0x55b81dfe6800 session 0x55b81dfab880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 229 ms_handle_reset con 0x55b81dfea800 session 0x55b81dfaa700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129343488 unmapped: 29704192 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129343488 unmapped: 29704192 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 229 ms_handle_reset con 0x55b81dfeb800 session 0x55b81b8befc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 229 ms_handle_reset con 0x55b81e0e7c00 session 0x55b81a800e00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 229 ms_handle_reset con 0x55b81e0e0400 session 0x55b81de1ac40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 29949952 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 229 ms_handle_reset con 0x55b81dfe6800 session 0x55b81da38540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 229 ms_handle_reset con 0x55b81dfea800 session 0x55b81b443dc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1668796 data_alloc: 218103808 data_used: 7325300
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 229 ms_handle_reset con 0x55b81e0e0800 session 0x55b81caedc00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 29933568 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 230 ms_handle_reset con 0x55b81d4b0000 session 0x55b81b4f3180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 29933568 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 29933568 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.086652756s of 10.417056084s, submitted: 170
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 232 ms_handle_reset con 0x55b81dfeb800 session 0x55b81b443a40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 232 heartbeat osd_stat(store_statfs(0x4fab52000/0x0/0x4ffc00000, data 0x11e37af/0x1336000, compress 0x0/0x0/0x0, omap 0x352fc, meta 0x3d3ad04), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 232 ms_handle_reset con 0x55b81d4b0000 session 0x55b81d1d8000
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.7 total, 600.0 interval#012Cumulative writes: 14K writes, 55K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 14K writes, 4138 syncs, 3.47 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5931 writes, 20K keys, 5931 commit groups, 1.0 writes per commit group, ingest: 13.45 MB, 0.02 MB/s#012Interval WAL: 5931 writes, 2427 syncs, 2.44 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 29933568 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 233 ms_handle_reset con 0x55b81dfe6800 session 0x55b81ac4fdc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 29892608 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1679124 data_alloc: 218103808 data_used: 7325186
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 29892608 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 233 handle_osd_map epochs [233,234], i have 233, src has [1,234]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 234 ms_handle_reset con 0x55b81dfea800 session 0x55b81d1d9dc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 234 ms_handle_reset con 0x55b81e0e0800 session 0x55b81b8bec40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 29892608 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 234 ms_handle_reset con 0x55b81e0e7c00 session 0x55b81de1b500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 31088640 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 235 ms_handle_reset con 0x55b81e0e0000 session 0x55b81b8bf500
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 235 heartbeat osd_stat(store_statfs(0x4fab4e000/0x0/0x4ffc00000, data 0x11e8b55/0x133e000, compress 0x0/0x0/0x0, omap 0x35aee, meta 0x3d3a512), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 31080448 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 31080448 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 235 heartbeat osd_stat(store_statfs(0x4fab48000/0x0/0x4ffc00000, data 0x11ea74c/0x1342000, compress 0x0/0x0/0x0, omap 0x35bb7, meta 0x3d3a449), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1689465 data_alloc: 218103808 data_used: 6801530
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 235 ms_handle_reset con 0x55b81d4b0000 session 0x55b81a771340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 31080448 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 235 ms_handle_reset con 0x55b81dfea800 session 0x55b81da39340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 235 ms_handle_reset con 0x55b81dfe6800 session 0x55b81a800700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 236 ms_handle_reset con 0x55b81e0e0800 session 0x55b81d1d8e00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 31088640 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 236 ms_handle_reset con 0x55b81d4b0000 session 0x55b81de1b6c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 31088640 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.188632965s of 10.533349991s, submitted: 90
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 236 ms_handle_reset con 0x55b81dfe6800 session 0x55b81ac4e1c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 31088640 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 236 ms_handle_reset con 0x55b81e0e0000 session 0x55b81b4421c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 236 ms_handle_reset con 0x55b81de53c00 session 0x55b81ac05880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 236 ms_handle_reset con 0x55b81e0e5c00 session 0x55b81b443340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 236 ms_handle_reset con 0x55b81e0e5c00 session 0x55b81dfabc00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 236 ms_handle_reset con 0x55b81d4b0000 session 0x55b81de1a700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 236 ms_handle_reset con 0x55b81de53c00 session 0x55b81ac2aa80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 236 ms_handle_reset con 0x55b81dfe6800 session 0x55b81aa62e00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 236 ms_handle_reset con 0x55b81e0e0000 session 0x55b81aa63880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 236 ms_handle_reset con 0x55b81d4b0000 session 0x55b81aa63a40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 236 heartbeat osd_stat(store_statfs(0x4fab47000/0x0/0x4ffc00000, data 0x11ec21f/0x1345000, compress 0x0/0x0/0x0, omap 0x362db, meta 0x3d39d25), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 30818304 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 237 ms_handle_reset con 0x55b81dfea800 session 0x55b81d4f2fc0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1742483 data_alloc: 218103808 data_used: 6801830
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 128262144 unmapped: 30785536 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 128262144 unmapped: 30785536 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 128262144 unmapped: 30785536 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 128262144 unmapped: 30785536 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 237 heartbeat osd_stat(store_statfs(0x4fa314000/0x0/0x4ffc00000, data 0x1a1be18/0x1b76000, compress 0x0/0x0/0x0, omap 0x36880, meta 0x3d39780), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 237 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 128262144 unmapped: 30785536 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 238 ms_handle_reset con 0x55b81a12d800 session 0x55b8190f3340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1745113 data_alloc: 218103808 data_used: 6801830
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 128270336 unmapped: 30777344 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 238 handle_osd_map epochs [238,239], i have 238, src has [1,239]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 239 ms_handle_reset con 0x55b81dfe6800 session 0x55b81ac2b880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: mgrc ms_handle_reset ms_handle_reset con 0x55b81b889400
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2811058765
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2811058765,v1:192.168.122.100:6801/2811058765]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: mgrc handle_mgr_configure stats_period=5
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 239 ms_handle_reset con 0x55b81e0e5c00 session 0x55b81aa62c40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 239 ms_handle_reset con 0x55b81bb32000 session 0x55b81b4f2a80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 30810112 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 239 ms_handle_reset con 0x55b81b8b7800 session 0x55b81d714700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129982464 unmapped: 29065216 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.129989624s of 10.341934204s, submitted: 81
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 239 ms_handle_reset con 0x55b81b429400 session 0x55b81d4f36c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129982464 unmapped: 29065216 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 240 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81b442540
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 240 heartbeat osd_stat(store_statfs(0x4fa2e0000/0x0/0x4ffc00000, data 0x1a493a4/0x1ba8000, compress 0x0/0x0/0x0, omap 0x36ead, meta 0x3d39153), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129982464 unmapped: 29065216 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1817155 data_alloc: 234881024 data_used: 15289896
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 241 ms_handle_reset con 0x55b81dc12400 session 0x55b81dfab500
Jan 31 00:04:55 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 241 ms_handle_reset con 0x55b81e0e5c00 session 0x55b81b8be1c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 241 heartbeat osd_stat(store_statfs(0x4fa2da000/0x0/0x4ffc00000, data 0x1a4cb4e/0x1bb0000, compress 0x0/0x0/0x0, omap 0x37170, meta 0x3d38e90), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 129998848 unmapped: 29048832 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 241 ms_handle_reset con 0x55b81dc12800 session 0x55b81b8be8c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 241 ms_handle_reset con 0x55b81b429400 session 0x55b81ac2a700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 130162688 unmapped: 28884992 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 241 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81d4f21c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 241 ms_handle_reset con 0x55b81dc12400 session 0x55b81b8bfa40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 241 ms_handle_reset con 0x55b81e0e5c00 session 0x55b81caece00
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 28860416 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 241 ms_handle_reset con 0x55b81dc13000 session 0x55b81de1a8c0
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 242 ms_handle_reset con 0x55b81dc12c00 session 0x55b81df68a80
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 130506752 unmapped: 28540928 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 242 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81dc06c40
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 242 ms_handle_reset con 0x55b81dc12400 session 0x55b81b8bf340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 28532736 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1893718 data_alloc: 234881024 data_used: 15290481
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 242 ms_handle_reset con 0x55b81e0e5c00 session 0x55b81da2f880
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 242 ms_handle_reset con 0x55b81b429400 session 0x55b81b8bf180
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 130162688 unmapped: 28884992 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 242 ms_handle_reset con 0x55b81dc13400 session 0x55b81caed340
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 242 heartbeat osd_stat(store_statfs(0x4f9896000/0x0/0x4ffc00000, data 0x248e7a0/0x25f4000, compress 0x0/0x0/0x0, omap 0x37ed5, meta 0x3d3812b), peers [0,2] op hist [])
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 242 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81a800700
Jan 31 00:04:55 np0005603435 ceph-osd[86873]: osd.1 242 ms_handle_reset con 0x55b81dc12400 session 0x55b81de1b6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 28549120 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 242 ms_handle_reset con 0x55b81dc12c00 session 0x55b8190f3340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 242 ms_handle_reset con 0x55b81e0e5c00 session 0x55b81ac05880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 242 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81aa63880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 242 heartbeat osd_stat(store_statfs(0x4f9001000/0x0/0x4ffc00000, data 0x2d1e73e/0x2e83000, compress 0x0/0x0/0x0, omap 0x3823c, meta 0x3d37dc4), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136093696 unmapped: 22953984 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 242 ms_handle_reset con 0x55b81dc12400 session 0x55b81aa63a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 242 ms_handle_reset con 0x55b81dc12c00 session 0x55b81aa62c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 242 ms_handle_reset con 0x55b81dc13400 session 0x55b81d4f2fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 22659072 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 242 ms_handle_reset con 0x55b81d85a000 session 0x55b81b8be380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.037950516s of 10.751073837s, submitted: 193
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 242 heartbeat osd_stat(store_statfs(0x4f8fd8000/0x0/0x4ffc00000, data 0x2d486dc/0x2eac000, compress 0x0/0x0/0x0, omap 0x3851d, meta 0x3d37ae3), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 242 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81caed880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147611648 unmapped: 11436032 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2025511 data_alloc: 234881024 data_used: 26321009
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 146399232 unmapped: 12648448 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 243 ms_handle_reset con 0x55b81dc12400 session 0x55b81b38ba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 243 ms_handle_reset con 0x55b81dc12c00 session 0x55b81da388c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 146407424 unmapped: 12640256 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 146407424 unmapped: 12640256 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 146407424 unmapped: 12640256 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 243 heartbeat osd_stat(store_statfs(0x4f8fdc000/0x0/0x4ffc00000, data 0x2d4d292/0x2eb0000, compress 0x0/0x0/0x0, omap 0x386af, meta 0x3d37951), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 146407424 unmapped: 12640256 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 243 ms_handle_reset con 0x55b81dc13400 session 0x55b81dfaa540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 243 heartbeat osd_stat(store_statfs(0x4f8fdc000/0x0/0x4ffc00000, data 0x2d4d292/0x2eb0000, compress 0x0/0x0/0x0, omap 0x386af, meta 0x3d37951), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2025431 data_alloc: 234881024 data_used: 26320895
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 146407424 unmapped: 12640256 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 243 ms_handle_reset con 0x55b81d85a800 session 0x55b81aa62700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 146669568 unmapped: 12378112 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 243 ms_handle_reset con 0x55b81dc12400 session 0x55b81d1d81c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 243 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81aa62e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 243 ms_handle_reset con 0x55b81dc12c00 session 0x55b81da381c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 243 ms_handle_reset con 0x55b81dc13400 session 0x55b81b442380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 146669568 unmapped: 12378112 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 243 ms_handle_reset con 0x55b81d85b000 session 0x55b81de1a700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 243 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81ac2aa80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 243 ms_handle_reset con 0x55b81dc12400 session 0x55b81da38fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 149856256 unmapped: 9191424 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 243 handle_osd_map epochs [243,244], i have 243, src has [1,244]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 244 ms_handle_reset con 0x55b81dc12c00 session 0x55b81b8bf500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.915584564s of 10.366913795s, submitted: 140
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 151797760 unmapped: 7249920 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 244 ms_handle_reset con 0x55b81dc12800 session 0x55b81caed180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 244 ms_handle_reset con 0x55b8218e0000 session 0x55b81caed500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 245 ms_handle_reset con 0x55b81dc13400 session 0x55b81db06380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 245 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81df056c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 245 ms_handle_reset con 0x55b81d85ac00 session 0x55b81b38a1c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2128241 data_alloc: 234881024 data_used: 26665999
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 153165824 unmapped: 5881856 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 245 heartbeat osd_stat(store_statfs(0x4f824e000/0x0/0x4ffc00000, data 0x3abafc5/0x3c24000, compress 0x0/0x0/0x0, omap 0x3965f, meta 0x3d369a1), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 245 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 246 ms_handle_reset con 0x55b81dc12400 session 0x55b81de1ba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152190976 unmapped: 6856704 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 246 ms_handle_reset con 0x55b81dc12800 session 0x55b81df688c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 246 ms_handle_reset con 0x55b81d85ac00 session 0x55b81a771880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 246 ms_handle_reset con 0x55b81dc12400 session 0x55b81d230e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152190976 unmapped: 6856704 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 246 ms_handle_reset con 0x55b81dc12800 session 0x55b81b38b500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 246 handle_osd_map epochs [246,247], i have 246, src has [1,247]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 247 ms_handle_reset con 0x55b81dc13400 session 0x55b81b38a700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152199168 unmapped: 6848512 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 247 ms_handle_reset con 0x55b8218e0400 session 0x55b81ab5d180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152346624 unmapped: 6701056 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 248 ms_handle_reset con 0x55b81dc12c00 session 0x55b81b8bf500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 248 heartbeat osd_stat(store_statfs(0x4f8262000/0x0/0x4ffc00000, data 0x3abe5e0/0x3c2a000, compress 0x0/0x0/0x0, omap 0x39aa0, meta 0x3d36560), peers [0,2] op hist [0,0,0,0,0,1,1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 248 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81a800380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2129186 data_alloc: 234881024 data_used: 26666682
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 248 ms_handle_reset con 0x55b81d85ac00 session 0x55b81de1a1c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152354816 unmapped: 6692864 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 248 ms_handle_reset con 0x55b81dc12400 session 0x55b81d4f2a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152363008 unmapped: 6684672 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 248 heartbeat osd_stat(store_statfs(0x4f825c000/0x0/0x4ffc00000, data 0x3ac01de/0x3c2e000, compress 0x0/0x0/0x0, omap 0x39c4b, meta 0x3d363b5), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152363008 unmapped: 6684672 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 249 ms_handle_reset con 0x55b81dc12800 session 0x55b81b887a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 249 ms_handle_reset con 0x55b81dc13400 session 0x55b81b887880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152182784 unmapped: 6864896 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 249 heartbeat osd_stat(store_statfs(0x4f825c000/0x0/0x4ffc00000, data 0x3ac1cfa/0x3c2e000, compress 0x0/0x0/0x0, omap 0x3a369, meta 0x3d35c97), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152182784 unmapped: 6864896 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 249 ms_handle_reset con 0x55b81d85ac00 session 0x55b81df68540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.074841499s of 10.528203011s, submitted: 125
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 249 heartbeat osd_stat(store_statfs(0x4f825c000/0x0/0x4ffc00000, data 0x3ac1cfa/0x3c2e000, compress 0x0/0x0/0x0, omap 0x3a369, meta 0x3d35c97), peers [0,2] op hist [0,0,0,0,0,1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2130229 data_alloc: 234881024 data_used: 26667274
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152166400 unmapped: 6881280 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152166400 unmapped: 6881280 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 250 ms_handle_reset con 0x55b8218e0800 session 0x55b81dfab6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152297472 unmapped: 6750208 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 251 ms_handle_reset con 0x55b81dc12c00 session 0x55b81b38bdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 251 ms_handle_reset con 0x55b8218e0c00 session 0x55b81b8bf880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 251 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81dfaba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 251 heartbeat osd_stat(store_statfs(0x4f8253000/0x0/0x4ffc00000, data 0x3ac5a1d/0x3c37000, compress 0x0/0x0/0x0, omap 0x3a620, meta 0x3d359e0), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152461312 unmapped: 6586368 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152461312 unmapped: 6586368 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 251 ms_handle_reset con 0x55b81dc12400 session 0x55b81df68a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2140794 data_alloc: 234881024 data_used: 26923274
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 251 ms_handle_reset con 0x55b81dc12c00 session 0x55b81ac2a000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 151543808 unmapped: 7503872 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 252 ms_handle_reset con 0x55b81d85ac00 session 0x55b81caece00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 253 ms_handle_reset con 0x55b81dc13400 session 0x55b81b7aa700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 253 ms_handle_reset con 0x55b81dc13c00 session 0x55b81ac04540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 253 ms_handle_reset con 0x55b81dc13800 session 0x55b81b86f340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 151552000 unmapped: 7495680 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 254 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81df68700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 254 ms_handle_reset con 0x55b81dc12400 session 0x55b81b38ba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 254 heartbeat osd_stat(store_statfs(0x4f824c000/0x0/0x4ffc00000, data 0x3ac8bb9/0x3c3c000, compress 0x0/0x0/0x0, omap 0x3aaac, meta 0x3d35554), peers [0,2] op hist [1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 254 ms_handle_reset con 0x55b81d85ac00 session 0x55b81d1d88c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145743872 unmapped: 13303808 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 254 ms_handle_reset con 0x55b81bb33c00 session 0x55b81b443c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 254 ms_handle_reset con 0x55b81dc13800 session 0x55b81d1d8c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 254 ms_handle_reset con 0x55b81d85a400 session 0x55b81cef1340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145743872 unmapped: 13303808 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 255 ms_handle_reset con 0x55b81dc13c00 session 0x55b81aa63180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 255 ms_handle_reset con 0x55b81de51000 session 0x55b81cef0c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 255 ms_handle_reset con 0x55b81dc12000 session 0x55b81ac2a1c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145743872 unmapped: 13303808 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 255 handle_osd_map epochs [255,256], i have 255, src has [1,256]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.642595291s of 10.042186737s, submitted: 182
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 256 ms_handle_reset con 0x55b81dc12c00 session 0x55b81ac4f500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 256 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81b8be380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1813890 data_alloc: 218103808 data_used: 7188828
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 256 ms_handle_reset con 0x55b81dc12000 session 0x55b81aa628c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 256 ms_handle_reset con 0x55b81d85a400 session 0x55b81ac4e8c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 256 ms_handle_reset con 0x55b81dc13800 session 0x55b81de1a700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 22650880 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 22650880 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 256 heartbeat osd_stat(store_statfs(0x4fa8da000/0x0/0x4ffc00000, data 0x120ea3a/0x1383000, compress 0x0/0x0/0x0, omap 0x3db2f, meta 0x3d324d1), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 256 ms_handle_reset con 0x55b81d85a400 session 0x55b81da39340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 22634496 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 256 heartbeat osd_stat(store_statfs(0x4fab07000/0x0/0x4ffc00000, data 0x120eaac/0x1385000, compress 0x0/0x0/0x0, omap 0x3db2f, meta 0x3d324d1), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 257 ms_handle_reset con 0x55b81dc12c00 session 0x55b81b38bc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 22634496 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 258 ms_handle_reset con 0x55b81dc12000 session 0x55b81ac4fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 258 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81b442a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 22634496 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 258 ms_handle_reset con 0x55b81de51000 session 0x55b81aa70fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 258 handle_osd_map epochs [258,259], i have 258, src has [1,259]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 259 heartbeat osd_stat(store_statfs(0x4faafd000/0x0/0x4ffc00000, data 0x121223c/0x138b000, compress 0x0/0x0/0x0, omap 0x3dfe1, meta 0x3d3201f), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1812999 data_alloc: 218103808 data_used: 6803212
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 22618112 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 260 heartbeat osd_stat(store_statfs(0x4faaf7000/0x0/0x4ffc00000, data 0x1215b02/0x1391000, compress 0x0/0x0/0x0, omap 0x3e623, meta 0x3d319dd), peers [0,2] op hist [0,0,0,0,0,0,0,2])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136445952 unmapped: 22601728 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 260 ms_handle_reset con 0x55b81dc13c00 session 0x55b81b7aaa80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 260 handle_osd_map epochs [260,261], i have 260, src has [1,261]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 260 ms_handle_reset con 0x55b81dc12000 session 0x55b81d4f3dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 22593536 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 261 heartbeat osd_stat(store_statfs(0x4faaf7000/0x0/0x4ffc00000, data 0x1215ab0/0x1391000, compress 0x0/0x0/0x0, omap 0x3e623, meta 0x3d319dd), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 261 heartbeat osd_stat(store_statfs(0x4faaf2000/0x0/0x4ffc00000, data 0x12176bc/0x1394000, compress 0x0/0x0/0x0, omap 0x3eaad, meta 0x3d31553), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 22568960 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 261 handle_osd_map epochs [261,262], i have 261, src has [1,262]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 262 handle_osd_map epochs [262,262], i have 262, src has [1,262]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 262 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81ac2a1c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 262 ms_handle_reset con 0x55b81dc12c00 session 0x55b81da2efc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 22552576 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.671566010s of 10.007779121s, submitted: 194
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1825652 data_alloc: 218103808 data_used: 6803895
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136519680 unmapped: 22528000 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 263 ms_handle_reset con 0x55b8218e0800 session 0x55b81caed180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 263 ms_handle_reset con 0x55b81d85a400 session 0x55b81df636c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136519680 unmapped: 22528000 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 263 ms_handle_reset con 0x55b8218e1000 session 0x55b81b443180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 263 heartbeat osd_stat(store_statfs(0x4faaf3000/0x0/0x4ffc00000, data 0x121ae8c/0x1399000, compress 0x0/0x0/0x0, omap 0x3efd9, meta 0x3d31027), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137576448 unmapped: 21471232 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 263 heartbeat osd_stat(store_statfs(0x4faaf3000/0x0/0x4ffc00000, data 0x121ae8c/0x1399000, compress 0x0/0x0/0x0, omap 0x3efd9, meta 0x3d31027), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137592832 unmapped: 21454848 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 263 handle_osd_map epochs [263,264], i have 263, src has [1,264]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 264 ms_handle_reset con 0x55b81dc12000 session 0x55b81b8bf880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 21446656 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1828649 data_alloc: 218103808 data_used: 6803895
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 264 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81dfaba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 264 ms_handle_reset con 0x55b81dc12c00 session 0x55b81ac04380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 21405696 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 21389312 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 265 ms_handle_reset con 0x55b81dc12c00 session 0x55b81a800700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 265 heartbeat osd_stat(store_statfs(0x4faaec000/0x0/0x4ffc00000, data 0x121e650/0x139c000, compress 0x0/0x0/0x0, omap 0x3faf9, meta 0x3d30507), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 21389312 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 21389312 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 21389312 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1829759 data_alloc: 218103808 data_used: 6803781
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 21389312 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 21389312 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.645518303s of 11.900569916s, submitted: 157
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 265 ms_handle_reset con 0x55b81d85a400 session 0x55b81d4f3880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 265 heartbeat osd_stat(store_statfs(0x4faaee000/0x0/0x4ffc00000, data 0x121e6c1/0x139e000, compress 0x0/0x0/0x0, omap 0x3fdbd, meta 0x3d30243), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 21389312 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 266 ms_handle_reset con 0x55b81dc12000 session 0x55b81db06380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 266 ms_handle_reset con 0x55b81dc13c00 session 0x55b81caec8c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138108928 unmapped: 20938752 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 267 ms_handle_reset con 0x55b8218e1000 session 0x55b81aa63500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 267 ms_handle_reset con 0x55b81d85a400 session 0x55b81ac4fa40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 267 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81ac2ba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 267 ms_handle_reset con 0x55b8218e1400 session 0x55b81da396c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138133504 unmapped: 20914176 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 267 heartbeat osd_stat(store_statfs(0x4faae3000/0x0/0x4ffc00000, data 0x122235d/0x13a5000, compress 0x0/0x0/0x0, omap 0x4042f, meta 0x3d2fbd1), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1842649 data_alloc: 218103808 data_used: 6803879
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 267 ms_handle_reset con 0x55b81dc12c00 session 0x55b81d095340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 267 handle_osd_map epochs [267,268], i have 267, src has [1,268]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 268 ms_handle_reset con 0x55b81dc12000 session 0x55b81de1a8c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138141696 unmapped: 20905984 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 268 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81de1b6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138141696 unmapped: 20905984 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 268 ms_handle_reset con 0x55b81d85a400 session 0x55b81b38ac40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 268 ms_handle_reset con 0x55b81dc12c00 session 0x55b81ac4f500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 21348352 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 268 ms_handle_reset con 0x55b81dc12000 session 0x55b81b443340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 268 ms_handle_reset con 0x55b8218e1400 session 0x55b81ced4700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 268 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81cef0c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138878976 unmapped: 27516928 heap: 166395904 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138878976 unmapped: 27516928 heap: 166395904 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 268 ms_handle_reset con 0x55b81dc12c00 session 0x55b81b887500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 268 heartbeat osd_stat(store_statfs(0x4fa58b000/0x0/0x4ffc00000, data 0x17790a1/0x1901000, compress 0x0/0x0/0x0, omap 0x408b0, meta 0x3d2f750), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1897009 data_alloc: 218103808 data_used: 6803977
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 269 ms_handle_reset con 0x55b81dc12000 session 0x55b81b8bec40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 269 ms_handle_reset con 0x55b81dc13c00 session 0x55b81b4f2000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138903552 unmapped: 27492352 heap: 166395904 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 269 heartbeat osd_stat(store_statfs(0x4fa58a000/0x0/0x4ffc00000, data 0x1779103/0x1902000, compress 0x0/0x0/0x0, omap 0x408b0, meta 0x3d2f750), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 269 ms_handle_reset con 0x55b8218e1800 session 0x55b81aa62fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 270 heartbeat osd_stat(store_statfs(0x4fa584000/0x0/0x4ffc00000, data 0x177ad1d/0x1906000, compress 0x0/0x0/0x0, omap 0x40ff5, meta 0x3d2f00b), peers [0,2] op hist [0,0,0,0,0,1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 270 ms_handle_reset con 0x55b8218e1000 session 0x55b81b38b6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 270 ms_handle_reset con 0x55b81dc12000 session 0x55b81da2fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138518528 unmapped: 27877376 heap: 166395904 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 270 ms_handle_reset con 0x55b81d85a400 session 0x55b81b443340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 270 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81b4f2a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.375273705s of 10.478682518s, submitted: 139
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 271 ms_handle_reset con 0x55b81dc13c00 session 0x55b81da39dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 271 ms_handle_reset con 0x55b81dc12c00 session 0x55b81d1368c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 271 ms_handle_reset con 0x55b81dc13c00 session 0x55b81a800700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 271 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81ac2ae00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138526720 unmapped: 27869184 heap: 166395904 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138526720 unmapped: 27869184 heap: 166395904 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 271 ms_handle_reset con 0x55b81d85a400 session 0x55b81b38bc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 271 ms_handle_reset con 0x55b81dc12000 session 0x55b81b86f880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138551296 unmapped: 27844608 heap: 166395904 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1908693 data_alloc: 218103808 data_used: 6804660
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 27828224 heap: 166395904 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 272 ms_handle_reset con 0x55b81dc12000 session 0x55b81ced4700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 272 ms_handle_reset con 0x55b81dc12c00 session 0x55b81b8bf180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 272 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81d4f2c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 272 ms_handle_reset con 0x55b81dc13c00 session 0x55b81b8bfdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 272 heartbeat osd_stat(store_statfs(0x4f9ad7000/0x0/0x4ffc00000, data 0x2223127/0x23b3000, compress 0x0/0x0/0x0, omap 0x42ab9, meta 0x3d2d547), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 272 ms_handle_reset con 0x55b8218e1c00 session 0x55b81ac2b880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138321920 unmapped: 34381824 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 272 handle_osd_map epochs [272,273], i have 272, src has [1,273]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 273 ms_handle_reset con 0x55b8218e1000 session 0x55b81d4f2540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 273 ms_handle_reset con 0x55b81d85a400 session 0x55b81aa63a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 273 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81ac056c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138338304 unmapped: 34365440 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 273 ms_handle_reset con 0x55b81dc12000 session 0x55b81b442a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 273 ms_handle_reset con 0x55b81dc12c00 session 0x55b81da2efc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138346496 unmapped: 34357248 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138346496 unmapped: 34357248 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 274 ms_handle_reset con 0x55b81dc13c00 session 0x55b81caecc40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1988471 data_alloc: 218103808 data_used: 6805387
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138354688 unmapped: 34349056 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 275 heartbeat osd_stat(store_statfs(0x4f9ad1000/0x0/0x4ffc00000, data 0x22268df/0x23b9000, compress 0x0/0x0/0x0, omap 0x4337e, meta 0x3d2cc82), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 275 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81aa71180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138403840 unmapped: 34299904 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 275 ms_handle_reset con 0x55b81d85a400 session 0x55b81df69500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.215954304s of 10.120274544s, submitted: 172
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138403840 unmapped: 34299904 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138354688 unmapped: 34349056 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 275 heartbeat osd_stat(store_statfs(0x4f9ad0000/0x0/0x4ffc00000, data 0x2228308/0x23ba000, compress 0x0/0x0/0x0, omap 0x43b29, meta 0x3d2c4d7), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 275 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 275 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140656640 unmapped: 32047104 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2053231 data_alloc: 234881024 data_used: 16820189
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140656640 unmapped: 32047104 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 276 ms_handle_reset con 0x55b81dc12c00 session 0x55b81d4f2000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 277 heartbeat osd_stat(store_statfs(0x4f9acd000/0x0/0x4ffc00000, data 0x2229f14/0x23bd000, compress 0x0/0x0/0x0, omap 0x43edd, meta 0x3d2c123), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 277 ms_handle_reset con 0x55b81b8bd800 session 0x55b81aa62a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 277 ms_handle_reset con 0x55b81dc12000 session 0x55b81d370540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 32194560 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 32178176 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 277 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81df62c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 32169984 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 32153600 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2059076 data_alloc: 234881024 data_used: 17344394
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 32129024 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 278 ms_handle_reset con 0x55b81dc12c00 session 0x55b81a800fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 278 heartbeat osd_stat(store_statfs(0x4f9aca000/0x0/0x4ffc00000, data 0x222d53d/0x23c2000, compress 0x0/0x0/0x0, omap 0x44843, meta 0x3d2b7bd), peers [0,2] op hist [0,0,0,0,1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140582912 unmapped: 32120832 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.583833694s of 10.010669708s, submitted: 96
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 278 ms_handle_reset con 0x55b81d85a400 session 0x55b81b7abc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 278 ms_handle_reset con 0x55b81b8bd400 session 0x55b81cef0c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 278 ms_handle_reset con 0x55b81dc13c00 session 0x55b81ced56c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140632064 unmapped: 32071680 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 278 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81b38a8c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147136512 unmapped: 25567232 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 279 ms_handle_reset con 0x55b81dc12000 session 0x55b81d136000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147537920 unmapped: 25165824 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 279 heartbeat osd_stat(store_statfs(0x4f8c4f000/0x0/0x4ffc00000, data 0x30a5c3a/0x323a000, compress 0x0/0x0/0x0, omap 0x44ee1, meta 0x3d2b11f), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 279 ms_handle_reset con 0x55b81d85a400 session 0x55b81aa62fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2157925 data_alloc: 234881024 data_used: 18143516
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147546112 unmapped: 25157632 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 279 ms_handle_reset con 0x55b81dc12c00 session 0x55b81da38c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 25837568 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 280 ms_handle_reset con 0x55b81d85a400 session 0x55b81dddda40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 146939904 unmapped: 25763840 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 281 ms_handle_reset con 0x55b81dc12000 session 0x55b81da39180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 146956288 unmapped: 25747456 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 282 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81da2fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 282 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81d230e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 282 ms_handle_reset con 0x55b81dc13c00 session 0x55b81dfaac40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147021824 unmapped: 25681920 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2122863 data_alloc: 234881024 data_used: 18290362
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 282 heartbeat osd_stat(store_statfs(0x4f93d9000/0x0/0x4ffc00000, data 0x291add7/0x2ab1000, compress 0x0/0x0/0x0, omap 0x45ab8, meta 0x3d2a548), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147021824 unmapped: 25681920 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 282 handle_osd_map epochs [284,284], i have 282, src has [1,284]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 282 handle_osd_map epochs [283,284], i have 282, src has [1,284]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147038208 unmapped: 25665536 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 284 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81da39c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147070976 unmapped: 25632768 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 284 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81b38aa80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147070976 unmapped: 25632768 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147079168 unmapped: 25624576 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 284 ms_handle_reset con 0x55b81d85a400 session 0x55b81ac4f340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.923368454s of 12.897541046s, submitted: 236
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2129695 data_alloc: 234881024 data_used: 18298554
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147079168 unmapped: 25624576 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 285 ms_handle_reset con 0x55b81dc12000 session 0x55b81b38b500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 285 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x2920046/0x2aba000, compress 0x0/0x0/0x0, omap 0x4602d, meta 0x3d29fd3), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147111936 unmapped: 25591808 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 286 ms_handle_reset con 0x55b81de50800 session 0x55b81dfab340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147111936 unmapped: 25591808 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 286 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81a771dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147120128 unmapped: 25583616 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147120128 unmapped: 25583616 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 286 ms_handle_reset con 0x55b81b7d5000 session 0x55b81de1a000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2145526 data_alloc: 234881024 data_used: 18314938
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147120128 unmapped: 25583616 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147120128 unmapped: 25583616 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 286 ms_handle_reset con 0x55b81d85a400 session 0x55b81d4f3340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 287 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81df636c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 287 ms_handle_reset con 0x55b81dc12000 session 0x55b81b7ab6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 287 heartbeat osd_stat(store_statfs(0x4f91c8000/0x0/0x4ffc00000, data 0x2b248eb/0x2cc2000, compress 0x0/0x0/0x0, omap 0x4698d, meta 0x3d29673), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147144704 unmapped: 25559040 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147169280 unmapped: 25534464 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 288 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81aa63180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 288 ms_handle_reset con 0x55b81b7d5000 session 0x55b81d231180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 288 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81a771340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 23117824 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 288 ms_handle_reset con 0x55b81d85a400 session 0x55b81d4f2c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.877266884s of 10.104944229s, submitted: 87
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 289 ms_handle_reset con 0x55b81e0e6000 session 0x55b81d4f3dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2261435 data_alloc: 234881024 data_used: 25721116
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 289 ms_handle_reset con 0x55b81b7d5000 session 0x55b81b38a380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 289 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81aa62a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 289 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81ac2a700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 162783232 unmapped: 9920512 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81d85a400 session 0x55b81aa63c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81e0e6000 session 0x55b81a801880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 162881536 unmapped: 9822208 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81b7d5000 session 0x55b81d4f3a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81b8be380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81ac2ae00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 290 heartbeat osd_stat(store_statfs(0x4f77d5000/0x0/0x4ffc00000, data 0x3374061/0x3517000, compress 0x0/0x0/0x0, omap 0x48101, meta 0x4ec7eff), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81d243c00 session 0x55b81b86f340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81d85a400 session 0x55b81d4f3340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81de2c000 session 0x55b81d4f2700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81b7d5000 session 0x55b81b38b880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154189824 unmapped: 18513920 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81d1368c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81ced4380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154189824 unmapped: 18513920 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81de2d000 session 0x55b81b38aa80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81d243c00 session 0x55b81ac2b880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154198016 unmapped: 18505728 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2254156 data_alloc: 234881024 data_used: 25722711
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154206208 unmapped: 18497536 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 291 ms_handle_reset con 0x55b81b7d5000 session 0x55b81d1376c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154222592 unmapped: 18481152 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 292 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81aa63180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 292 heartbeat osd_stat(store_statfs(0x4f77c8000/0x0/0x4ffc00000, data 0x337788b/0x3520000, compress 0x0/0x0/0x0, omap 0x494f4, meta 0x4ec6b0c), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 292 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81ac2bdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 292 ms_handle_reset con 0x55b81de2c000 session 0x55b81b8bf180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154099712 unmapped: 18604032 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 292 ms_handle_reset con 0x55b81de2c000 session 0x55b81ab5d340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 292 ms_handle_reset con 0x55b81b7d5000 session 0x55b81b8be000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 292 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81d370540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 292 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81b887dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 292 ms_handle_reset con 0x55b81d243c00 session 0x55b81df69a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154124288 unmapped: 18579456 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 293 ms_handle_reset con 0x55b81b7d5000 session 0x55b8190f3340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 293 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81b86fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 293 ms_handle_reset con 0x55b81de2c000 session 0x55b81b8bfdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154140672 unmapped: 18563072 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.855691910s of 10.001125336s, submitted: 233
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2266773 data_alloc: 234881024 data_used: 25723037
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154148864 unmapped: 18554880 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 293 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81ac05c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 293 handle_osd_map epochs [293,294], i have 294, src has [1,294]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 294 ms_handle_reset con 0x55b81de4e000 session 0x55b81ac2b340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154140672 unmapped: 18563072 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 294 ms_handle_reset con 0x55b81cf19000 session 0x55b81caec700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 294 heartbeat osd_stat(store_statfs(0x4f77c6000/0x0/0x4ffc00000, data 0x337af08/0x3524000, compress 0x0/0x0/0x0, omap 0x4a405, meta 0x4ec5bfb), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154140672 unmapped: 18563072 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 294 ms_handle_reset con 0x55b81b7d5000 session 0x55b81de1ae00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154148864 unmapped: 18554880 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 294 heartbeat osd_stat(store_statfs(0x4f77c5000/0x0/0x4ffc00000, data 0x337af2b/0x3525000, compress 0x0/0x0/0x0, omap 0x4a405, meta 0x4ec5bfb), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 294 ms_handle_reset con 0x55b81de2c000 session 0x55b81a8016c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 294 heartbeat osd_stat(store_statfs(0x4f77c5000/0x0/0x4ffc00000, data 0x337af2b/0x3525000, compress 0x0/0x0/0x0, omap 0x4a489, meta 0x4ec5b77), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154607616 unmapped: 18096128 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 294 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81caedc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2274365 data_alloc: 234881024 data_used: 26831406
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 295 ms_handle_reset con 0x55b81d2acc00 session 0x55b81caed340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154542080 unmapped: 18161664 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 295 ms_handle_reset con 0x55b81d2acc00 session 0x55b81b38b6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 295 handle_osd_map epochs [296,296], i have 296, src has [1,296]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 296 ms_handle_reset con 0x55b81b7d5000 session 0x55b81dfaa1c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 296 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81da2efc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154550272 unmapped: 18153472 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 296 heartbeat osd_stat(store_statfs(0x4f77be000/0x0/0x4ffc00000, data 0x337e552/0x352a000, compress 0x0/0x0/0x0, omap 0x4ab7d, meta 0x4ec5483), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154566656 unmapped: 18137088 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 296 ms_handle_reset con 0x55b81dfe7000 session 0x55b81d4f2c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 297 ms_handle_reset con 0x55b81dfe7c00 session 0x55b81b38a700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154624000 unmapped: 18079744 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 297 ms_handle_reset con 0x55b81b7d5000 session 0x55b81caed880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 297 ms_handle_reset con 0x55b81dfe7c00 session 0x55b81df68540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 297 ms_handle_reset con 0x55b81d2acc00 session 0x55b81b38b6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 18006016 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 297 heartbeat osd_stat(store_statfs(0x4f77bd000/0x0/0x4ffc00000, data 0x3380142/0x352d000, compress 0x0/0x0/0x0, omap 0x4ae2d, meta 0x4ec51d3), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2282531 data_alloc: 234881024 data_used: 26835486
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 18006016 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.603608131s of 10.748081207s, submitted: 90
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 297 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81b442e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 297 ms_handle_reset con 0x55b81dfe7000 session 0x55b81d1376c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 155762688 unmapped: 16941056 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 155762688 unmapped: 16941056 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 298 heartbeat osd_stat(store_statfs(0x4f77ba000/0x0/0x4ffc00000, data 0x3381bc1/0x3530000, compress 0x0/0x0/0x0, omap 0x4b3ce, meta 0x4ec4c32), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 299 ms_handle_reset con 0x55b81b7d5000 session 0x55b81dfaac40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 299 ms_handle_reset con 0x55b81dfe7000 session 0x55b81d1368c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 299 ms_handle_reset con 0x55b81d2acc00 session 0x55b81b86f340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 156164096 unmapped: 16539648 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 299 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81ac2bdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 299 ms_handle_reset con 0x55b81dfe7c00 session 0x55b81caec700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 299 heartbeat osd_stat(store_statfs(0x4f76bb000/0x0/0x4ffc00000, data 0x347f75d/0x362f000, compress 0x0/0x0/0x0, omap 0x4b569, meta 0x4ec4a97), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 299 ms_handle_reset con 0x55b81b7d5000 session 0x55b81a8016c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 156745728 unmapped: 23838720 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2372891 data_alloc: 251658240 data_used: 28237955
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 299 ms_handle_reset con 0x55b81d2acc00 session 0x55b81df68380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 156745728 unmapped: 23838720 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 300 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81ac2b500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 300 ms_handle_reset con 0x55b81dfe7000 session 0x55b81b8868c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 300 ms_handle_reset con 0x55b81b2b5000 session 0x55b81da2e000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 156803072 unmapped: 23781376 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 300 ms_handle_reset con 0x55b81b7d5000 session 0x55b81ac2b6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 300 ms_handle_reset con 0x55b81d2acc00 session 0x55b81b38a000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 300 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81da39dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 156803072 unmapped: 23781376 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 300 ms_handle_reset con 0x55b81dfe7000 session 0x55b81b887500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 156819456 unmapped: 23764992 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 300 ms_handle_reset con 0x55b81b2b4000 session 0x55b81df68fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 301 ms_handle_reset con 0x55b81ceb5800 session 0x55b81caed340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158056448 unmapped: 22528000 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 301 heartbeat osd_stat(store_statfs(0x4f76b5000/0x0/0x4ffc00000, data 0x3482f05/0x3635000, compress 0x0/0x0/0x0, omap 0x4bdb1, meta 0x4ec424f), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 301 ms_handle_reset con 0x55b81b2b4000 session 0x55b81da39880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2317905 data_alloc: 251658240 data_used: 28237955
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 301 handle_osd_map epochs [301,302], i have 302, src has [1,302]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 302 ms_handle_reset con 0x55b81b7d5000 session 0x55b81a771340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158113792 unmapped: 22470656 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 302 ms_handle_reset con 0x55b81d2acc00 session 0x55b81da2fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158113792 unmapped: 22470656 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.525624275s of 10.878129005s, submitted: 80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 302 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81df681c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 302 heartbeat osd_stat(store_statfs(0x4f76b0000/0x0/0x4ffc00000, data 0x3484af5/0x3638000, compress 0x0/0x0/0x0, omap 0x4bec9, meta 0x4ec4137), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158154752 unmapped: 22429696 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 302 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81d1d81c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158162944 unmapped: 22421504 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 302 heartbeat osd_stat(store_statfs(0x4f76b3000/0x0/0x4ffc00000, data 0x3484b57/0x3639000, compress 0x0/0x0/0x0, omap 0x4bfd1, meta 0x4ec402f), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158203904 unmapped: 22380544 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2321464 data_alloc: 251658240 data_used: 28237955
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158212096 unmapped: 22372352 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 303 ms_handle_reset con 0x55b81b2b4000 session 0x55b81caeca80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158236672 unmapped: 22347776 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 304 ms_handle_reset con 0x55b81b7d5000 session 0x55b81b8bfdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158253056 unmapped: 22331392 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 304 heartbeat osd_stat(store_statfs(0x4f76a8000/0x0/0x4ffc00000, data 0x34881aa/0x363f000, compress 0x0/0x0/0x0, omap 0x4c58a, meta 0x4ec3a76), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158326784 unmapped: 22257664 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 305 ms_handle_reset con 0x55b81ceb5800 session 0x55b81da38c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 305 heartbeat osd_stat(store_statfs(0x4f76a8000/0x0/0x4ffc00000, data 0x3489d9a/0x3642000, compress 0x0/0x0/0x0, omap 0x4c9a9, meta 0x4ec3657), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 305 heartbeat osd_stat(store_statfs(0x4f76a8000/0x0/0x4ffc00000, data 0x3489d9a/0x3642000, compress 0x0/0x0/0x0, omap 0x4c9a9, meta 0x4ec3657), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158334976 unmapped: 22249472 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2329986 data_alloc: 251658240 data_used: 28238053
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 305 ms_handle_reset con 0x55b81d2acc00 session 0x55b81ac2b880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 305 heartbeat osd_stat(store_statfs(0x4f76a8000/0x0/0x4ffc00000, data 0x3489d9a/0x3642000, compress 0x0/0x0/0x0, omap 0x4c9a9, meta 0x4ec3657), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158375936 unmapped: 22208512 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158392320 unmapped: 22192128 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.073101997s of 10.037608147s, submitted: 70
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 306 ms_handle_reset con 0x55b81b2b4000 session 0x55b81d4f2540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158392320 unmapped: 22192128 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 306 ms_handle_reset con 0x55b81b7d5000 session 0x55b81b887dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158408704 unmapped: 22175744 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158408704 unmapped: 22175744 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2335478 data_alloc: 251658240 data_used: 28237955
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158408704 unmapped: 22175744 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 307 heartbeat osd_stat(store_statfs(0x4f76a4000/0x0/0x4ffc00000, data 0x348b845/0x3646000, compress 0x0/0x0/0x0, omap 0x4cc4f, meta 0x4ec33b1), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158425088 unmapped: 22159360 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 307 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81b86f6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 307 ms_handle_reset con 0x55b81ceb5800 session 0x55b81df2ca80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158433280 unmapped: 22151168 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 307 ms_handle_reset con 0x55b81dfe7000 session 0x55b81b8bf880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158441472 unmapped: 22142976 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 308 ms_handle_reset con 0x55b81b7d5000 session 0x55b81df69dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 308 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81cef0c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 308 ms_handle_reset con 0x55b81ceb5800 session 0x55b81df69a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158474240 unmapped: 22110208 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 309 ms_handle_reset con 0x55b81b2b4000 session 0x55b81d4f2a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2351429 data_alloc: 251658240 data_used: 28238698
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 309 heartbeat osd_stat(store_statfs(0x4f769c000/0x0/0x4ffc00000, data 0x348f027/0x364e000, compress 0x0/0x0/0x0, omap 0x4da20, meta 0x4ec25e0), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 166903808 unmapped: 17883136 heap: 184786944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 309 ms_handle_reset con 0x55b81dfe8000 session 0x55b81caedc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 42876928 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.840897560s of 10.072703362s, submitted: 73
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 309 ms_handle_reset con 0x55b81b2b4000 session 0x55b81ac04c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 171294720 unmapped: 30285824 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 309 ms_handle_reset con 0x55b81ceb5800 session 0x55b81caece00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 309 heartbeat osd_stat(store_statfs(0x4f269b000/0x0/0x4ffc00000, data 0x8490ba5/0x864f000, compress 0x0/0x0/0x0, omap 0x4e052, meta 0x4ec1fae), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 162988032 unmapped: 38592512 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 310 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81da38000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 161865728 unmapped: 39714816 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 310 ms_handle_reset con 0x55b81ae88000 session 0x55b81a771dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 310 ms_handle_reset con 0x55b81b7d5000 session 0x55b81b7abc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3211625 data_alloc: 251658240 data_used: 31156545
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 166166528 unmapped: 35414016 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81e0a3800 session 0x55b81a8016c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 162086912 unmapped: 39493632 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81b442c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 311 heartbeat osd_stat(store_statfs(0x4eb093000/0x0/0x4ffc00000, data 0xfa9555d/0xfc57000, compress 0x0/0x0/0x0, omap 0x4eb77, meta 0x4ec1489), peers [0,2] op hist [0,0,0,0,0,0,0,1,0,1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 166518784 unmapped: 35061760 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81ae88000 session 0x55b81d4f2000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81b2b4000 session 0x55b81aa70fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81ceb5800 session 0x55b81d4f2700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 171065344 unmapped: 30515200 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81ae88000 session 0x55b81d1376c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 167092224 unmapped: 34488320 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4004094 data_alloc: 251658240 data_used: 31050014
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81e0e1800 session 0x55b81b38b340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81d0adc00 session 0x55b81df696c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163020800 unmapped: 38559744 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81dfab340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81e0a3800 session 0x55b81cef1c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 312 ms_handle_reset con 0x55b81b2b4000 session 0x55b81df68380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163151872 unmapped: 38428672 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.410922527s of 10.001376152s, submitted: 236
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 312 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81d1d8e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 312 ms_handle_reset con 0x55b81ae88000 session 0x55b81b8bf6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 312 ms_handle_reset con 0x55b81d0adc00 session 0x55b81b4421c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163258368 unmapped: 38322176 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 312 ms_handle_reset con 0x55b81e0e1800 session 0x55b81b7aa380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 312 heartbeat osd_stat(store_statfs(0x4f7791000/0x0/0x4ffc00000, data 0x3399f1a/0x355a000, compress 0x0/0x0/0x0, omap 0x4fbe5, meta 0x4ec041b), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 312 ms_handle_reset con 0x55b81ae88000 session 0x55b81b38a8c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 313 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81ac04c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 313 ms_handle_reset con 0x55b81e0e1800 session 0x55b81de1a700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 313 ms_handle_reset con 0x55b81b2b4000 session 0x55b81b443180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160243712 unmapped: 41336832 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 313 ms_handle_reset con 0x55b81d0adc00 session 0x55b81da38c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 313 ms_handle_reset con 0x55b81ae88000 session 0x55b81da38000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 314 ms_handle_reset con 0x55b81d0adc00 session 0x55b81d371180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 314 heartbeat osd_stat(store_statfs(0x4f7ac4000/0x0/0x4ffc00000, data 0x2b57c0f/0x2d17000, compress 0x0/0x0/0x0, omap 0x50583, meta 0x4ebfa7d), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 314 ms_handle_reset con 0x55b81b2b4000 session 0x55b81b8bf180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160268288 unmapped: 41312256 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2383543 data_alloc: 251658240 data_used: 27831667
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160284672 unmapped: 41295872 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 314 handle_osd_map epochs [314,315], i have 314, src has [1,315]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 315 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81ac048c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 315 ms_handle_reset con 0x55b81e0e1800 session 0x55b81aa62380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160358400 unmapped: 41222144 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160358400 unmapped: 41222144 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 315 ms_handle_reset con 0x55b81e0e4c00 session 0x55b81aa62c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 315 ms_handle_reset con 0x55b81e0e5400 session 0x55b81d4f3c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 315 ms_handle_reset con 0x55b81e0e1800 session 0x55b81caedc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 315 ms_handle_reset con 0x55b81ae88000 session 0x55b81caec000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 148447232 unmapped: 53133312 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 148447232 unmapped: 53133312 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2199405 data_alloc: 234881024 data_used: 9970524
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 315 heartbeat osd_stat(store_statfs(0x4f915e000/0x0/0x4ffc00000, data 0x19cb312/0x1b8e000, compress 0x0/0x0/0x0, omap 0x514f5, meta 0x4ebeb0b), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 315 ms_handle_reset con 0x55b81b2b4000 session 0x55b81ac4ea80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 148201472 unmapped: 53379072 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 315 ms_handle_reset con 0x55b81ae88000 session 0x55b81ac4f880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81e0e5400 session 0x55b81df681c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81da2f880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81d0adc00 session 0x55b81a800c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81df68000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144801792 unmapped: 56778752 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144801792 unmapped: 56778752 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144801792 unmapped: 56778752 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f87b8000/0x0/0x4ffc00000, data 0x236ec09/0x2532000, compress 0x0/0x0/0x0, omap 0x51d16, meta 0x4ebe2ea), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144801792 unmapped: 56778752 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2262318 data_alloc: 218103808 data_used: 7344476
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144801792 unmapped: 56778752 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144801792 unmapped: 56778752 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144801792 unmapped: 56778752 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81ae88000 session 0x55b81dfaa380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81b8be380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81d0adc00 session 0x55b81b38bc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81e0db800 session 0x55b81d4f3a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.971589088s of 16.643096924s, submitted: 298
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81e0e5400 session 0x55b81d4f2a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81ae88000 session 0x55b81b86f340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81a8016c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81b8bd400 session 0x55b81aa63880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81d0adc00 session 0x55b81dfaaa80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144400384 unmapped: 57180160 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81e0db800 session 0x55b81da39340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f821a000/0x0/0x4ffc00000, data 0x290cc94/0x2ad2000, compress 0x0/0x0/0x0, omap 0x51fec, meta 0x4ebe014), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144400384 unmapped: 57180160 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81ae88000 session 0x55b81b8bec40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81ac4fdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f821a000/0x0/0x4ffc00000, data 0x290cccd/0x2ad2000, compress 0x0/0x0/0x0, omap 0x5202e, meta 0x4ebdfd2), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2297489 data_alloc: 218103808 data_used: 6820188
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 57171968 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81b8bd400 session 0x55b81b442a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 57171968 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 57171968 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 57171968 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81d0adc00 session 0x55b81b38ba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 57171968 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81e0e5400 session 0x55b81b8be000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2297489 data_alloc: 218103808 data_used: 6820188
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f821b000/0x0/0x4ffc00000, data 0x290cc6b/0x2ad1000, compress 0x0/0x0/0x0, omap 0x521ba, meta 0x4ebde46), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 57171968 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81ae88000 session 0x55b81d7156c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144769024 unmapped: 56811520 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144777216 unmapped: 56803328 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 56385536 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 56385536 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2331044 data_alloc: 234881024 data_used: 11585884
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 56385536 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f81f0000/0x0/0x4ffc00000, data 0x2936c8e/0x2afc000, compress 0x0/0x0/0x0, omap 0x521ba, meta 0x4ebde46), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 56385536 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 56385536 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 56385536 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f81f0000/0x0/0x4ffc00000, data 0x2936c8e/0x2afc000, compress 0x0/0x0/0x0, omap 0x521ba, meta 0x4ebde46), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145211392 unmapped: 56369152 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2331044 data_alloc: 234881024 data_used: 11585884
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145211392 unmapped: 56369152 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145211392 unmapped: 56369152 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.454809189s of 18.707763672s, submitted: 55
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f81f0000/0x0/0x4ffc00000, data 0x2936c8e/0x2afc000, compress 0x0/0x0/0x0, omap 0x521ba, meta 0x4ebde46), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147931136 unmapped: 53649408 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147505152 unmapped: 54075392 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f7d80000/0x0/0x4ffc00000, data 0x2da5c8e/0x2f6b000, compress 0x0/0x0/0x0, omap 0x521ba, meta 0x4ebde46), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147554304 unmapped: 54026240 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2380002 data_alloc: 234881024 data_used: 12450140
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147554304 unmapped: 54026240 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147554304 unmapped: 54026240 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147554304 unmapped: 54026240 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147554304 unmapped: 54026240 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147554304 unmapped: 54026240 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f7d5b000/0x0/0x4ffc00000, data 0x2dcac8e/0x2f90000, compress 0x0/0x0/0x0, omap 0x521ba, meta 0x4ebde46), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2380386 data_alloc: 234881024 data_used: 12454236
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147587072 unmapped: 53993472 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147587072 unmapped: 53993472 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.821453094s of 10.782118797s, submitted: 65
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147587072 unmapped: 53993472 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147587072 unmapped: 53993472 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f7d59000/0x0/0x4ffc00000, data 0x2dcdc8e/0x2f93000, compress 0x0/0x0/0x0, omap 0x521ba, meta 0x4ebde46), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147849216 unmapped: 53731328 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 317 ms_handle_reset con 0x55b81a12d800 session 0x55b81d1d8e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 317 ms_handle_reset con 0x55b81a12c400 session 0x55b81b86f6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2388659 data_alloc: 234881024 data_used: 12470636
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 317 handle_osd_map epochs [317,318], i have 317, src has [1,318]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147881984 unmapped: 53698560 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 318 ms_handle_reset con 0x55b81bb32400 session 0x55b81dfaba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147881984 unmapped: 53698560 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147947520 unmapped: 53633024 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 319 ms_handle_reset con 0x55b81d49c400 session 0x55b81de1bc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 319 heartbeat osd_stat(store_statfs(0x4f7d0d000/0x0/0x4ffc00000, data 0x2e13089/0x2fdf000, compress 0x0/0x0/0x0, omap 0x52ca2, meta 0x4ebd35e), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147955712 unmapped: 53624832 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 319 ms_handle_reset con 0x55b81a12c400 session 0x55b81df68380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147980288 unmapped: 53600256 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 320 ms_handle_reset con 0x55b81a12d800 session 0x55b81da2e540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2398955 data_alloc: 234881024 data_used: 12471221
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 320 ms_handle_reset con 0x55b81ae88000 session 0x55b81ac4e380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147988480 unmapped: 53592064 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 321 ms_handle_reset con 0x55b81bb32400 session 0x55b81de1bdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 321 ms_handle_reset con 0x55b81d49d000 session 0x55b81b442c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 148127744 unmapped: 53452800 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 148127744 unmapped: 53452800 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.738044739s of 10.135429382s, submitted: 87
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 322 ms_handle_reset con 0x55b81a12c400 session 0x55b81b38ae00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 322 heartbeat osd_stat(store_statfs(0x4f7d07000/0x0/0x4ffc00000, data 0x2e167b5/0x2fe3000, compress 0x0/0x0/0x0, omap 0x53ad3, meta 0x4ebc52d), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 148160512 unmapped: 53420032 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 322 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x2e183a5/0x2fe6000, compress 0x0/0x0/0x0, omap 0x53acf, meta 0x4ebc531), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 322 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x2e183a5/0x2fe6000, compress 0x0/0x0/0x0, omap 0x53acf, meta 0x4ebc531), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 148193280 unmapped: 53387264 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 322 ms_handle_reset con 0x55b81d0adc00 session 0x55b81ac056c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 322 ms_handle_reset con 0x55b81a12d800 session 0x55b81da39880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 322 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81cef0c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 322 ms_handle_reset con 0x55b81b8bd400 session 0x55b81b4f2000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2402597 data_alloc: 234881024 data_used: 12471221
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145383424 unmapped: 56197120 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 322 ms_handle_reset con 0x55b81b8bd400 session 0x55b81b38ac40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 322 handle_osd_map epochs [322,323], i have 323, src has [1,323]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 323 heartbeat osd_stat(store_statfs(0x4f8760000/0x0/0x4ffc00000, data 0x23bad9f/0x2588000, compress 0x0/0x0/0x0, omap 0x5441e, meta 0x4ebbbe2), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145391616 unmapped: 56188928 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 323 ms_handle_reset con 0x55b81a12c400 session 0x55b81caed180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 323 ms_handle_reset con 0x55b81a12d800 session 0x55b81b443340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 323 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81da388c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145391616 unmapped: 56188928 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145391616 unmapped: 56188928 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145391616 unmapped: 56188928 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 323 heartbeat osd_stat(store_statfs(0x4f8762000/0x0/0x4ffc00000, data 0x23badd2/0x258a000, compress 0x0/0x0/0x0, omap 0x5441e, meta 0x4ebbbe2), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2303138 data_alloc: 218103808 data_used: 7093632
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145391616 unmapped: 56188928 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 323 ms_handle_reset con 0x55b81bb32400 session 0x55b81ac4fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145391616 unmapped: 56188928 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 323 ms_handle_reset con 0x55b81a12c400 session 0x55b81d4f2700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145424384 unmapped: 56156160 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.591893196s of 10.222195625s, submitted: 113
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 324 ms_handle_reset con 0x55b81a12d800 session 0x55b81b8bf340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145424384 unmapped: 56156160 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 324 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81d1d88c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 325 ms_handle_reset con 0x55b81b8bd400 session 0x55b81d4f28c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 325 ms_handle_reset con 0x55b81d49d400 session 0x55b81ac2a700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 325 ms_handle_reset con 0x55b81a12c400 session 0x55b81ac4e1c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 325 heartbeat osd_stat(store_statfs(0x4f92fc000/0x0/0x4ffc00000, data 0x181b54e/0x19ec000, compress 0x0/0x0/0x0, omap 0x5503e, meta 0x4ebafc2), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144474112 unmapped: 57106432 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2242967 data_alloc: 218103808 data_used: 7093578
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144343040 unmapped: 57237504 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 325 heartbeat osd_stat(store_statfs(0x4f92fc000/0x0/0x4ffc00000, data 0x181b54e/0x19ec000, compress 0x0/0x0/0x0, omap 0x5503e, meta 0x4ebafc2), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 326 ms_handle_reset con 0x55b81a12d800 session 0x55b81caec700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 326 ms_handle_reset con 0x55b81d0adc00 session 0x55b81d715a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 326 ms_handle_reset con 0x55b81ae88000 session 0x55b81d4f36c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 326 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81de1b6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144400384 unmapped: 57180160 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 326 heartbeat osd_stat(store_statfs(0x4f92fb000/0x0/0x4ffc00000, data 0x181d135/0x19ee000, compress 0x0/0x0/0x0, omap 0x557e3, meta 0x4eba81d), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144400384 unmapped: 57180160 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144400384 unmapped: 57180160 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 326 ms_handle_reset con 0x55b81a12c400 session 0x55b81caedc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 326 ms_handle_reset con 0x55b81a12d800 session 0x55b81aa62c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 57171968 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2241151 data_alloc: 218103808 data_used: 6832874
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 57171968 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 326 ms_handle_reset con 0x55b81ae88000 session 0x55b81ac2bdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81d0adc00 session 0x55b81d370fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81b8bd400 session 0x55b81ac4e540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81a12c400 session 0x55b81df696c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81a12d800 session 0x55b81b887a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 57827328 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81ae88000 session 0x55b81d370c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 57958400 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 heartbeat osd_stat(store_statfs(0x4f87b9000/0x0/0x4ffc00000, data 0x2361b51/0x2533000, compress 0x0/0x0/0x0, omap 0x55de0, meta 0x4eba220), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.735248566s of 10.000020027s, submitted: 105
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81d0adc00 session 0x55b81a771340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81b8b7800 session 0x55b81de1a700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 57819136 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81a12c400 session 0x55b81de1bc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 57819136 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 heartbeat osd_stat(store_statfs(0x4f87b9000/0x0/0x4ffc00000, data 0x2361b51/0x2533000, compress 0x0/0x0/0x0, omap 0x55de0, meta 0x4eba220), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2314965 data_alloc: 218103808 data_used: 6837143
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 57819136 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81a12d800 session 0x55b81b8bf180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 57819136 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81ae88000 session 0x55b81d7156c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81d0adc00 session 0x55b81d371180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81b8b7400 session 0x55b81b442a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81a12c400 session 0x55b81caec700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81a12d800 session 0x55b81b4f3dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81ae88000 session 0x55b81a800540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81b8b7400 session 0x55b81b8bf880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 56647680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 heartbeat osd_stat(store_statfs(0x4f80d7000/0x0/0x4ffc00000, data 0x2a40be6/0x2c15000, compress 0x0/0x0/0x0, omap 0x55f72, meta 0x4eba08e), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 56647680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81e0a5000 session 0x55b81b86e700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 151887872 unmapped: 49692672 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2436351 data_alloc: 234881024 data_used: 18606487
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 151887872 unmapped: 49692672 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 heartbeat osd_stat(store_statfs(0x4f80d2000/0x0/0x4ffc00000, data 0x2a42782/0x2c18000, compress 0x0/0x0/0x0, omap 0x56088, meta 0x4eb9f78), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 151887872 unmapped: 49692672 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 151887872 unmapped: 49692672 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 151887872 unmapped: 49692672 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81a12c400 session 0x55b81aa63a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.050866127s of 11.244613647s, submitted: 45
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81a12d800 session 0x55b81ac05dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 156082176 unmapped: 45498368 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81ae88000 session 0x55b81b38a000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81d4b1000 session 0x55b81d231180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81b8b7400 session 0x55b81d4f3a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81d4b0400 session 0x55b81ac2a700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2610366 data_alloc: 234881024 data_used: 18606487
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152076288 unmapped: 49504256 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 heartbeat osd_stat(store_statfs(0x4f60d3000/0x0/0x4ffc00000, data 0x4a427e4/0x4c19000, compress 0x0/0x0/0x0, omap 0x56634, meta 0x4eb99cc), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152109056 unmapped: 49471488 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 heartbeat osd_stat(store_statfs(0x4f60d3000/0x0/0x4ffc00000, data 0x4a427e4/0x4c19000, compress 0x0/0x0/0x0, omap 0x56634, meta 0x4eb99cc), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152109056 unmapped: 49471488 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160849920 unmapped: 40730624 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81a12c400 session 0x55b81ac05180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 161832960 unmapped: 39747584 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81a12d800 session 0x55b81ab5ce00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2702600 data_alloc: 234881024 data_used: 19549079
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160407552 unmapped: 41172992 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81ae88000 session 0x55b81aa62380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 heartbeat osd_stat(store_statfs(0x4f5339000/0x0/0x4ffc00000, data 0x57dc7e4/0x59b3000, compress 0x0/0x0/0x0, omap 0x56634, meta 0x4eb99cc), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81b8b7400 session 0x55b81d370700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160415744 unmapped: 41164800 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160587776 unmapped: 40992768 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 166952960 unmapped: 34627584 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81ae88000 session 0x55b81b38b340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81a12c400 session 0x55b81dfaba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81a12d800 session 0x55b81de1a540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 166952960 unmapped: 34627584 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.234261513s of 10.919887543s, submitted: 137
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81d4b0400 session 0x55b81dfaa540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2747274 data_alloc: 234881024 data_used: 26745751
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 167264256 unmapped: 34316288 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81d4b1000 session 0x55b81ac4e540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81a12c400 session 0x55b81aa62c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 heartbeat osd_stat(store_statfs(0x4f5316000/0x0/0x4ffc00000, data 0x57fe7e4/0x59d5000, compress 0x0/0x0/0x0, omap 0x5671f, meta 0x4eb98e1), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 167436288 unmapped: 34144256 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81ae88000 session 0x55b81caec380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81a12d800 session 0x55b81b38ba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81d4b0400 session 0x55b81de1aa80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 33832960 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81d243c00 session 0x55b81da2e700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 167878656 unmapped: 33701888 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 329 ms_handle_reset con 0x55b81d243c00 session 0x55b81b38b880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 167878656 unmapped: 33701888 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2756034 data_alloc: 234881024 data_used: 26915700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 330 ms_handle_reset con 0x55b81a12d800 session 0x55b81d370380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 330 heartbeat osd_stat(store_statfs(0x4f52ee000/0x0/0x4ffc00000, data 0x5824381/0x59fc000, compress 0x0/0x0/0x0, omap 0x568fb, meta 0x4eb9705), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 170696704 unmapped: 30883840 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 330 ms_handle_reset con 0x55b81ae88000 session 0x55b81ac2b180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 330 ms_handle_reset con 0x55b81d4b0400 session 0x55b81caedc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 331 ms_handle_reset con 0x55b81d242800 session 0x55b81aa71500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 169697280 unmapped: 31883264 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 331 ms_handle_reset con 0x55b81d242800 session 0x55b81caec000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 169902080 unmapped: 31678464 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 331 handle_osd_map epochs [331,332], i have 331, src has [1,332]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 332 ms_handle_reset con 0x55b81a12d800 session 0x55b81aa70c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 169918464 unmapped: 31662080 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 332 ms_handle_reset con 0x55b81ae88000 session 0x55b81cb93c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 332 ms_handle_reset con 0x55b81d4b0400 session 0x55b81df2ca80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 333 ms_handle_reset con 0x55b81d243c00 session 0x55b81d4f3dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 333 ms_handle_reset con 0x55b81dfe7c00 session 0x55b81da2fdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 168656896 unmapped: 32923648 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.799221992s of 10.074870110s, submitted: 176
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 333 ms_handle_reset con 0x55b81a12d800 session 0x55b81a801880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2715384 data_alloc: 234881024 data_used: 24393076
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 333 handle_osd_map epochs [333,334], i have 333, src has [1,334]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 334 ms_handle_reset con 0x55b81ae88000 session 0x55b81aa636c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 334 handle_osd_map epochs [334,335], i have 334, src has [1,335]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 168665088 unmapped: 32915456 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81d242800 session 0x55b81d4f2540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 335 heartbeat osd_stat(store_statfs(0x4f59ae000/0x0/0x4ffc00000, data 0x515c8c8/0x533a000, compress 0x0/0x0/0x0, omap 0x58535, meta 0x4eb7acb), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 168697856 unmapped: 32882688 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81a12c400 session 0x55b81ab5ce00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81b8b6c00 session 0x55b81b38b500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81d0adc00 session 0x55b81aa71180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 159563776 unmapped: 42016768 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81a12d800 session 0x55b81de1ae00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81ae88000 session 0x55b81de1a700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81d242800 session 0x55b81d4f3340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81d242800 session 0x55b81dfaaa80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81dfe6000 session 0x55b81dfaa380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 157982720 unmapped: 43597824 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173514752 unmapped: 28065792 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2582725 data_alloc: 234881024 data_used: 11664209
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 166223872 unmapped: 35356672 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 336 heartbeat osd_stat(store_statfs(0x4f563c000/0x0/0x4ffc00000, data 0x43238f7/0x4500000, compress 0x0/0x0/0x0, omap 0x58e30, meta 0x60571d0), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 336 ms_handle_reset con 0x55b81e0a3800 session 0x55b81b8bf6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 166535168 unmapped: 35045376 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 166535168 unmapped: 35045376 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 336 handle_osd_map epochs [336,337], i have 336, src has [1,337]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 337 ms_handle_reset con 0x55b81de2d000 session 0x55b81ac2ae00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163782656 unmapped: 37797888 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 337 ms_handle_reset con 0x55b81dc13400 session 0x55b81de1ae00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 337 ms_handle_reset con 0x55b81dc12800 session 0x55b81ac2b180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163782656 unmapped: 37797888 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.958495140s of 10.005329132s, submitted: 242
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 337 ms_handle_reset con 0x55b81d242800 session 0x55b81ac4e8c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 337 heartbeat osd_stat(store_statfs(0x4f4bfa000/0x0/0x4ffc00000, data 0x4d70f6a/0x4f50000, compress 0x0/0x0/0x0, omap 0x597c1, meta 0x605683f), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2650452 data_alloc: 234881024 data_used: 11737937
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 337 ms_handle_reset con 0x55b81de2d000 session 0x55b81cef1dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163782656 unmapped: 37797888 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 338 heartbeat osd_stat(store_statfs(0x4f4bf8000/0x0/0x4ffc00000, data 0x4d7101c/0x4f52000, compress 0x0/0x0/0x0, omap 0x59c86, meta 0x605637a), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 338 ms_handle_reset con 0x55b81dfe6000 session 0x55b81da38540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163807232 unmapped: 37773312 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 338 ms_handle_reset con 0x55b81e0a3800 session 0x55b81dfaba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163815424 unmapped: 37765120 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81e0a3800 session 0x55b81df2ca80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 339 heartbeat osd_stat(store_statfs(0x4f4bf4000/0x0/0x4ffc00000, data 0x4d745f9/0x4f56000, compress 0x0/0x0/0x0, omap 0x5a957, meta 0x60556a9), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81dc12800 session 0x55b81d4f2700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81d242800 session 0x55b81ac041c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163831808 unmapped: 37748736 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81de2d000 session 0x55b81d230c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81d243000 session 0x55b81d231180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81d242000 session 0x55b81d136000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81d242800 session 0x55b81ac4fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81dc12800 session 0x55b81df68e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163913728 unmapped: 37666816 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81de2d000 session 0x55b81cb93c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2646478 data_alloc: 234881024 data_used: 11645777
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81e0a3800 session 0x55b81d4f2540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81d242000 session 0x55b81b38b500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163913728 unmapped: 37666816 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 339 heartbeat osd_stat(store_statfs(0x4f4c1b000/0x0/0x4ffc00000, data 0x4d505ea/0x4f31000, compress 0x0/0x0/0x0, omap 0x5ab49, meta 0x60554b7), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163913728 unmapped: 37666816 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 339 handle_osd_map epochs [339,340], i have 339, src has [1,340]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 340 ms_handle_reset con 0x55b81de2d000 session 0x55b81da38000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 165101568 unmapped: 36478976 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 340 ms_handle_reset con 0x55b81dc13800 session 0x55b81ac05dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 340 ms_handle_reset con 0x55b81dfe6000 session 0x55b81dddda40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 165109760 unmapped: 36470784 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 340 heartbeat osd_stat(store_statfs(0x4f4c14000/0x0/0x4ffc00000, data 0x4d52266/0x4f36000, compress 0x0/0x0/0x0, omap 0x5b920, meta 0x60546e0), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164716544 unmapped: 36864000 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 340 ms_handle_reset con 0x55b81dc12c00 session 0x55b81df68380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.531381607s of 10.002674103s, submitted: 119
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 340 ms_handle_reset con 0x55b81d242000 session 0x55b81ac4f6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2715559 data_alloc: 234881024 data_used: 21496246
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 340 handle_osd_map epochs [340,341], i have 341, src has [1,341]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164675584 unmapped: 36904960 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 341 ms_handle_reset con 0x55b81de2d000 session 0x55b81df69a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 342 ms_handle_reset con 0x55b81dfe6000 session 0x55b81b86f6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164683776 unmapped: 36896768 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 342 ms_handle_reset con 0x55b81dc13c00 session 0x55b81de1bc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 342 ms_handle_reset con 0x55b81dc12000 session 0x55b81caecc40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 36880384 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 342 heartbeat osd_stat(store_statfs(0x4f4c0e000/0x0/0x4ffc00000, data 0x4d55811/0x4f3a000, compress 0x0/0x0/0x0, omap 0x5c5fb, meta 0x6053a05), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 342 ms_handle_reset con 0x55b81d242000 session 0x55b81d094c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164782080 unmapped: 36798464 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 342 ms_handle_reset con 0x55b81dc13c00 session 0x55b81d370380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 342 heartbeat osd_stat(store_statfs(0x4f4c0e000/0x0/0x4ffc00000, data 0x4d55811/0x4f3a000, compress 0x0/0x0/0x0, omap 0x5c5fb, meta 0x6053a05), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164790272 unmapped: 36790272 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718561 data_alloc: 234881024 data_used: 21496148
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164790272 unmapped: 36790272 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172425216 unmapped: 29155328 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 342 heartbeat osd_stat(store_statfs(0x4f4a75000/0x0/0x4ffc00000, data 0x4ef37af/0x50d7000, compress 0x0/0x0/0x0, omap 0x5c367, meta 0x6053c99), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172523520 unmapped: 29057024 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172580864 unmapped: 28999680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172580864 unmapped: 28999680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 342 heartbeat osd_stat(store_statfs(0x4f44ef000/0x0/0x4ffc00000, data 0x54797af/0x565d000, compress 0x0/0x0/0x0, omap 0x5c367, meta 0x6053c99), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2778079 data_alloc: 234881024 data_used: 22271316
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.351642609s of 10.747139931s, submitted: 192
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172580864 unmapped: 28999680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172580864 unmapped: 28999680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172580864 unmapped: 28999680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172580864 unmapped: 28999680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 343 heartbeat osd_stat(store_statfs(0x4f44ea000/0x0/0x4ffc00000, data 0x547b22e/0x5660000, compress 0x0/0x0/0x0, omap 0x5c501, meta 0x6053aff), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 343 ms_handle_reset con 0x55b81de2d000 session 0x55b81df636c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172580864 unmapped: 28999680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 343 ms_handle_reset con 0x55b81dfe6000 session 0x55b81ac2b340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2783380 data_alloc: 234881024 data_used: 22271316
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172589056 unmapped: 28991488 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 343 ms_handle_reset con 0x55b81b889c00 session 0x55b81d4f28c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172777472 unmapped: 28803072 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 344 ms_handle_reset con 0x55b81b889c00 session 0x55b81b8bf180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172785664 unmapped: 28794880 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 345 ms_handle_reset con 0x55b81d242000 session 0x55b81b886000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172785664 unmapped: 28794880 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 345 ms_handle_reset con 0x55b81d1f8000 session 0x55b81ac048c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172785664 unmapped: 28794880 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 346 heartbeat osd_stat(store_statfs(0x4f44e0000/0x0/0x4ffc00000, data 0x54809d8/0x566a000, compress 0x0/0x0/0x0, omap 0x5d077, meta 0x6052f89), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 346 ms_handle_reset con 0x55b81dc13c00 session 0x55b81ac05dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2796399 data_alloc: 234881024 data_used: 22382420
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 28786688 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.081931114s of 10.227938652s, submitted: 53
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 346 ms_handle_reset con 0x55b81de2d000 session 0x55b81cb93c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 28770304 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 28770304 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 346 ms_handle_reset con 0x55b81d1f8000 session 0x55b81a7708c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 347 ms_handle_reset con 0x55b81d242000 session 0x55b81df68fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 347 heartbeat osd_stat(store_statfs(0x4f44df000/0x0/0x4ffc00000, data 0x5483566/0x566d000, compress 0x0/0x0/0x0, omap 0x5d7cd, meta 0x6052833), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172826624 unmapped: 28753920 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 348 ms_handle_reset con 0x55b81dc13c00 session 0x55b81d715c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 348 ms_handle_reset con 0x55b81b889c00 session 0x55b81da38540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172834816 unmapped: 28745728 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 348 heartbeat osd_stat(store_statfs(0x4f44d5000/0x0/0x4ffc00000, data 0x5486d9a/0x5675000, compress 0x0/0x0/0x0, omap 0x5df33, meta 0x60520cd), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2808395 data_alloc: 234881024 data_used: 22382420
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172834816 unmapped: 28745728 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 348 heartbeat osd_stat(store_statfs(0x4f44d5000/0x0/0x4ffc00000, data 0x5486d9a/0x5675000, compress 0x0/0x0/0x0, omap 0x5df33, meta 0x60520cd), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172834816 unmapped: 28745728 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 348 handle_osd_map epochs [348,349], i have 348, src has [1,349]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 349 heartbeat osd_stat(store_statfs(0x4f44d8000/0x0/0x4ffc00000, data 0x5486d38/0x5674000, compress 0x0/0x0/0x0, omap 0x5e0f8, meta 0x6051f08), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 349 ms_handle_reset con 0x55b81dfe6000 session 0x55b81caeda40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172900352 unmapped: 28680192 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173015040 unmapped: 28565504 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 349 ms_handle_reset con 0x55b81dfe6000 session 0x55b81df056c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173015040 unmapped: 28565504 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2825658 data_alloc: 234881024 data_used: 23587328
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 350 heartbeat osd_stat(store_statfs(0x4f44d2000/0x0/0x4ffc00000, data 0x548898a/0x5678000, compress 0x0/0x0/0x0, omap 0x5e471, meta 0x6051b8f), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173023232 unmapped: 28557312 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.445651054s of 10.584459305s, submitted: 81
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 350 ms_handle_reset con 0x55b81d1f8000 session 0x55b81b442e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173023232 unmapped: 28557312 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 350 handle_osd_map epochs [350,351], i have 350, src has [1,351]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 351 ms_handle_reset con 0x55b81d242000 session 0x55b81de1b6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 351 ms_handle_reset con 0x55b81de50400 session 0x55b81b886e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 351 heartbeat osd_stat(store_statfs(0x4f44cd000/0x0/0x4ffc00000, data 0x548a47b/0x567d000, compress 0x0/0x0/0x0, omap 0x5ec0f, meta 0x60513f1), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173088768 unmapped: 28491776 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 351 handle_osd_map epochs [351,352], i have 351, src has [1,352]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 351 handle_osd_map epochs [352,352], i have 352, src has [1,352]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 352 ms_handle_reset con 0x55b81dc13c00 session 0x55b81aa71180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 352 ms_handle_reset con 0x55b81de4f400 session 0x55b81d715880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 352 ms_handle_reset con 0x55b81b889c00 session 0x55b81ac04e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173113344 unmapped: 28467200 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 352 ms_handle_reset con 0x55b81d242000 session 0x55b81ac2aa80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173113344 unmapped: 28467200 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 353 ms_handle_reset con 0x55b81de50400 session 0x55b81dfaa000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2851826 data_alloc: 234881024 data_used: 23583477
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172924928 unmapped: 28655616 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 354 ms_handle_reset con 0x55b81dfe6000 session 0x55b81b886a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 354 ms_handle_reset con 0x55b81b889c00 session 0x55b81d370c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 354 ms_handle_reset con 0x55b81d1f8000 session 0x55b81da2f880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172949504 unmapped: 28631040 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 354 heartbeat osd_stat(store_statfs(0x4f44bb000/0x0/0x4ffc00000, data 0x5491a85/0x568f000, compress 0x0/0x0/0x0, omap 0x5fa6a, meta 0x6050596), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172949504 unmapped: 28631040 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 354 heartbeat osd_stat(store_statfs(0x4f44bb000/0x0/0x4ffc00000, data 0x5491a85/0x568f000, compress 0x0/0x0/0x0, omap 0x5fa6a, meta 0x6050596), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 354 ms_handle_reset con 0x55b81de4f400 session 0x55b81b4f3180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174055424 unmapped: 27525120 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 354 handle_osd_map epochs [354,355], i have 355, src has [1,355]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 355 ms_handle_reset con 0x55b81dfe9800 session 0x55b81b887500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173973504 unmapped: 27607040 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 356 ms_handle_reset con 0x55b81de50400 session 0x55b81ac4e540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 356 ms_handle_reset con 0x55b81b889c00 session 0x55b81a8008c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 356 ms_handle_reset con 0x55b81d242000 session 0x55b81aa63c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2891557 data_alloc: 234881024 data_used: 23587589
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174014464 unmapped: 27566080 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 356 heartbeat osd_stat(store_statfs(0x4f44ad000/0x0/0x4ffc00000, data 0x56c5772/0x569b000, compress 0x0/0x0/0x0, omap 0x6002a, meta 0x604ffd6), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 356 ms_handle_reset con 0x55b81de4f400 session 0x55b81b8bea80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.216485977s of 10.411639214s, submitted: 68
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174014464 unmapped: 27566080 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 356 ms_handle_reset con 0x55b81dfe9800 session 0x55b81da39dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 356 ms_handle_reset con 0x55b81de50400 session 0x55b81a800700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 357 ms_handle_reset con 0x55b81b889c00 session 0x55b81ab5d180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174071808 unmapped: 27508736 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 357 handle_osd_map epochs [357,358], i have 357, src has [1,358]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 358 ms_handle_reset con 0x55b81d242000 session 0x55b81caec000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 358 ms_handle_reset con 0x55b81de4f400 session 0x55b81d4f2380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 358 ms_handle_reset con 0x55b81d1f8000 session 0x55b81b86e8c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174080000 unmapped: 27500544 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174088192 unmapped: 27492352 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2899770 data_alloc: 234881024 data_used: 23587687
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174112768 unmapped: 27467776 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 359 ms_handle_reset con 0x55b81de50400 session 0x55b8190f3c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 359 ms_handle_reset con 0x55b81b889c00 session 0x55b81ac04c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 359 ms_handle_reset con 0x55b81d1f8000 session 0x55b81b38a000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174120960 unmapped: 27459584 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 359 heartbeat osd_stat(store_statfs(0x4f44a7000/0x0/0x4ffc00000, data 0x56caa38/0x56a3000, compress 0x0/0x0/0x0, omap 0x60eac, meta 0x604f154), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 359 ms_handle_reset con 0x55b81de4f400 session 0x55b81d715a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 360 ms_handle_reset con 0x55b81d242000 session 0x55b81ced4700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 360 ms_handle_reset con 0x55b81dfe9800 session 0x55b81d4f3a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174170112 unmapped: 27410432 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 360 ms_handle_reset con 0x55b81b889c00 session 0x55b81b7aba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 360 ms_handle_reset con 0x55b81d1f8000 session 0x55b81d4f28c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173875200 unmapped: 27705344 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 360 heartbeat osd_stat(store_statfs(0x4f447d000/0x0/0x4ffc00000, data 0x56f6137/0x56cf000, compress 0x0/0x0/0x0, omap 0x613d2, meta 0x604ec2e), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173957120 unmapped: 27623424 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 361 ms_handle_reset con 0x55b81d6db800 session 0x55b81df68c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 361 ms_handle_reset con 0x55b81ceb5c00 session 0x55b81d714700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2908949 data_alloc: 234881024 data_used: 23683431
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 361 handle_osd_map epochs [361,362], i have 361, src has [1,362]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174145536 unmapped: 27435008 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 362 ms_handle_reset con 0x55b81b428800 session 0x55b81de1a700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 362 heartbeat osd_stat(store_statfs(0x4f4476000/0x0/0x4ffc00000, data 0x56f93b4/0x56d2000, compress 0x0/0x0/0x0, omap 0x61efc, meta 0x604e104), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.309225082s of 10.023455620s, submitted: 168
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174161920 unmapped: 27418624 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174194688 unmapped: 27385856 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174268416 unmapped: 27312128 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 365 ms_handle_reset con 0x55b81b8bd800 session 0x55b81d370000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 365 ms_handle_reset con 0x55b81d0acc00 session 0x55b81aa63340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 365 ms_handle_reset con 0x55b81b889c00 session 0x55b81df696c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 365 heartbeat osd_stat(store_statfs(0x4f446e000/0x0/0x4ffc00000, data 0x56fe6d2/0x56d8000, compress 0x0/0x0/0x0, omap 0x62857, meta 0x604d7a9), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 366 ms_handle_reset con 0x55b81b428800 session 0x55b81d4f3500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174317568 unmapped: 27262976 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 366 ms_handle_reset con 0x55b81ceb5c00 session 0x55b81d4f3dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2938222 data_alloc: 234881024 data_used: 23818488
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 366 heartbeat osd_stat(store_statfs(0x4f4455000/0x0/0x4ffc00000, data 0x57472ee/0x56f5000, compress 0x0/0x0/0x0, omap 0x62de4, meta 0x604d21c), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174317568 unmapped: 27262976 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 367 heartbeat osd_stat(store_statfs(0x4f4456000/0x0/0x4ffc00000, data 0x574728c/0x56f4000, compress 0x0/0x0/0x0, omap 0x62e6a, meta 0x604d196), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174325760 unmapped: 27254784 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 368 ms_handle_reset con 0x55b81b428800 session 0x55b81b8bf500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 368 ms_handle_reset con 0x55b81b889c00 session 0x55b81ac2b880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175423488 unmapped: 26157056 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 369 ms_handle_reset con 0x55b81b8bd800 session 0x55b81b86fdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 369 ms_handle_reset con 0x55b81d0acc00 session 0x55b81ac2b6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175439872 unmapped: 26140672 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 178634752 unmapped: 22945792 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 369 ms_handle_reset con 0x55b81d1f8000 session 0x55b81dfaa1c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 369 ms_handle_reset con 0x55b81b428800 session 0x55b81b7aa380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2713414 data_alloc: 234881024 data_used: 18809864
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172720128 unmapped: 28860416 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 370 heartbeat osd_stat(store_statfs(0x4f6b49000/0x0/0x4ffc00000, data 0x305100c/0x3001000, compress 0x0/0x0/0x0, omap 0x646ed, meta 0x604b913), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172720128 unmapped: 28860416 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172720128 unmapped: 28860416 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172720128 unmapped: 28860416 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172720128 unmapped: 28860416 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.742566109s of 13.166366577s, submitted: 235
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2725704 data_alloc: 234881024 data_used: 19219464
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 370 ms_handle_reset con 0x55b81b889c00 session 0x55b81b886e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 370 ms_handle_reset con 0x55b81b8bd800 session 0x55b81d0956c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173252608 unmapped: 28327936 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 370 heartbeat osd_stat(store_statfs(0x4f6adb000/0x0/0x4ffc00000, data 0x30be08e/0x3071000, compress 0x0/0x0/0x0, omap 0x647f9, meta 0x604b807), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 370 handle_osd_map epochs [371,371], i have 371, src has [1,371]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172777472 unmapped: 28803072 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81d0acc00 session 0x55b81df68540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81d6db800 session 0x55b81b7aaa80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172777472 unmapped: 28803072 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b428800 session 0x55b81da2f340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b889c00 session 0x55b81ac2b180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b8bd800 session 0x55b81da38540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172777472 unmapped: 28803072 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81d0acc00 session 0x55b81df636c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b429000 session 0x55b81ac4e8c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b428800 session 0x55b81df68380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172916736 unmapped: 28663808 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b889c00 session 0x55b81b4f2a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2731675 data_alloc: 234881024 data_used: 20661237
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174964736 unmapped: 26615808 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b8bd800 session 0x55b81b86fdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174964736 unmapped: 26615808 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f6ad6000/0x0/0x4ffc00000, data 0x30c1b0d/0x3076000, compress 0x0/0x0/0x0, omap 0x64f35, meta 0x604b0cb), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81d0acc00 session 0x55b81b7aba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174972928 unmapped: 26607616 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81e0db000 session 0x55b81b886e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b428800 session 0x55b81da2f340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175104000 unmapped: 26476544 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b889c00 session 0x55b81caecfc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175104000 unmapped: 26476544 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.099312782s of 10.017537117s, submitted: 66
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b8bd800 session 0x55b81aa71500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2729329 data_alloc: 234881024 data_used: 20661237
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175112192 unmapped: 26468352 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175112192 unmapped: 26468352 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f6ad9000/0x0/0x4ffc00000, data 0x30c1a8b/0x3073000, compress 0x0/0x0/0x0, omap 0x65574, meta 0x604aa8c), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175112192 unmapped: 26468352 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81d0acc00 session 0x55b81a771880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175112192 unmapped: 26468352 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175611904 unmapped: 25968640 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f6ad9000/0x0/0x4ffc00000, data 0x30c1a8b/0x3073000, compress 0x0/0x0/0x0, omap 0x65574, meta 0x604aa8c), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2736838 data_alloc: 234881024 data_used: 22393845
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175611904 unmapped: 25968640 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f6ad9000/0x0/0x4ffc00000, data 0x30c1a8b/0x3073000, compress 0x0/0x0/0x0, omap 0x65574, meta 0x604aa8c), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81d94fc00 session 0x55b81b7aa700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 25903104 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 25903104 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81d242000 session 0x55b81ac2a000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81de4f400 session 0x55b81b8861c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f6ad9000/0x0/0x4ffc00000, data 0x30c1a8b/0x3073000, compress 0x0/0x0/0x0, omap 0x657a2, meta 0x604a85e), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b889c00 session 0x55b81b4421c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175702016 unmapped: 25878528 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 372 ms_handle_reset con 0x55b81b428800 session 0x55b81b86e8c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175759360 unmapped: 25821184 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 372 ms_handle_reset con 0x55b81b8bd800 session 0x55b81de1a700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2729987 data_alloc: 234881024 data_used: 22264821
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.530011177s of 10.641107559s, submitted: 55
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 372 ms_handle_reset con 0x55b81b428800 session 0x55b81b8bf500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 372 ms_handle_reset con 0x55b81d242000 session 0x55b81b7aaa80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 372 ms_handle_reset con 0x55b81b889c00 session 0x55b81b443880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175775744 unmapped: 25804800 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175775744 unmapped: 25804800 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 373 ms_handle_reset con 0x55b81de4f400 session 0x55b81ac2afc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175792128 unmapped: 25788416 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 373 ms_handle_reset con 0x55b81d0acc00 session 0x55b8190f3c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 373 heartbeat osd_stat(store_statfs(0x4f6ebc000/0x0/0x4ffc00000, data 0x2cda1c3/0x2c8e000, compress 0x0/0x0/0x0, omap 0x66841, meta 0x60497bf), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175792128 unmapped: 25788416 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 374 ms_handle_reset con 0x55b81d242000 session 0x55b81df62c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 374 ms_handle_reset con 0x55b81b889c00 session 0x55b81b38a700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 374 ms_handle_reset con 0x55b81b428800 session 0x55b81caeda40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173539328 unmapped: 28041216 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 374 ms_handle_reset con 0x55b81d242800 session 0x55b81d1d88c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 374 ms_handle_reset con 0x55b81dc12800 session 0x55b81ac4f880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2682980 data_alloc: 234881024 data_used: 19340179
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 374 ms_handle_reset con 0x55b81b889c00 session 0x55b81ac05dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173604864 unmapped: 27975680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81b428800 session 0x55b81ac04540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 heartbeat osd_stat(store_statfs(0x4f6ed7000/0x0/0x4ffc00000, data 0x2a65db3/0x2c75000, compress 0x0/0x0/0x0, omap 0x674c0, meta 0x6048b40), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173637632 unmapped: 27942912 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81d242000 session 0x55b81aa70c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81d242800 session 0x55b81b8be380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81de4f400 session 0x55b81b38bdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81b428800 session 0x55b81de1ae00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173260800 unmapped: 32522240 heap: 205783040 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81d242000 session 0x55b81d714380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81d242800 session 0x55b81ac2b6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81b889c00 session 0x55b81d095340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81cf18000 session 0x55b81ac4e540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164962304 unmapped: 45023232 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81b428800 session 0x55b81b8bf180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81b889c00 session 0x55b81b442e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164691968 unmapped: 45293568 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2632346 data_alloc: 218103808 data_used: 6454163
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81d242000 session 0x55b81b86e8c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 heartbeat osd_stat(store_statfs(0x4f650b000/0x0/0x4ffc00000, data 0x3431909/0x3641000, compress 0x0/0x0/0x0, omap 0x68669, meta 0x6047997), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.423376083s of 10.308360100s, submitted: 193
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81d242800 session 0x55b81d393880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164691968 unmapped: 45293568 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 heartbeat osd_stat(store_statfs(0x4f650b000/0x0/0x4ffc00000, data 0x3431909/0x3641000, compress 0x0/0x0/0x0, omap 0x68669, meta 0x6047997), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164691968 unmapped: 45293568 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81de8a000 session 0x55b81a800000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81b889c00 session 0x55b81b886a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 376 ms_handle_reset con 0x55b81d242000 session 0x55b81d4f3dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 376 ms_handle_reset con 0x55b81d13c800 session 0x55b8190f3340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 376 ms_handle_reset con 0x55b81b428800 session 0x55b81caeda40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163561472 unmapped: 46424064 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 376 ms_handle_reset con 0x55b81d242800 session 0x55b81ac4fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 376 ms_handle_reset con 0x55b81b889c00 session 0x55b81df2ca80
Jan 31 00:04:56 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19188 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 00:04:56 np0005603435 ceph-mgr[75599]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 00:04:56 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr[75595]: 2026-01-31T05:04:56.087+0000 7f77961f6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163569664 unmapped: 46415872 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 376 handle_osd_map epochs [376,377], i have 377, src has [1,377]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 377 ms_handle_reset con 0x55b81d13c800 session 0x55b81d3716c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 377 ms_handle_reset con 0x55b81b428800 session 0x55b81dfabc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 377 heartbeat osd_stat(store_statfs(0x4f6508000/0x0/0x4ffc00000, data 0x3433684/0x3642000, compress 0x0/0x0/0x0, omap 0x68d39, meta 0x60472c7), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163602432 unmapped: 46383104 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2636069 data_alloc: 218103808 data_used: 6454133
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 378 ms_handle_reset con 0x55b81d242000 session 0x55b81b442540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163618816 unmapped: 46366720 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 378 ms_handle_reset con 0x55b81b889800 session 0x55b81b8bf500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163618816 unmapped: 46366720 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 378 ms_handle_reset con 0x55b81b428800 session 0x55b81d4f3c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163627008 unmapped: 46358528 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 379 ms_handle_reset con 0x55b81b889c00 session 0x55b81d715dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163627008 unmapped: 46358528 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 379 ms_handle_reset con 0x55b81d13c800 session 0x55b81a771880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 379 ms_handle_reset con 0x55b81d242000 session 0x55b81d715880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 379 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81b443880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 162848768 unmapped: 47136768 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 379 heartbeat osd_stat(store_statfs(0x4f64fd000/0x0/0x4ffc00000, data 0x34389b0/0x364d000, compress 0x0/0x0/0x0, omap 0x69d6c, meta 0x6046294), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 379 ms_handle_reset con 0x55b81d13c800 session 0x55b81de1b180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 379 ms_handle_reset con 0x55b81b889c00 session 0x55b81d095dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2647078 data_alloc: 218103808 data_used: 6847349
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.696597099s of 10.054694176s, submitted: 176
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 162848768 unmapped: 47136768 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 380 ms_handle_reset con 0x55b81d242000 session 0x55b81df68540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 380 heartbeat osd_stat(store_statfs(0x4f64fd000/0x0/0x4ffc00000, data 0x34389b0/0x364d000, compress 0x0/0x0/0x0, omap 0x69df2, meta 0x604620e), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81dfe3000 session 0x55b81d095c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163905536 unmapped: 46080000 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163905536 unmapped: 46080000 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81b2b4400 session 0x55b81a801a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81e0a3400 session 0x55b81a800a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81b889c00 session 0x55b81de1ae00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81d13c800 session 0x55b81df68c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81d242000 session 0x55b81d715340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164773888 unmapped: 45211648 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81dfe3000 session 0x55b81df696c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81b889c00 session 0x55b81d714540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81d13c800 session 0x55b81df68fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164782080 unmapped: 45203456 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f5f78000/0x0/0x4ffc00000, data 0x39bb075/0x3bd4000, compress 0x0/0x0/0x0, omap 0x6a6ed, meta 0x6045913), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2739680 data_alloc: 234881024 data_used: 14774304
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 382 ms_handle_reset con 0x55b81d242000 session 0x55b81ac04380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 382 ms_handle_reset con 0x55b81e0a3400 session 0x55b81b4f3500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164790272 unmapped: 45195264 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164790272 unmapped: 45195264 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 382 ms_handle_reset con 0x55b81de8b400 session 0x55b81d095a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 382 ms_handle_reset con 0x55b81b889c00 session 0x55b81ac041c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 383 ms_handle_reset con 0x55b81d13c800 session 0x55b81b4f3a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164798464 unmapped: 45187072 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 383 ms_handle_reset con 0x55b81de8b400 session 0x55b81b8861c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164839424 unmapped: 45146112 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 383 ms_handle_reset con 0x55b81d242000 session 0x55b81de1ae00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 383 ms_handle_reset con 0x55b81de8a800 session 0x55b81cef01c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 383 ms_handle_reset con 0x55b81b889c00 session 0x55b81cef1500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164839424 unmapped: 45146112 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 384 ms_handle_reset con 0x55b81e0a3400 session 0x55b81a801a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2746360 data_alloc: 234881024 data_used: 14774206
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 384 ms_handle_reset con 0x55b81d13c800 session 0x55b81df2ca80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.840722084s of 10.014692307s, submitted: 90
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 384 ms_handle_reset con 0x55b81d242000 session 0x55b81caeda40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164839424 unmapped: 45146112 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f5f6f000/0x0/0x4ffc00000, data 0x39c0210/0x3bdb000, compress 0x0/0x0/0x0, omap 0x6afe2, meta 0x604501e), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 170393600 unmapped: 39591936 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 384 ms_handle_reset con 0x55b81de8b400 session 0x55b81d095340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 168394752 unmapped: 41590784 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 169025536 unmapped: 40960000 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f53b7000/0x0/0x4ffc00000, data 0x4579220/0x4795000, compress 0x0/0x0/0x0, omap 0x6b0ee, meta 0x6044f12), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 384 ms_handle_reset con 0x55b81b889c00 session 0x55b81b4421c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f53b7000/0x0/0x4ffc00000, data 0x4579220/0x4795000, compress 0x0/0x0/0x0, omap 0x6b0ee, meta 0x6044f12), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 385 ms_handle_reset con 0x55b81de8b400 session 0x55b81d095180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 169328640 unmapped: 40656896 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2824410 data_alloc: 234881024 data_used: 14857166
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 169639936 unmapped: 40345600 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 386 heartbeat osd_stat(store_statfs(0x4f538b000/0x0/0x4ffc00000, data 0x45a0857/0x47bf000, compress 0x0/0x0/0x0, omap 0x6b797, meta 0x6044869), peers [0,2] op hist [1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 386 ms_handle_reset con 0x55b81cf1a000 session 0x55b81d1d81c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 386 ms_handle_reset con 0x55b81e0a3400 session 0x55b81dddc380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 386 heartbeat osd_stat(store_statfs(0x4f538b000/0x0/0x4ffc00000, data 0x45a0857/0x47bf000, compress 0x0/0x0/0x0, omap 0x6b797, meta 0x6044869), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172089344 unmapped: 37896192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 386 ms_handle_reset con 0x55b81dd5b800 session 0x55b81b442a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 386 ms_handle_reset con 0x55b81b889c00 session 0x55b81aa63180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172089344 unmapped: 37896192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 386 ms_handle_reset con 0x55b81cf1a000 session 0x55b81d4f2540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172089344 unmapped: 37896192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 386 ms_handle_reset con 0x55b81e0a3400 session 0x55b81b7aa380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 386 heartbeat osd_stat(store_statfs(0x4f538a000/0x0/0x4ffc00000, data 0x45a08b9/0x47c0000, compress 0x0/0x0/0x0, omap 0x6bb11, meta 0x60444ef), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172089344 unmapped: 37896192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 387 ms_handle_reset con 0x55b81de2d800 session 0x55b81a771500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 387 ms_handle_reset con 0x55b81de8b400 session 0x55b81da38c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2857856 data_alloc: 234881024 data_used: 19390000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172089344 unmapped: 37896192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.529183388s of 10.785122871s, submitted: 102
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 387 ms_handle_reset con 0x55b81de8b400 session 0x55b81d0941c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172089344 unmapped: 37896192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172113920 unmapped: 37871616 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 388 ms_handle_reset con 0x55b81b889c00 session 0x55b81d094a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 388 ms_handle_reset con 0x55b81b428800 session 0x55b81de1b880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 388 ms_handle_reset con 0x55b81cf1a000 session 0x55b81b887dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172171264 unmapped: 37814272 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 388 ms_handle_reset con 0x55b81de2d800 session 0x55b81df68000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 388 ms_handle_reset con 0x55b81de2d800 session 0x55b81d094380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173367296 unmapped: 36618240 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 388 ms_handle_reset con 0x55b81b428800 session 0x55b81cef1340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 388 heartbeat osd_stat(store_statfs(0x4f5385000/0x0/0x4ffc00000, data 0x45a4102/0x47c7000, compress 0x0/0x0/0x0, omap 0x6c8ae, meta 0x6043752), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 388 ms_handle_reset con 0x55b81b889c00 session 0x55b81d094380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886268 data_alloc: 234881024 data_used: 19543322
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 388 handle_osd_map epochs [388,389], i have 389, src has [1,389]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173383680 unmapped: 36601856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 389 ms_handle_reset con 0x55b81cf1a000 session 0x55b81a771500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 389 ms_handle_reset con 0x55b81de8b400 session 0x55b81a801a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173449216 unmapped: 36536320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173449216 unmapped: 36536320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173449216 unmapped: 36536320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173449216 unmapped: 36536320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2890210 data_alloc: 234881024 data_used: 19539128
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f5233000/0x0/0x4ffc00000, data 0x46e9b1f/0x490d000, compress 0x0/0x0/0x0, omap 0x6cc07, meta 0x60433f9), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173449216 unmapped: 36536320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.922306061s of 10.085712433s, submitted: 109
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173449216 unmapped: 36536320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f5233000/0x0/0x4ffc00000, data 0x46e9b1f/0x490d000, compress 0x0/0x0/0x0, omap 0x6cc07, meta 0x60433f9), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173449216 unmapped: 36536320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172785664 unmapped: 37199872 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81de8b400 session 0x55b81d393880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81d13c800 session 0x55b81caed340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81d242000 session 0x55b81dddd6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172785664 unmapped: 37199872 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884517 data_alloc: 234881024 data_used: 19434699
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81b428800 session 0x55b81de1b6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 37191680 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81b889c00 session 0x55b81df68000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81d13c800 session 0x55b81dfaa8c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81b428800 session 0x55b81ac2a000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81d242000 session 0x55b81b7aa380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81de8b400 session 0x55b81b8876c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172630016 unmapped: 37355520 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f525e000/0x0/0x4ffc00000, data 0x46c770d/0x48ee000, compress 0x0/0x0/0x0, omap 0x6da29, meta 0x60425d7), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81cf1a000 session 0x55b81caedc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172646400 unmapped: 37339136 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f525e000/0x0/0x4ffc00000, data 0x46c76bb/0x48ec000, compress 0x0/0x0/0x0, omap 0x6d920, meta 0x60426e0), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81d13c800 session 0x55b81ac04e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172670976 unmapped: 37314560 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 391 ms_handle_reset con 0x55b81b428800 session 0x55b81ac4e540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 391 heartbeat osd_stat(store_statfs(0x4f525b000/0x0/0x4ffc00000, data 0x46c92ab/0x48ef000, compress 0x0/0x0/0x0, omap 0x6e2c3, meta 0x6041d3d), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172670976 unmapped: 37314560 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 392 ms_handle_reset con 0x55b81d242000 session 0x55b81b442540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 392 ms_handle_reset con 0x55b81de8b400 session 0x55b81ac04fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2891661 data_alloc: 234881024 data_used: 19434699
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 392 ms_handle_reset con 0x55b81de2d800 session 0x55b81b443880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 392 ms_handle_reset con 0x55b81b428800 session 0x55b81da2f6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174030848 unmapped: 35954688 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174030848 unmapped: 35954688 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 392 ms_handle_reset con 0x55b81de8b400 session 0x55b81b86fdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.500554085s of 10.814825058s, submitted: 107
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 392 ms_handle_reset con 0x55b81de88800 session 0x55b81b8bf6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 392 ms_handle_reset con 0x55b81e0a3400 session 0x55b81dfaa000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 392 ms_handle_reset con 0x55b81ceb5800 session 0x55b81ac2afc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f522f000/0x0/0x4ffc00000, data 0x46ef296/0x4919000, compress 0x0/0x0/0x0, omap 0x6e3c2, meta 0x6041c3e), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174055424 unmapped: 35930112 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174055424 unmapped: 35930112 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174071808 unmapped: 35913728 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 392 handle_osd_map epochs [392,393], i have 393, src has [1,393]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 393 ms_handle_reset con 0x55b81b428800 session 0x55b81ac4e380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905841 data_alloc: 234881024 data_used: 19491531
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175120384 unmapped: 34865152 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 394 ms_handle_reset con 0x55b81ceb5800 session 0x55b81aa63340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 394 ms_handle_reset con 0x55b81de88800 session 0x55b81aa62700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 394 ms_handle_reset con 0x55b81de8b400 session 0x55b81b38ba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175439872 unmapped: 34545664 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 394 heartbeat osd_stat(store_statfs(0x4f5203000/0x0/0x4ffc00000, data 0x4716951/0x4945000, compress 0x0/0x0/0x0, omap 0x6f43b, meta 0x6040bc5), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175439872 unmapped: 34545664 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 394 ms_handle_reset con 0x55b81de50000 session 0x55b81ac2bc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175480832 unmapped: 34504704 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 395 ms_handle_reset con 0x55b81de50000 session 0x55b81cef1880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175497216 unmapped: 34488320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2916218 data_alloc: 234881024 data_used: 19553995
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175497216 unmapped: 34488320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 395 ms_handle_reset con 0x55b81b428800 session 0x55b81da2e000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175497216 unmapped: 34488320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 395 ms_handle_reset con 0x55b81de88800 session 0x55b81dddddc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 395 ms_handle_reset con 0x55b81ceb5800 session 0x55b81da38fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 395 ms_handle_reset con 0x55b81de8b400 session 0x55b81a7708c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175538176 unmapped: 34447360 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 395 heartbeat osd_stat(store_statfs(0x4f5204000/0x0/0x4ffc00000, data 0x47184ed/0x4948000, compress 0x0/0x0/0x0, omap 0x6f9ea, meta 0x6040616), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.477434158s of 11.568736076s, submitted: 57
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175865856 unmapped: 34119680 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 396 ms_handle_reset con 0x55b81de8b400 session 0x55b81aa63340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176201728 unmapped: 33783808 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935080 data_alloc: 234881024 data_used: 20958923
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 396 ms_handle_reset con 0x55b81ceb5800 session 0x55b81d231180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 396 ms_handle_reset con 0x55b81b428800 session 0x55b81d370000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176300032 unmapped: 33685504 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 396 ms_handle_reset con 0x55b81de50000 session 0x55b81cef0700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 396 ms_handle_reset con 0x55b81de88800 session 0x55b81cef0e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176316416 unmapped: 33669120 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176431104 unmapped: 33554432 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 396 ms_handle_reset con 0x55b81de88800 session 0x55b81ac2afc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f51fe000/0x0/0x4ffc00000, data 0x471a099/0x494c000, compress 0x0/0x0/0x0, omap 0x7000b, meta 0x603fff5), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 397 ms_handle_reset con 0x55b81b428800 session 0x55b81dddddc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176570368 unmapped: 33415168 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 397 ms_handle_reset con 0x55b81ceb5800 session 0x55b81ac05880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 397 ms_handle_reset con 0x55b81de50000 session 0x55b81b887a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176750592 unmapped: 33234944 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2942481 data_alloc: 234881024 data_used: 21061323
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 398 ms_handle_reset con 0x55b81de8b400 session 0x55b81d371c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f51fe000/0x0/0x4ffc00000, data 0x471bc79/0x494e000, compress 0x0/0x0/0x0, omap 0x70461, meta 0x603fb9f), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176660480 unmapped: 33325056 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 398 ms_handle_reset con 0x55b81b428800 session 0x55b81b4f3500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 398 ms_handle_reset con 0x55b81ceb5800 session 0x55b81cef0000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176709632 unmapped: 33275904 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176709632 unmapped: 33275904 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177201152 unmapped: 32784384 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177201152 unmapped: 32784384 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.396218300s of 11.526197433s, submitted: 78
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951565 data_alloc: 234881024 data_used: 21058930
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f51eb000/0x0/0x4ffc00000, data 0x472b2fa/0x495f000, compress 0x0/0x0/0x0, omap 0x70b1b, meta 0x603f4e5), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 32768000 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f51eb000/0x0/0x4ffc00000, data 0x472b2fa/0x495f000, compress 0x0/0x0/0x0, omap 0x70b1b, meta 0x603f4e5), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 32768000 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 399 ms_handle_reset con 0x55b81de50000 session 0x55b81dc06c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 32768000 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f51eb000/0x0/0x4ffc00000, data 0x472b2fa/0x495f000, compress 0x0/0x0/0x0, omap 0x70ba1, meta 0x603f45f), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 399 ms_handle_reset con 0x55b81de88800 session 0x55b81df056c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 32768000 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f51ed000/0x0/0x4ffc00000, data 0x472b2fa/0x495f000, compress 0x0/0x0/0x0, omap 0x70c27, meta 0x603f3d9), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 32768000 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2950149 data_alloc: 234881024 data_used: 21038450
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 400 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81b86fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177373184 unmapped: 32612352 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 400 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81b38ba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 400 ms_handle_reset con 0x55b81b428800 session 0x55b81d715500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177373184 unmapped: 32612352 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 401 ms_handle_reset con 0x55b81ceb5800 session 0x55b81ac04540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 401 ms_handle_reset con 0x55b81de50000 session 0x55b81b886e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 32571392 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 401 ms_handle_reset con 0x55b81de88800 session 0x55b81dfab880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 32571392 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 401 ms_handle_reset con 0x55b81de88800 session 0x55b81b4f3c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 401 heartbeat osd_stat(store_statfs(0x4f51e4000/0x0/0x4ffc00000, data 0x472fa86/0x4966000, compress 0x0/0x0/0x0, omap 0x71354, meta 0x603ecac), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 32571392 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 401 handle_osd_map epochs [401,402], i have 402, src has [1,402]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.950020790s of 10.004167557s, submitted: 51
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2957895 data_alloc: 234881024 data_used: 21038450
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177471488 unmapped: 32514048 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177479680 unmapped: 32505856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177479680 unmapped: 32505856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 402 ms_handle_reset con 0x55b81b428800 session 0x55b81df68700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177479680 unmapped: 32505856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 402 ms_handle_reset con 0x55b81d13c800 session 0x55b81a800000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 402 ms_handle_reset con 0x55b81d242000 session 0x55b81d715dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 402 ms_handle_reset con 0x55b81ceb5800 session 0x55b81aa62700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f51e3000/0x0/0x4ffc00000, data 0x4731505/0x4969000, compress 0x0/0x0/0x0, omap 0x715e4, meta 0x603ea1c), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177504256 unmapped: 32481280 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2948682 data_alloc: 234881024 data_used: 20938098
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177512448 unmapped: 32473088 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 403 ms_handle_reset con 0x55b81b428800 session 0x55b81b7aa380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f5203000/0x0/0x4ffc00000, data 0x470f06e/0x4946000, compress 0x0/0x0/0x0, omap 0x71ada, meta 0x603e526), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 404 ms_handle_reset con 0x55b81d13c800 session 0x55b81ac04e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 404 ms_handle_reset con 0x55b81ceb5800 session 0x55b81d094700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177537024 unmapped: 32448512 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177553408 unmapped: 32432128 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 404 ms_handle_reset con 0x55b81d242000 session 0x55b81b8861c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177602560 unmapped: 32382976 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 404 ms_handle_reset con 0x55b81de50000 session 0x55b81da38e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 404 ms_handle_reset con 0x55b81de88800 session 0x55b81d1d8000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 405 ms_handle_reset con 0x55b81b428800 session 0x55b81ac04e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175333376 unmapped: 34652160 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 405 ms_handle_reset con 0x55b81d13c800 session 0x55b81df68700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 405 heartbeat osd_stat(store_statfs(0x4f79ba000/0x0/0x4ffc00000, data 0x1f5810e/0x2192000, compress 0x0/0x0/0x0, omap 0x71c5e, meta 0x603e3a2), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2701024 data_alloc: 234881024 data_used: 11815759
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 405 heartbeat osd_stat(store_statfs(0x4f79b5000/0x0/0x4ffc00000, data 0x1f59caa/0x2195000, compress 0x0/0x0/0x0, omap 0x72211, meta 0x603ddef), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.373512268s of 10.458980560s, submitted: 51
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 405 ms_handle_reset con 0x55b81dfe2800 session 0x55b81da38540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 405 ms_handle_reset con 0x55b81d242000 session 0x55b81caed880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 405 ms_handle_reset con 0x55b81ceb5800 session 0x55b81b4f3c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175333376 unmapped: 34652160 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 406 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81de1b500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 406 ms_handle_reset con 0x55b81b428800 session 0x55b81b86fdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174817280 unmapped: 35168256 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 407 ms_handle_reset con 0x55b81d242000 session 0x55b81b86fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174817280 unmapped: 35168256 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 407 ms_handle_reset con 0x55b81de88800 session 0x55b81d370fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175874048 unmapped: 34111488 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 408 ms_handle_reset con 0x55b81b428800 session 0x55b81ac2a380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 408 heartbeat osd_stat(store_statfs(0x4f79a3000/0x0/0x4ffc00000, data 0x2084124/0x21a7000, compress 0x0/0x0/0x0, omap 0x72e05, meta 0x603d1fb), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 409 ms_handle_reset con 0x55b81ceb5800 session 0x55b81b4f2380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175882240 unmapped: 34103296 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 409 ms_handle_reset con 0x55b81d242000 session 0x55b81b38a380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 409 ms_handle_reset con 0x55b81d13c800 session 0x55b81a800000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2741744 data_alloc: 234881024 data_used: 11815971
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175808512 unmapped: 34177024 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 410 ms_handle_reset con 0x55b81dfe2800 session 0x55b81da2e000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 410 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81d4f2c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 410 ms_handle_reset con 0x55b81b428800 session 0x55b81d370000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 410 ms_handle_reset con 0x55b81ceb5800 session 0x55b81dfaa700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175857664 unmapped: 34127872 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 410 ms_handle_reset con 0x55b81d13c800 session 0x55b81df04380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175882240 unmapped: 34103296 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175882240 unmapped: 34103296 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 410 handle_osd_map epochs [410,411], i have 411, src has [1,411]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175890432 unmapped: 34095104 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 411 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81df04a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 412 ms_handle_reset con 0x55b81b428800 session 0x55b81d094540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 412 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d4f3a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 412 ms_handle_reset con 0x55b81d242000 session 0x55b81b8bec40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2753400 data_alloc: 234881024 data_used: 11816442
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 412 heartbeat osd_stat(store_statfs(0x4f7994000/0x0/0x4ffc00000, data 0x208b79c/0x21b4000, compress 0x0/0x0/0x0, omap 0x73c82, meta 0x603c37e), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175898624 unmapped: 34086912 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175898624 unmapped: 34086912 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.135331154s of 11.393978119s, submitted: 145
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 412 ms_handle_reset con 0x55b81ceb5800 session 0x55b81dddda40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176095232 unmapped: 33890304 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 413 ms_handle_reset con 0x55b81d13c800 session 0x55b81ac04540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 413 ms_handle_reset con 0x55b81b428800 session 0x55b81ac2b880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 33873920 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 413 heartbeat osd_stat(store_statfs(0x4f7970000/0x0/0x4ffc00000, data 0x20b12eb/0x21da000, compress 0x0/0x0/0x0, omap 0x7424c, meta 0x603bdb4), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 413 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b443880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 33873920 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 413 heartbeat osd_stat(store_statfs(0x4f7972000/0x0/0x4ffc00000, data 0x20b12eb/0x21da000, compress 0x0/0x0/0x0, omap 0x7452d, meta 0x603bad3), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2761308 data_alloc: 234881024 data_used: 11854248
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 33873920 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 414 ms_handle_reset con 0x55b81d242000 session 0x55b81dddc000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176119808 unmapped: 33865728 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 415 ms_handle_reset con 0x55b81ceb0000 session 0x55b81aa63500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 415 ms_handle_reset con 0x55b81ceb5800 session 0x55b81dddc700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176291840 unmapped: 33693696 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 416 ms_handle_reset con 0x55b81b428800 session 0x55b81ac2ba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 416 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d0941c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176349184 unmapped: 33636352 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 416 ms_handle_reset con 0x55b81ceb0000 session 0x55b81d714540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176349184 unmapped: 33636352 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2775137 data_alloc: 234881024 data_used: 11854248
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176349184 unmapped: 33636352 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 417 heartbeat osd_stat(store_statfs(0x4f7963000/0x0/0x4ffc00000, data 0x20b85da/0x21e7000, compress 0x0/0x0/0x0, omap 0x753fc, meta 0x603ac04), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 417 ms_handle_reset con 0x55b81e0df400 session 0x55b81b887180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176357376 unmapped: 33628160 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.341950417s of 10.519907951s, submitted: 127
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 418 ms_handle_reset con 0x55b81ceb1800 session 0x55b81dddc540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 418 ms_handle_reset con 0x55b81b428800 session 0x55b81dddc000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 418 ms_handle_reset con 0x55b81d242000 session 0x55b81b7ab6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176365568 unmapped: 33619968 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 419 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81cef1340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 419 ms_handle_reset con 0x55b81ceb0000 session 0x55b81d094a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177455104 unmapped: 32530432 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 420 ms_handle_reset con 0x55b81e0df400 session 0x55b81b442a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177537024 unmapped: 32448512 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2803476 data_alloc: 234881024 data_used: 12704412
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177537024 unmapped: 32448512 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 420 heartbeat osd_stat(store_statfs(0x4f789a000/0x0/0x4ffc00000, data 0x217d438/0x22ae000, compress 0x0/0x0/0x0, omap 0x75c52, meta 0x603a3ae), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 421 ms_handle_reset con 0x55b81b428800 session 0x55b81ced4380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 421 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81a771340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177602560 unmapped: 32382976 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177602560 unmapped: 32382976 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 421 handle_osd_map epochs [421,422], i have 422, src has [1,422]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 422 ms_handle_reset con 0x55b81ceb0000 session 0x55b81df68a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 178503680 unmapped: 31481856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176619520 unmapped: 33366016 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 424 ms_handle_reset con 0x55b81d242000 session 0x55b81ac2bdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 424 ms_handle_reset con 0x55b81d42c800 session 0x55b81b442380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f77be000/0x0/0x4ffc00000, data 0x2255b1f/0x238a000, compress 0x0/0x0/0x0, omap 0x76637, meta 0x60399c9), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2819126 data_alloc: 234881024 data_used: 12719528
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176775168 unmapped: 33210368 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176775168 unmapped: 33210368 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176775168 unmapped: 33210368 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176783360 unmapped: 33202176 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.699217796s of 11.934890747s, submitted: 125
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f77be000/0x0/0x4ffc00000, data 0x2255b1f/0x238a000, compress 0x0/0x0/0x0, omap 0x766bd, meta 0x6039943), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176791552 unmapped: 33193984 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2818702 data_alloc: 234881024 data_used: 12720141
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176791552 unmapped: 33193984 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f77c1000/0x0/0x4ffc00000, data 0x2256b1f/0x238b000, compress 0x0/0x0/0x0, omap 0x766bd, meta 0x6039943), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 424 ms_handle_reset con 0x55b81b428800 session 0x55b81b887500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176791552 unmapped: 33193984 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176791552 unmapped: 33193984 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 424 ms_handle_reset con 0x55b81ceb0000 session 0x55b81d4f3a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176791552 unmapped: 33193984 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f77c0000/0x0/0x4ffc00000, data 0x2256b2f/0x238c000, compress 0x0/0x0/0x0, omap 0x76743, meta 0x60398bd), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 425 ms_handle_reset con 0x55b81e0e0000 session 0x55b81de1bdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176816128 unmapped: 33169408 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 426 ms_handle_reset con 0x55b81d242000 session 0x55b81d370000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f77bb000/0x0/0x4ffc00000, data 0x2258703/0x238f000, compress 0x0/0x0/0x0, omap 0x76d18, meta 0x60392e8), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 426 ms_handle_reset con 0x55b81e0e7c00 session 0x55b81b38a380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 426 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81dddd340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2830626 data_alloc: 234881024 data_used: 12720141
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f77bb000/0x0/0x4ffc00000, data 0x2258703/0x238f000, compress 0x0/0x0/0x0, omap 0x76d18, meta 0x60392e8), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176824320 unmapped: 33161216 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 427 ms_handle_reset con 0x55b81b428800 session 0x55b81a801dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176840704 unmapped: 33144832 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 427 ms_handle_reset con 0x55b81ceb0000 session 0x55b81da2e700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 428 ms_handle_reset con 0x55b81d242000 session 0x55b81d4f2e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176865280 unmapped: 33120256 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 428 ms_handle_reset con 0x55b81e0e0000 session 0x55b81ac04540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 428 ms_handle_reset con 0x55b81e0e0000 session 0x55b81dfaa540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177700864 unmapped: 32284672 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f7284000/0x0/0x4ffc00000, data 0x2781afb/0x28bb000, compress 0x0/0x0/0x0, omap 0x7768b, meta 0x6038975), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177700864 unmapped: 32284672 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2870132 data_alloc: 234881024 data_used: 13142029
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177700864 unmapped: 32284672 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177700864 unmapped: 32284672 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 32276480 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 32276480 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f7284000/0x0/0x4ffc00000, data 0x2781afb/0x28bb000, compress 0x0/0x0/0x0, omap 0x7768b, meta 0x6038975), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 32276480 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.560708046s of 15.713781357s, submitted: 82
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2871786 data_alloc: 234881024 data_used: 13252621
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177725440 unmapped: 32260096 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 429 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81cef1880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 429 ms_handle_reset con 0x55b81de2c400 session 0x55b81d095880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 429 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d29d340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177676288 unmapped: 32309248 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177676288 unmapped: 32309248 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177676288 unmapped: 32309248 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 429 ms_handle_reset con 0x55b81ceb0000 session 0x55b81b4f3340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 429 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b8bfdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177676288 unmapped: 32309248 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 429 heartbeat osd_stat(store_statfs(0x4f7448000/0x0/0x4ffc00000, data 0x25cb51d/0x2703000, compress 0x0/0x0/0x0, omap 0x77c9d, meta 0x6038363), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2878262 data_alloc: 234881024 data_used: 18131418
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 429 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81a770380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177676288 unmapped: 32309248 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 429 ms_handle_reset con 0x55b81de2c400 session 0x55b81b4f3a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177684480 unmapped: 32301056 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 430 ms_handle_reset con 0x55b81e0a3400 session 0x55b81d095500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 430 ms_handle_reset con 0x55b81d6da000 session 0x55b81d715a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177684480 unmapped: 32301056 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 430 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d4f2c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177692672 unmapped: 32292864 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 430 ms_handle_reset con 0x55b81d6da000 session 0x55b81a801dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177692672 unmapped: 32292864 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.865704536s of 10.005159378s, submitted: 100
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2862510 data_alloc: 234881024 data_used: 16830840
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 431 heartbeat osd_stat(store_statfs(0x4f746e000/0x0/0x4ffc00000, data 0x248411d/0x26de000, compress 0x0/0x0/0x0, omap 0x78867, meta 0x6037799), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180035584 unmapped: 29949952 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 432 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81d1d88c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181805056 unmapped: 28180480 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 432 ms_handle_reset con 0x55b81de2c400 session 0x55b81ac2ba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 432 ms_handle_reset con 0x55b81e0a3400 session 0x55b81b86fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 432 ms_handle_reset con 0x55b81e0e0000 session 0x55b81ac05500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180133888 unmapped: 29851648 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 433 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d1d8e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181329920 unmapped: 28655616 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 433 ms_handle_reset con 0x55b81d6da000 session 0x55b81b4f2000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 433 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81caec700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180232192 unmapped: 29753344 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 433 heartbeat osd_stat(store_statfs(0x4f753b000/0x0/0x4ffc00000, data 0x23ab308/0x2607000, compress 0x0/0x0/0x0, omap 0x7923e, meta 0x6036dc2), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2830035 data_alloc: 234881024 data_used: 12837224
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 433 ms_handle_reset con 0x55b81de2c400 session 0x55b81b4f3500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 179757056 unmapped: 30228480 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 179757056 unmapped: 30228480 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 434 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81ac4fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 179773440 unmapped: 30212096 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 434 ms_handle_reset con 0x55b81d6da000 session 0x55b81dfaac40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180060160 unmapped: 29925376 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 434 handle_osd_map epochs [434,435], i have 435, src has [1,435]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 435 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81dddd880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 435 ms_handle_reset con 0x55b81de2c400 session 0x55b81aa63340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181116928 unmapped: 28868608 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f7518000/0x0/0x4ffc00000, data 0x23d3ab0/0x2632000, compress 0x0/0x0/0x0, omap 0x7a61f, meta 0x60359e1), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.550426483s of 10.001684189s, submitted: 216
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 436 ms_handle_reset con 0x55b81e0e0000 session 0x55b81b8be700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 436 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81df68a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2836476 data_alloc: 234881024 data_used: 12837837
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181125120 unmapped: 28860416 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 436 ms_handle_reset con 0x55b81d6da000 session 0x55b81df68c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f7513000/0x0/0x4ffc00000, data 0x23d554b/0x2635000, compress 0x0/0x0/0x0, omap 0x7a63f, meta 0x60359c1), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181125120 unmapped: 28860416 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181125120 unmapped: 28860416 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 437 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81a800000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181133312 unmapped: 28852224 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 437 ms_handle_reset con 0x55b81d242000 session 0x55b81d0941c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 438 ms_handle_reset con 0x55b81d2ad400 session 0x55b81df04a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 438 ms_handle_reset con 0x55b81dc13400 session 0x55b81aa62540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 438 heartbeat osd_stat(store_statfs(0x4f7511000/0x0/0x4ffc00000, data 0x23d7165/0x2639000, compress 0x0/0x0/0x0, omap 0x7acb8, meta 0x6035348), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181149696 unmapped: 28835840 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2844987 data_alloc: 234881024 data_used: 12838520
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 438 ms_handle_reset con 0x55b81d242000 session 0x55b81b8876c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 438 ms_handle_reset con 0x55b81d6da000 session 0x55b81aa62700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181149696 unmapped: 28835840 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 439 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81dc07340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181157888 unmapped: 28827648 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 439 ms_handle_reset con 0x55b81dfebc00 session 0x55b81d29cc40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 439 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81dc06c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 439 ms_handle_reset con 0x55b81d242000 session 0x55b81b8861c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 439 ms_handle_reset con 0x55b81d6da000 session 0x55b81caed340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 439 heartbeat osd_stat(store_statfs(0x4f74f0000/0x0/0x4ffc00000, data 0x23ef90d/0x2654000, compress 0x0/0x0/0x0, omap 0x7b2f2, meta 0x6034d0e), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 439 ms_handle_reset con 0x55b81dc13400 session 0x55b81ac04a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 440 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81b4f3c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 440 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81ac056c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181297152 unmapped: 28688384 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 440 ms_handle_reset con 0x55b81d242000 session 0x55b81b887dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 440 ms_handle_reset con 0x55b81d6da000 session 0x55b81df04a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 440 handle_osd_map epochs [440,441], i have 441, src has [1,441]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 441 ms_handle_reset con 0x55b81dc13400 session 0x55b81d4f2e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181518336 unmapped: 28467200 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 441 ms_handle_reset con 0x55b81d1f9000 session 0x55b81d29d340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 442 ms_handle_reset con 0x55b81dfebc00 session 0x55b81b4f3340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 442 ms_handle_reset con 0x55b81d242000 session 0x55b81b4f2fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 442 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81dfab6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181526528 unmapped: 28459008 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 442 ms_handle_reset con 0x55b81d6da000 session 0x55b81ac2a8c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.975564003s of 10.198469162s, submitted: 141
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2876734 data_alloc: 234881024 data_used: 12854806
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.7 total, 600.0 interval#012Cumulative writes: 26K writes, 94K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 26K writes, 9437 syncs, 2.83 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 38K keys, 12K commit groups, 1.0 writes per commit group, ingest: 28.12 MB, 0.05 MB/s#012Interval WAL: 12K writes, 5299 syncs, 2.32 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 443 ms_handle_reset con 0x55b81dc13400 session 0x55b81d4f3c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 443 ms_handle_reset con 0x55b81d6da000 session 0x55b81da2e000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 443 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b38b340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181575680 unmapped: 28409856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 443 ms_handle_reset con 0x55b81dfebc00 session 0x55b81b887180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81d242000 session 0x55b81ac04380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181583872 unmapped: 28401664 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81d6d9000 session 0x55b81b8bf6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81de1b500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81d242000 session 0x55b81ac04e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81de2c400 session 0x55b81d715c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181592064 unmapped: 28393472 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81d6da000 session 0x55b81d1d8fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f74a3000/0x0/0x4ffc00000, data 0x2439477/0x26a2000, compress 0x0/0x0/0x0, omap 0x7e161, meta 0x6031e9f), peers [0,2] op hist [1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81dfebc00 session 0x55b81b86fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181616640 unmapped: 28368896 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81d6da000 session 0x55b81a801c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81d242000 session 0x55b81cef0e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81de2c400 session 0x55b81dfaac40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 444 handle_osd_map epochs [444,445], i have 444, src has [1,445]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 445 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b887880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181624832 unmapped: 28360704 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2883650 data_alloc: 234881024 data_used: 12851295
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 445 handle_osd_map epochs [445,446], i have 445, src has [1,446]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 446 ms_handle_reset con 0x55b81e0e7400 session 0x55b81cef1880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 446 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b4f3c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181690368 unmapped: 28295168 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 447 ms_handle_reset con 0x55b81d242000 session 0x55b81caed6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 447 ms_handle_reset con 0x55b81d6da000 session 0x55b81d715a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181624832 unmapped: 28360704 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 447 ms_handle_reset con 0x55b81de2c400 session 0x55b81b86e700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 447 ms_handle_reset con 0x55b81d0ad000 session 0x55b81df62c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f749a000/0x0/0x4ffc00000, data 0x24447dd/0x26b0000, compress 0x0/0x0/0x0, omap 0x7f2af, meta 0x6030d51), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 447 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b443dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181633024 unmapped: 28352512 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182190080 unmapped: 27795456 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f749a000/0x0/0x4ffc00000, data 0x24447dd/0x26b0000, compress 0x0/0x0/0x0, omap 0x7f863, meta 0x603079d), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181575680 unmapped: 28409856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.876866341s of 10.194605827s, submitted: 192
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2898547 data_alloc: 234881024 data_used: 12905741
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 448 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81ac2a700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181575680 unmapped: 28409856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181575680 unmapped: 28409856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 448 ms_handle_reset con 0x55b81d6da000 session 0x55b81b4f2380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 448 ms_handle_reset con 0x55b81d242000 session 0x55b81ced4380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181288960 unmapped: 28696576 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181297152 unmapped: 28688384 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f7405000/0x0/0x4ffc00000, data 0x24d629c/0x2743000, compress 0x0/0x0/0x0, omap 0x7fb8b, meta 0x6030475), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181297152 unmapped: 28688384 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2898123 data_alloc: 234881024 data_used: 12905839
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181305344 unmapped: 28680192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181305344 unmapped: 28680192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181305344 unmapped: 28680192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181305344 unmapped: 28680192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 448 ms_handle_reset con 0x55b81d2ac000 session 0x55b81d714700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 448 ms_handle_reset con 0x55b81de2c400 session 0x55b81b7aa380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181305344 unmapped: 28680192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f7403000/0x0/0x4ffc00000, data 0x24dc29c/0x2749000, compress 0x0/0x0/0x0, omap 0x7fc13, meta 0x60303ed), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2898395 data_alloc: 234881024 data_used: 13057391
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181305344 unmapped: 28680192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.211813927s of 11.236264229s, submitted: 16
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 448 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81de1bdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 448 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81a800c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181321728 unmapped: 28663808 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 448 ms_handle_reset con 0x55b81d6da000 session 0x55b81caed6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181329920 unmapped: 28655616 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f7401000/0x0/0x4ffc00000, data 0x24dc30e/0x274b000, compress 0x0/0x0/0x0, omap 0x80139, meta 0x602fec7), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 449 ms_handle_reset con 0x55b81d34fc00 session 0x55b81cef1880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 449 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d1d88c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181346304 unmapped: 28639232 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 449 handle_osd_map epochs [449,450], i have 449, src has [1,450]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81d1f8000 session 0x55b81d4f3a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81d242000 session 0x55b81ac04540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81d6da000 session 0x55b81d4f21c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181354496 unmapped: 28631040 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81de2c400 session 0x55b81d137500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2913848 data_alloc: 234881024 data_used: 13057407
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181354496 unmapped: 28631040 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b38a380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81d242000 session 0x55b81b86f880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 186122240 unmapped: 23863296 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81caec700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81de2c400 session 0x55b81cb93c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 186785792 unmapped: 35807232 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81dc12000 session 0x55b81df05340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 187023360 unmapped: 35569664 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 heartbeat osd_stat(store_statfs(0x4eef69000/0x0/0x4ffc00000, data 0xa96fb0a/0xabe3000, compress 0x0/0x0/0x0, omap 0x80775, meta 0x602f88b), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b442540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182943744 unmapped: 39649280 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81a770000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81d242000 session 0x55b81da2f6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4076801 data_alloc: 234881024 data_used: 13106543
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 187187200 unmapped: 35405824 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.964659691s of 10.027614594s, submitted: 153
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81de2c400 session 0x55b81d370000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81d6da000 session 0x55b81de1a1c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81b7d4000 session 0x55b81d4f2e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81d1f8000 session 0x55b81dddc540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183328768 unmapped: 39264256 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81b8b7800 session 0x55b81dfaa000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81de1b500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183353344 unmapped: 39239680 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81d242000 session 0x55b81d094540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 heartbeat osd_stat(store_statfs(0x4e43f4000/0x0/0x4ffc00000, data 0x154e69d4/0x15756000, compress 0x0/0x0/0x0, omap 0x81399, meta 0x602ec67), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81df62c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 451 ms_handle_reset con 0x55b81b7d4000 session 0x55b81d1d8fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 451 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81caec380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183386112 unmapped: 39206912 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 451 ms_handle_reset con 0x55b81b8b7800 session 0x55b81b7ab180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 451 ms_handle_reset con 0x55b81de2c400 session 0x55b81d1d8e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 451 ms_handle_reset con 0x55b81d1f8000 session 0x55b81df04a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 451 ms_handle_reset con 0x55b81b7d4000 session 0x55b81b4f2380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183410688 unmapped: 39182336 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 452 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81aa71180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548322 data_alloc: 234881024 data_used: 13106445
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183418880 unmapped: 39174144 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 452 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81d1d8fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 453 ms_handle_reset con 0x55b81b8b7800 session 0x55b81b4f3c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 453 heartbeat osd_stat(store_statfs(0x4e43eb000/0x0/0x4ffc00000, data 0x154631d0/0x156d5000, compress 0x0/0x0/0x0, omap 0x82195, meta 0x602de6b), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183214080 unmapped: 39378944 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 453 ms_handle_reset con 0x55b81b7d4000 session 0x55b81df05340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 454 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81b38a380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 454 ms_handle_reset con 0x55b81dfe3800 session 0x55b81d29cfc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183246848 unmapped: 39346176 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 455 ms_handle_reset con 0x55b81e0de800 session 0x55b81b86fdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 455 ms_handle_reset con 0x55b81d1f8000 session 0x55b81dddddc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 455 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d4f3880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 39321600 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 455 ms_handle_reset con 0x55b81b7d4000 session 0x55b81de1a1c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 455 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81b887340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183304192 unmapped: 39288832 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 455 ms_handle_reset con 0x55b81b428800 session 0x55b81dfaba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 456 ms_handle_reset con 0x55b81e0de800 session 0x55b81a771500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4554939 data_alloc: 234881024 data_used: 12857119
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 456 ms_handle_reset con 0x55b81b428800 session 0x55b81df62c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 179167232 unmapped: 43425792 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 456 handle_osd_map epochs [456,457], i have 456, src has [1,457]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 457 ms_handle_reset con 0x55b81b7d4000 session 0x55b81b38b340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 179167232 unmapped: 43425792 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.304566383s of 10.939286232s, submitted: 261
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 458 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81a800a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 458 heartbeat osd_stat(store_statfs(0x4e500b000/0x0/0x4ffc00000, data 0x148bfe3d/0x14b3d000, compress 0x0/0x0/0x0, omap 0x83cda, meta 0x602c326), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 458 ms_handle_reset con 0x55b81dfe3800 session 0x55b81ac04540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 179167232 unmapped: 43425792 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 179167232 unmapped: 43425792 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 179167232 unmapped: 43425792 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 458 ms_handle_reset con 0x55b81de4e400 session 0x55b81ac048c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4491741 data_alloc: 218103808 data_used: 6613794
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 459 heartbeat osd_stat(store_statfs(0x4e5009000/0x0/0x4ffc00000, data 0x148c1a67/0x14b41000, compress 0x0/0x0/0x0, omap 0x83ddd, meta 0x602c223), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183648256 unmapped: 59949056 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 459 heartbeat osd_stat(store_statfs(0x4e0006000/0x0/0x4ffc00000, data 0x198c3502/0x19b44000, compress 0x0/0x0/0x0, omap 0x83f68, meta 0x602c098), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192520192 unmapped: 51077120 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 184451072 unmapped: 59146240 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 184918016 unmapped: 58679296 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 189423616 unmapped: 54173696 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 459 handle_osd_map epochs [459,460], i have 460, src has [1,460]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6115149 data_alloc: 218103808 data_used: 6614379
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 460 heartbeat osd_stat(store_statfs(0x4d2c08000/0x0/0x4ffc00000, data 0x26cc3502/0x26f44000, compress 0x0/0x0/0x0, omap 0x83f68, meta 0x602c098), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 194002944 unmapped: 49594368 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182689792 unmapped: 60907520 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 460 ms_handle_reset con 0x55b81de4e400 session 0x55b81ac4efc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 460 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81da38000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.793249130s of 10.365959167s, submitted: 120
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 460 ms_handle_reset con 0x55b81b428800 session 0x55b81b887500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182689792 unmapped: 60907520 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182722560 unmapped: 60874752 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 461 heartbeat osd_stat(store_statfs(0x4ce002000/0x0/0x4ffc00000, data 0x2b8c6aad/0x2bb48000, compress 0x0/0x0/0x0, omap 0x84731, meta 0x602b8cf), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182779904 unmapped: 60817408 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 462 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81ac04fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 462 heartbeat osd_stat(store_statfs(0x4ce000000/0x0/0x4ffc00000, data 0x2b8c868e/0x2bb4a000, compress 0x0/0x0/0x0, omap 0x84835, meta 0x602b7cb), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6445655 data_alloc: 218103808 data_used: 6614636
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182779904 unmapped: 60817408 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182779904 unmapped: 60817408 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 462 ms_handle_reset con 0x55b81b7d4000 session 0x55b81ac2b880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182788096 unmapped: 60809216 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182788096 unmapped: 60809216 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 462 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b7aa1c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 462 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81ac2a700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182755328 unmapped: 60841984 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 463 ms_handle_reset con 0x55b81b428800 session 0x55b81dddd180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 463 handle_osd_map epochs [463,464], i have 463, src has [1,464]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6455901 data_alloc: 218103808 data_used: 6877472
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 60792832 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 464 heartbeat osd_stat(store_statfs(0x4ce001000/0x0/0x4ffc00000, data 0x2b8c86f0/0x2bb4b000, compress 0x0/0x0/0x0, omap 0x84bf8, meta 0x602b408), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81e0de000 session 0x55b81b443180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81b429c00 session 0x55b81b442380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81de4fc00 session 0x55b81ac4fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 464 heartbeat osd_stat(store_statfs(0x4cdff5000/0x0/0x4ffc00000, data 0x2b8cbd99/0x2bb53000, compress 0x0/0x0/0x0, omap 0x8550c, meta 0x602aaf4), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182820864 unmapped: 60776448 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81b428800 session 0x55b81b8876c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d095500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182853632 unmapped: 60743680 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81b887340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.774193764s of 10.909391403s, submitted: 74
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81e0de000 session 0x55b81b86fdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81b7d5400 session 0x55b81df68a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81aa62700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 184680448 unmapped: 58916864 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81dddcfc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81e0de000 session 0x55b81ac2bdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 464 handle_osd_map epochs [464,465], i have 464, src has [1,465]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 465 ms_handle_reset con 0x55b81de4fc00 session 0x55b81d1d8380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 465 heartbeat osd_stat(store_statfs(0x4cd243000/0x0/0x4ffc00000, data 0x2c680dc1/0x2c909000, compress 0x0/0x0/0x0, omap 0x85bee, meta 0x602a412), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 465 ms_handle_reset con 0x55b81b428800 session 0x55b81b442380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 465 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b38a000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181059584 unmapped: 62537728 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 465 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81cef1880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6538854 data_alloc: 218103808 data_used: 6877393
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181059584 unmapped: 62537728 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 465 ms_handle_reset con 0x55b81e0de000 session 0x55b81d095880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181075968 unmapped: 62521344 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 466 heartbeat osd_stat(store_statfs(0x4cd241000/0x0/0x4ffc00000, data 0x2c682936/0x2c90b000, compress 0x0/0x0/0x0, omap 0x865a7, meta 0x6029a59), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 466 ms_handle_reset con 0x55b81de2c000 session 0x55b81b4f3340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180740096 unmapped: 62857216 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 467 heartbeat osd_stat(store_statfs(0x4cce67000/0x0/0x4ffc00000, data 0x2ca594ee/0x2cce3000, compress 0x0/0x0/0x0, omap 0x866ab, meta 0x6029955), peers [0,2] op hist [0,0,0,0,1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 467 ms_handle_reset con 0x55b81e0e6400 session 0x55b81d137500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 63381504 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 467 ms_handle_reset con 0x55b81b7d5400 session 0x55b81cef1180
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180224000 unmapped: 63373312 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6578985 data_alloc: 218103808 data_used: 6877408
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180273152 unmapped: 63324160 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180314112 unmapped: 63283200 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180314112 unmapped: 63283200 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180314112 unmapped: 63283200 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 468 heartbeat osd_stat(store_statfs(0x4ccde0000/0x0/0x4ffc00000, data 0x2cadfb25/0x2cd6c000, compress 0x0/0x0/0x0, omap 0x86aff, meta 0x6029501), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180314112 unmapped: 63283200 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6580879 data_alloc: 218103808 data_used: 6877993
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180314112 unmapped: 63283200 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180314112 unmapped: 63283200 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180314112 unmapped: 63283200 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b428800 session 0x55b81d094700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81aa62380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81d137c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b428800 session 0x55b81d29cc40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.697557449s of 15.368885994s, submitted: 219
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181018624 unmapped: 62578688 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b8861c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b7d5400 session 0x55b81b7ab6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81e0e6400 session 0x55b81a771c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81e0de000 session 0x55b81d714540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b428800 session 0x55b81b442a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 468 heartbeat osd_stat(store_statfs(0x4ccddf000/0x0/0x4ffc00000, data 0x2cadfb35/0x2cd6d000, compress 0x0/0x0/0x0, omap 0x86aff, meta 0x6029501), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180543488 unmapped: 63053824 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6623407 data_alloc: 218103808 data_used: 6877993
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180543488 unmapped: 63053824 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180543488 unmapped: 63053824 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180543488 unmapped: 63053824 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180543488 unmapped: 63053824 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81da38c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 468 heartbeat osd_stat(store_statfs(0x4cb65c000/0x0/0x4ffc00000, data 0x2d0c2b35/0x2d350000, compress 0x0/0x0/0x0, omap 0x86aff, meta 0x71c9501), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180912128 unmapped: 62685184 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6629206 data_alloc: 218103808 data_used: 6880569
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180854784 unmapped: 62742528 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b7d5400 session 0x55b81ac4fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180871168 unmapped: 62726144 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 468 heartbeat osd_stat(store_statfs(0x4cb631000/0x0/0x4ffc00000, data 0x2d0ecb45/0x2d37b000, compress 0x0/0x0/0x0, omap 0x86dc7, meta 0x71c9239), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180871168 unmapped: 62726144 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180871168 unmapped: 62726144 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.936639786s of 10.162817955s, submitted: 23
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81d1f9400 session 0x55b81aa63500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180895744 unmapped: 62701568 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6664426 data_alloc: 234881024 data_used: 11914057
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81de8ac00 session 0x55b81df68000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180895744 unmapped: 62701568 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180895744 unmapped: 62701568 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180895744 unmapped: 62701568 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4cb62c000/0x0/0x4ffc00000, data 0x2d0ee6e1/0x2d37e000, compress 0x0/0x0/0x0, omap 0x87283, meta 0x71c8d7d), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180895744 unmapped: 62701568 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180895744 unmapped: 62701568 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6664449 data_alloc: 234881024 data_used: 11914057
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180903936 unmapped: 62693376 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 189251584 unmapped: 54345728 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b428800 session 0x55b81d4f3880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b4f3c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b7d5400 session 0x55b81d136700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81d1f9400 session 0x55b81d4f21c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81de4f800 session 0x55b81df68700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b428800 session 0x55b81d4f2e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81caec700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b7d5400 session 0x55b81dfaa000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81d1f9400 session 0x55b81df04a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190251008 unmapped: 53346304 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca675000/0x0/0x4ffc00000, data 0x2e681753/0x2e32f000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca675000/0x0/0x4ffc00000, data 0x2e681753/0x2e32f000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190341120 unmapped: 53256192 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190341120 unmapped: 53256192 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca675000/0x0/0x4ffc00000, data 0x2e681753/0x2e32f000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6845398 data_alloc: 234881024 data_used: 13646169
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190341120 unmapped: 53256192 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca675000/0x0/0x4ffc00000, data 0x2e681753/0x2e32f000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81dfebc00 session 0x55b81df05a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190349312 unmapped: 53248000 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b428800 session 0x55b81d4a2700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190349312 unmapped: 53248000 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.703333855s of 14.257908821s, submitted: 191
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b4f2fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b7d5400 session 0x55b81a770380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190701568 unmapped: 52895744 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190554112 unmapped: 53043200 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6845786 data_alloc: 234881024 data_used: 13708633
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca650000/0x0/0x4ffc00000, data 0x2e6ad763/0x2e35c000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190676992 unmapped: 52920320 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190676992 unmapped: 52920320 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190676992 unmapped: 52920320 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190676992 unmapped: 52920320 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190676992 unmapped: 52920320 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6855558 data_alloc: 234881024 data_used: 15322457
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca650000/0x0/0x4ffc00000, data 0x2e6ad763/0x2e35c000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190685184 unmapped: 52912128 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca64f000/0x0/0x4ffc00000, data 0x2e6ae763/0x2e35d000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190685184 unmapped: 52912128 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190685184 unmapped: 52912128 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca64f000/0x0/0x4ffc00000, data 0x2e6ae763/0x2e35d000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190685184 unmapped: 52912128 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.987191200s of 11.000589371s, submitted: 5
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 194338816 unmapped: 49258496 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6914262 data_alloc: 234881024 data_used: 15728985
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 194347008 unmapped: 49250304 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 193937408 unmapped: 49659904 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 193937408 unmapped: 49659904 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4c9e36000/0x0/0x4ffc00000, data 0x2eeae763/0x2eb5d000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 193945600 unmapped: 49651712 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81e0e6400 session 0x55b81ced5500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81de2cc00 session 0x55b81df04700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81d34f000 session 0x55b81d370fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4c9e36000/0x0/0x4ffc00000, data 0x2eeae763/0x2eb5d000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191627264 unmapped: 51970048 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b428800 session 0x55b81dc07340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6836900 data_alloc: 234881024 data_used: 10746185
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191635456 unmapped: 51961856 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191635456 unmapped: 51961856 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca45d000/0x0/0x4ffc00000, data 0x2e8a1753/0x2e54f000, compress 0x0/0x0/0x0, omap 0x8825e, meta 0x71c7da2), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191635456 unmapped: 51961856 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b7d5400 session 0x55b81b442540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 470 ms_handle_reset con 0x55b81de2cc00 session 0x55b81b7aa1c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 193019904 unmapped: 50577408 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 470 heartbeat osd_stat(store_statfs(0x4c9ddb000/0x0/0x4ffc00000, data 0x2f376351/0x2ebcf000, compress 0x0/0x0/0x0, omap 0x88869, meta 0x71c7797), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.350826263s of 10.048379898s, submitted: 197
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 471 ms_handle_reset con 0x55b81e0e6400 session 0x55b81b887a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 471 ms_handle_reset con 0x55b81e0e6400 session 0x55b81de1b500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 471 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b4f3500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 193101824 unmapped: 50495488 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6936130 data_alloc: 234881024 data_used: 10742105
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192864256 unmapped: 50733056 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 472 ms_handle_reset con 0x55b81b428800 session 0x55b81a800c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 472 ms_handle_reset con 0x55b81d34f000 session 0x55b81b4f2a80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192897024 unmapped: 50700288 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192905216 unmapped: 50692096 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 473 heartbeat osd_stat(store_statfs(0x4c9dd5000/0x0/0x4ffc00000, data 0x2f379add/0x2ebd5000, compress 0x0/0x0/0x0, omap 0x88d2f, meta 0x71c72d1), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192946176 unmapped: 50651136 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 473 ms_handle_reset con 0x55b81de2cc00 session 0x55b81caec8c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 473 ms_handle_reset con 0x55b81b7d5400 session 0x55b81d095dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192954368 unmapped: 50642944 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6832290 data_alloc: 234881024 data_used: 10742089
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192954368 unmapped: 50642944 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 473 ms_handle_reset con 0x55b81d1f9400 session 0x55b81dddc700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 473 ms_handle_reset con 0x55b81dfebc00 session 0x55b81caed340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 473 ms_handle_reset con 0x55b81b428800 session 0x55b81dddc1c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 473 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d393a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190947328 unmapped: 52649984 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 473 handle_osd_map epochs [473,474], i have 473, src has [1,474]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 474 ms_handle_reset con 0x55b81b428800 session 0x55b81d714700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190947328 unmapped: 52649984 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 52641792 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 474 heartbeat osd_stat(store_statfs(0x4cb232000/0x0/0x4ffc00000, data 0x2da7a255/0x2d72c000, compress 0x0/0x0/0x0, omap 0x89d98, meta 0x71c6268), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 474 ms_handle_reset con 0x55b81d1f9400 session 0x55b81d136000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 474 ms_handle_reset con 0x55b81b7d5400 session 0x55b81aa71340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 52641792 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 474 heartbeat osd_stat(store_statfs(0x4cb232000/0x0/0x4ffc00000, data 0x2da7a255/0x2d72c000, compress 0x0/0x0/0x0, omap 0x89e20, meta 0x71c61e0), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.172925949s of 11.021576881s, submitted: 110
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 474 ms_handle_reset con 0x55b81e0e6400 session 0x55b81dfab500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6758028 data_alloc: 218103808 data_used: 8437975
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 52641792 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 474 heartbeat osd_stat(store_statfs(0x4cb27f000/0x0/0x4ffc00000, data 0x2da7a265/0x2d72d000, compress 0x0/0x0/0x0, omap 0x8a084, meta 0x71c5f7c), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 228777984 unmapped: 27418624 heap: 256196608 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 220471296 unmapped: 39927808 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 195354624 unmapped: 65044480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 474 ms_handle_reset con 0x55b81e0e3c00 session 0x55b81d393c00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 474 heartbeat osd_stat(store_statfs(0x4c7a7f000/0x0/0x4ffc00000, data 0x3127a265/0x30f2d000, compress 0x0/0x0/0x0, omap 0x8a3f8, meta 0x71c5c08), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 474 ms_handle_reset con 0x55b81b428800 session 0x55b81b8bfdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 474 ms_handle_reset con 0x55b81b7d5400 session 0x55b81d4f36c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 199811072 unmapped: 60588032 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7472125 data_alloc: 218103808 data_used: 7979792
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 204398592 unmapped: 56000512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191979520 unmapped: 68419584 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200835072 unmapped: 59564032 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 198017024 unmapped: 62382080 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 475 heartbeat osd_stat(store_statfs(0x4b7a7c000/0x0/0x4ffc00000, data 0x4127bce4/0x40f30000, compress 0x0/0x0/0x0, omap 0x8a691, meta 0x71c596f), peers [0,2] op hist [0,0,0,0,0,0,1,1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202620928 unmapped: 57778176 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.388038635s of 10.005324364s, submitted: 118
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8614517 data_alloc: 218103808 data_used: 7980064
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 56393728 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 475 ms_handle_reset con 0x55b81d34f000 session 0x55b81ac4e700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 475 ms_handle_reset con 0x55b81dfebc00 session 0x55b81df636c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 475 ms_handle_reset con 0x55b81d1f9400 session 0x55b81df04700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 475 ms_handle_reset con 0x55b81b428800 session 0x55b81dddd340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 195633152 unmapped: 64765952 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 475 ms_handle_reset con 0x55b81b7d5400 session 0x55b81df62c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 475 ms_handle_reset con 0x55b81d34f000 session 0x55b81da39340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 62881792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 475 ms_handle_reset con 0x55b81dfebc00 session 0x55b81b38bc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 475 heartbeat osd_stat(store_statfs(0x4cb27d000/0x0/0x4ffc00000, data 0x2da7bcd4/0x2d72f000, compress 0x0/0x0/0x0, omap 0x8a97d, meta 0x71c5683), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 475 ms_handle_reset con 0x55b81e0e6400 session 0x55b81dddc380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 196599808 unmapped: 63799296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 475 handle_osd_map epochs [476,476], i have 475, src has [1,476]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 476 ms_handle_reset con 0x55b81b428800 session 0x55b81dfaba40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 476 ms_handle_reset con 0x55b81b7d5400 session 0x55b81a770fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 196599808 unmapped: 63799296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 476 ms_handle_reset con 0x55b81d34f000 session 0x55b81d4f3500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6869122 data_alloc: 218103808 data_used: 7980064
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 196599808 unmapped: 63799296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 196599808 unmapped: 63799296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 196624384 unmapped: 63774720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 477 handle_osd_map epochs [477,478], i have 477, src has [1,478]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 478 ms_handle_reset con 0x55b81dfebc00 session 0x55b81d1d8fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 478 ms_handle_reset con 0x55b81cf1b000 session 0x55b81ac4fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 478 heartbeat osd_stat(store_statfs(0x4cb27e000/0x0/0x4ffc00000, data 0x2d494400/0x2d72c000, compress 0x0/0x0/0x0, omap 0x8b497, meta 0x71c4b69), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 196640768 unmapped: 63758336 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 478 heartbeat osd_stat(store_statfs(0x4cca95000/0x0/0x4ffc00000, data 0x2b8e3f2d/0x2bb7a000, compress 0x0/0x0/0x0, omap 0x8b621, meta 0x71c49df), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 196640768 unmapped: 63758336 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.156767845s of 10.049361229s, submitted: 202
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6668998 data_alloc: 218103808 data_used: 6428079
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 196935680 unmapped: 63463424 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 480 ms_handle_reset con 0x55b81b428800 session 0x55b81dddddc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 480 ms_handle_reset con 0x55b81d34f000 session 0x55b81b8bf6c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 480 ms_handle_reset con 0x55b81b7d5400 session 0x55b81df69500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 197427200 unmapped: 62971904 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 481 ms_handle_reset con 0x55b81dfebc00 session 0x55b81aa63340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 481 ms_handle_reset con 0x55b81e0a3000 session 0x55b81df68fc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266773 data_alloc: 218103808 data_used: 6428079
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f6de3000/0x0/0x4ffc00000, data 0x192935d/0x1bc3000, compress 0x0/0x0/0x0, omap 0x8be45, meta 0x71c41bb), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f6de4000/0x0/0x4ffc00000, data 0x192ae48/0x1bc6000, compress 0x0/0x0/0x0, omap 0x8c5e3, meta 0x71c3a1d), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 483 ms_handle_reset con 0x55b81b428800 session 0x55b81df69340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3273005 data_alloc: 218103808 data_used: 6428079
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.765593529s of 11.252699852s, submitted: 254
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 483 handle_osd_map epochs [483,484], i have 483, src has [1,484]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 484 ms_handle_reset con 0x55b81b7d5400 session 0x55b81ac4ec40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 484 ms_handle_reset con 0x55b81d34f000 session 0x55b81d136700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f6ddd000/0x0/0x4ffc00000, data 0x192e557/0x1bcf000, compress 0x0/0x0/0x0, omap 0x8c97f, meta 0x71c3681), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191070208 unmapped: 69328896 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191070208 unmapped: 69328896 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 484 ms_handle_reset con 0x55b81dfebc00 session 0x55b81ac04540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 484 ms_handle_reset con 0x55b81ceb1000 session 0x55b81a800380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 484 ms_handle_reset con 0x55b81ceb3800 session 0x55b81b8bf880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 194576384 unmapped: 65822720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 484 ms_handle_reset con 0x55b81b428800 session 0x55b81caec1c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 484 ms_handle_reset con 0x55b81b7d5400 session 0x55b81dddc700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458333 data_alloc: 218103808 data_used: 6428079
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 485 ms_handle_reset con 0x55b81d34f000 session 0x55b81b7abdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192487424 unmapped: 67911680 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 485 ms_handle_reset con 0x55b81dfebc00 session 0x55b81ac4ea80
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192487424 unmapped: 67911680 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192487424 unmapped: 67911680 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 485 ms_handle_reset con 0x55b81b428800 session 0x55b81dfaa700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f3a7d000/0x0/0x4ffc00000, data 0x3ae91b7/0x3d8d000, compress 0x0/0x0/0x0, omap 0x8cf92, meta 0x836306e), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 485 ms_handle_reset con 0x55b81b7d5400 session 0x55b81ac2bdc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192487424 unmapped: 67911680 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 485 handle_osd_map epochs [486,486], i have 485, src has [1,486]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 486 ms_handle_reset con 0x55b81ceb3800 session 0x55b81b8868c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 486 ms_handle_reset con 0x55b81d34f000 session 0x55b81dfaa1c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192487424 unmapped: 67911680 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 486 heartbeat osd_stat(store_statfs(0x4f3a7b000/0x0/0x4ffc00000, data 0x3aead45/0x3d8f000, compress 0x0/0x0/0x0, omap 0x8d11b, meta 0x8362ee5), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464553 data_alloc: 218103808 data_used: 6428664
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192487424 unmapped: 67911680 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 486 ms_handle_reset con 0x55b81dfe9400 session 0x55b81da2f880
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.714389801s of 10.164520264s, submitted: 90
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 486 handle_osd_map epochs [486,487], i have 486, src has [1,487]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192503808 unmapped: 67895296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 487 ms_handle_reset con 0x55b81b428800 session 0x55b81dfaa700
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 487 ms_handle_reset con 0x55b81b7d5400 session 0x55b81aa63340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192520192 unmapped: 67878912 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 487 ms_handle_reset con 0x55b81ceb3800 session 0x55b81ac4fc00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192520192 unmapped: 67878912 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192520192 unmapped: 67878912 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468255 data_alloc: 218103808 data_used: 6428664
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192520192 unmapped: 67878912 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f3a78000/0x0/0x4ffc00000, data 0x3aee2fc/0x3d92000, compress 0x0/0x0/0x0, omap 0x8d985, meta 0x836267b), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192520192 unmapped: 67878912 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 488 ms_handle_reset con 0x55b81d2ad800 session 0x55b81d1d8e00
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192520192 unmapped: 67878912 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192520192 unmapped: 67878912 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 488 handle_osd_map epochs [488,489], i have 488, src has [1,489]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192733184 unmapped: 67665920 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 489 ms_handle_reset con 0x55b81dfe2c00 session 0x55b81dc06c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507105 data_alloc: 234881024 data_used: 11512925
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f3a74000/0x0/0x4ffc00000, data 0x3aefd8b/0x3d96000, compress 0x0/0x0/0x0, omap 0x8dde9, meta 0x8362217), peers [0,2] op hist [1])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 490 ms_handle_reset con 0x55b81b428800 session 0x55b81df048c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192724992 unmapped: 67674112 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 490 handle_osd_map epochs [490,491], i have 490, src has [1,491]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.309647560s of 10.379746437s, submitted: 58
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 491 ms_handle_reset con 0x55b81de52000 session 0x55b81a771500
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192733184 unmapped: 67665920 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 491 heartbeat osd_stat(store_statfs(0x4f3a6c000/0x0/0x4ffc00000, data 0x3af34c3/0x3d9c000, compress 0x0/0x0/0x0, omap 0x8e610, meta 0x83619f0), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 491 handle_osd_map epochs [492,492], i have 491, src has [1,492]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 491 handle_osd_map epochs [491,492], i have 492, src has [1,492]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192733184 unmapped: 67665920 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192782336 unmapped: 67616768 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 492 handle_osd_map epochs [493,493], i have 492, src has [1,493]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 493 ms_handle_reset con 0x55b81b7d5400 session 0x55b81da2e000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192798720 unmapped: 67600384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 493 ms_handle_reset con 0x55b81ceb3800 session 0x55b81df62c40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517610 data_alloc: 234881024 data_used: 11513197
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192798720 unmapped: 67600384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192798720 unmapped: 67600384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f3a69000/0x0/0x4ffc00000, data 0x3af6caf/0x3da1000, compress 0x0/0x0/0x0, omap 0x8ec7c, meta 0x8361384), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 198950912 unmapped: 61448192 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f3a69000/0x0/0x4ffc00000, data 0x3af6caf/0x3da1000, compress 0x0/0x0/0x0, omap 0x8ec7c, meta 0x8361384), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206233600 unmapped: 54165504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 201719808 unmapped: 58679296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 493 ms_handle_reset con 0x55b81d2ad800 session 0x55b81da2fa40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594014 data_alloc: 234881024 data_used: 12892626
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 201719808 unmapped: 58679296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 201719808 unmapped: 58679296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f2f18000/0x0/0x4ffc00000, data 0x4647d21/0x48f4000, compress 0x0/0x0/0x0, omap 0x8ef68, meta 0x8361098), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 201719808 unmapped: 58679296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.977608681s of 11.364136696s, submitted: 187
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 493 ms_handle_reset con 0x55b81b428800 session 0x55b81d370000
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 493 handle_osd_map epochs [494,494], i have 493, src has [1,494]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 494 ms_handle_reset con 0x55b81b7d5400 session 0x55b81b4436c0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200949760 unmapped: 59449344 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 494 handle_osd_map epochs [495,495], i have 494, src has [1,495]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 495 ms_handle_reset con 0x55b81ceb3800 session 0x55b81ac05dc0
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 495 ms_handle_reset con 0x55b81d2ad800 session 0x55b81d4f3340
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200957952 unmapped: 59441152 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3600822 data_alloc: 234881024 data_used: 12896836
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200957952 unmapped: 59441152 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 495 ms_handle_reset con 0x55b81de52000 session 0x55b81b4f3a40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 495 handle_osd_map epochs [496,496], i have 495, src has [1,496]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 495 handle_osd_map epochs [496,496], i have 496, src has [1,496]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 496 handle_osd_map epochs [496,497], i have 496, src has [1,497]
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 497 ms_handle_reset con 0x55b81de52000 session 0x55b81d714380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200990720 unmapped: 59408384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 497 ms_handle_reset con 0x55b81b428800 session 0x55b81a770380
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 497 heartbeat osd_stat(store_statfs(0x4f2f0a000/0x0/0x4ffc00000, data 0x464ec8b/0x48fe000, compress 0x0/0x0/0x0, omap 0x8ff35, meta 0x83600cb), peers [0,2] op hist [])
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200990720 unmapped: 59408384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200990720 unmapped: 59408384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 497 ms_handle_reset con 0x55b81d34f000 session 0x55b81d29cc40
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: osd.1 497 ms_handle_reset con 0x55b81b7d5400 session 0x55b81d29c540
Jan 31 00:04:56 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200990720 unmapped: 59408384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:13:31 np0005603435 nova_compute[239938]: 2026-01-31 05:13:31.818 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:13:32 np0005603435 nova_compute[239938]: 2026-01-31 05:13:32.045 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:13:32 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:13:32 np0005603435 rsyslogd[1007]: imjournal: 15069 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 31 00:13:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 00:13:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/776987595' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 00:13:33 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 00:13:33 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/776987595' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 00:13:34 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:13:34 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:13:36 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:13:36 np0005603435 nova_compute[239938]: 2026-01-31 05:13:36.822 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:13:36 np0005603435 nova_compute[239938]: 2026-01-31 05:13:36.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:13:36 np0005603435 nova_compute[239938]: 2026-01-31 05:13:36.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:13:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:13:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:13:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:13:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:13:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:13:36 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:13:37 np0005603435 nova_compute[239938]: 2026-01-31 05:13:37.048 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:13:38 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:13:38 np0005603435 nova_compute[239938]: 2026-01-31 05:13:38.883 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:13:39 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:13:40 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:13:40 np0005603435 nova_compute[239938]: 2026-01-31 05:13:40.886 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:13:40 np0005603435 nova_compute[239938]: 2026-01-31 05:13:40.887 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:13:40 np0005603435 nova_compute[239938]: 2026-01-31 05:13:40.914 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:13:40 np0005603435 nova_compute[239938]: 2026-01-31 05:13:40.914 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:13:40 np0005603435 nova_compute[239938]: 2026-01-31 05:13:40.914 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:13:40 np0005603435 nova_compute[239938]: 2026-01-31 05:13:40.914 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 00:13:40 np0005603435 nova_compute[239938]: 2026-01-31 05:13:40.915 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:13:41 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:13:41 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3840049803' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:13:41 np0005603435 nova_compute[239938]: 2026-01-31 05:13:41.459 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:13:41 np0005603435 nova_compute[239938]: 2026-01-31 05:13:41.627 239942 WARNING nova.virt.libvirt.driver [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 00:13:41 np0005603435 nova_compute[239938]: 2026-01-31 05:13:41.628 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4235MB free_disk=59.98775292560458GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 00:13:41 np0005603435 nova_compute[239938]: 2026-01-31 05:13:41.629 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:13:41 np0005603435 nova_compute[239938]: 2026-01-31 05:13:41.629 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:13:41 np0005603435 nova_compute[239938]: 2026-01-31 05:13:41.717 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 00:13:41 np0005603435 nova_compute[239938]: 2026-01-31 05:13:41.717 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 00:13:41 np0005603435 nova_compute[239938]: 2026-01-31 05:13:41.789 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 00:13:41 np0005603435 nova_compute[239938]: 2026-01-31 05:13:41.826 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:13:42 np0005603435 nova_compute[239938]: 2026-01-31 05:13:42.049 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:13:42 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:13:42 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 00:13:42 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2416133515' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 00:13:42 np0005603435 nova_compute[239938]: 2026-01-31 05:13:42.317 239942 DEBUG oslo_concurrency.processutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 00:13:42 np0005603435 nova_compute[239938]: 2026-01-31 05:13:42.325 239942 DEBUG nova.compute.provider_tree [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d0a6937-09c9-4e01-94bd-2812940db2bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 00:13:42 np0005603435 nova_compute[239938]: 2026-01-31 05:13:42.353 239942 DEBUG nova.scheduler.client.report [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Inventory has not changed for provider 4d0a6937-09c9-4e01-94bd-2812940db2bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 00:13:42 np0005603435 nova_compute[239938]: 2026-01-31 05:13:42.356 239942 DEBUG nova.compute.resource_tracker [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 00:13:42 np0005603435 nova_compute[239938]: 2026-01-31 05:13:42.356 239942 DEBUG oslo_concurrency.lockutils [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:13:44 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:13:44 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:13:45 np0005603435 nova_compute[239938]: 2026-01-31 05:13:45.357 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:13:45 np0005603435 nova_compute[239938]: 2026-01-31 05:13:45.357 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 00:13:45 np0005603435 nova_compute[239938]: 2026-01-31 05:13:45.357 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 00:13:45 np0005603435 nova_compute[239938]: 2026-01-31 05:13:45.378 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 00:13:45 np0005603435 nova_compute[239938]: 2026-01-31 05:13:45.380 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:13:45 np0005603435 nova_compute[239938]: 2026-01-31 05:13:45.380 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:13:45 np0005603435 nova_compute[239938]: 2026-01-31 05:13:45.380 239942 DEBUG nova.compute.manager [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 00:13:46 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:13:46 np0005603435 nova_compute[239938]: 2026-01-31 05:13:46.831 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:13:46 np0005603435 nova_compute[239938]: 2026-01-31 05:13:46.888 239942 DEBUG oslo_service.periodic_task [None req-da568c84-b201-4abf-8a0a-d422b6652378 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 00:13:47 np0005603435 nova_compute[239938]: 2026-01-31 05:13:47.052 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:13:48 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:13:49 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:13:50 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:13:51 np0005603435 nova_compute[239938]: 2026-01-31 05:13:51.842 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:13:52 np0005603435 nova_compute[239938]: 2026-01-31 05:13:52.052 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:13:52 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:13:54 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:13:54 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:13:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:13:55.938 156017 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 00:13:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:13:55.938 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 00:13:55 np0005603435 ovn_metadata_agent[155995]: 2026-01-31 05:13:55.939 156017 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 00:13:56 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:13:56 np0005603435 nova_compute[239938]: 2026-01-31 05:13:56.846 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:13:57 np0005603435 nova_compute[239938]: 2026-01-31 05:13:57.055 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:13:58 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:13:59 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:14:00 np0005603435 podman[289615]: 2026-01-31 05:14:00.109495384 +0000 UTC m=+0.067980575 container health_status 7440c7c67f8fa21fb1b272153d90267dd5df4e9c83b4b4729599c42f5a1381a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 00:14:00 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:14:00 np0005603435 podman[289616]: 2026-01-31 05:14:00.176107094 +0000 UTC m=+0.135064126 container health_status f06964cb4b6b3d920d9b9c4be01593a86b470534bbda0ed908364efcd2af4e4d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'd5bc271fd23826643cfbe8eae57821a00d775e47a63fb5b688a5916d7a8ec52d-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac-a84fc8cade432a3edefafed3e069909e98dbb851a5b3cdbb445f0d6fe0cb0bac'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 00:14:01 np0005603435 nova_compute[239938]: 2026-01-31 05:14:01.903 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:14:02 np0005603435 nova_compute[239938]: 2026-01-31 05:14:02.057 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:14:02 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:14:03 np0005603435 systemd-logind[816]: New session 55 of user zuul.
Jan 31 00:14:03 np0005603435 systemd[1]: Started Session 55 of User zuul.
Jan 31 00:14:04 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:14:04 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:14:05 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19422 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:14:06 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19424 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:14:06 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:14:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Optimize plan auto_2026-01-31_05:14:06
Jan 31 00:14:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 00:14:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] do_upmap
Jan 31 00:14:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] pools ['vms', '.rgw.root', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr']
Jan 31 00:14:06 np0005603435 ceph-mgr[75599]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 00:14:06 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 31 00:14:06 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1014572527' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 00:14:06 np0005603435 nova_compute[239938]: 2026-01-31 05:14:06.907 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:14:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:14:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:14:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:14:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:14:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 00:14:06 np0005603435 ceph-mgr[75599]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 00:14:07 np0005603435 nova_compute[239938]: 2026-01-31 05:14:07.058 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:14:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 00:14:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 00:14:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 00:14:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 00:14:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 00:14:08 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:14:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 00:14:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 00:14:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 00:14:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 00:14:08 np0005603435 ceph-mgr[75599]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 00:14:09 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:14:09 np0005603435 ovs-vsctl[289948]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 00:14:09 np0005603435 virtqemud[240256]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 00:14:10 np0005603435 virtqemud[240256]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 00:14:10 np0005603435 virtqemud[240256]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 00:14:10 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:14:10 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: cache status {prefix=cache status} (starting...)
Jan 31 00:14:10 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: client ls {prefix=client ls} (starting...)
Jan 31 00:14:10 np0005603435 lvm[290275]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 00:14:10 np0005603435 lvm[290275]: VG ceph_vg0 finished
Jan 31 00:14:10 np0005603435 lvm[290285]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 00:14:10 np0005603435 lvm[290285]: VG ceph_vg1 finished
Jan 31 00:14:10 np0005603435 lvm[290301]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 00:14:10 np0005603435 lvm[290301]: VG ceph_vg2 finished
Jan 31 00:14:11 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19428 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:14:11 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 00:14:11 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 00:14:11 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 00:14:11 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19430 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:14:11 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 00:14:11 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 00:14:11 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 00:14:11 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 31 00:14:11 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4126365617' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 31 00:14:11 np0005603435 nova_compute[239938]: 2026-01-31 05:14:11.942 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:14:11 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19434 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:14:12 np0005603435 nova_compute[239938]: 2026-01-31 05:14:12.059 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:14:12 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 00:14:12 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:14:12 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 00:14:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 00:14:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2264336635' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 00:14:12 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19438 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:14:12 np0005603435 ceph-mgr[75599]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 00:14:12 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr[75595]: 2026-01-31T05:14:12.436+0000 7f77961f6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 00:14:12 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: ops {prefix=ops} (starting...)
Jan 31 00:14:12 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 31 00:14:12 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3589406763' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 31 00:14:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 31 00:14:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253796997' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 31 00:14:13 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: session ls {prefix=session ls} (starting...)
Jan 31 00:14:13 np0005603435 ceph-mds[95922]: mds.cephfs.compute-0.xaqauc asok_command: status {prefix=status} (starting...)
Jan 31 00:14:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 31 00:14:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1236270801' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 31 00:14:13 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 31 00:14:13 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1398159597' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 00:14:13 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19448 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:14:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:14:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 31 00:14:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3502890213' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 00:14:14 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:14:14 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19452 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:14:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 31 00:14:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1918172518' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 00:14:14 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 31 00:14:14 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1535635922' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 31 00:14:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 00:14:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2278223834' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 00:14:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 31 00:14:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1169471401' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 31 00:14:15 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 31 00:14:15 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1554547036' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 00:14:15 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19464 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:14:15 np0005603435 ceph-mgr[75599]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 00:14:15 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr[75595]: 2026-01-31T05:14:15.809+0000 7f77961f6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 00:14:16 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:14:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 31 00:14:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/264061956' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 00:14:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 31 00:14:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3352704283' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 235 heartbeat osd_stat(store_statfs(0x4fb74a000/0x0/0x4ffc00000, data 0x5ea371/0x73e000, compress 0x0/0x0/0x0, omap 0x2be13, meta 0x3d441ed), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x5611197df400 session 0x56111b64f880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x5611195b3800 session 0x56111b7b0540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x56111aac9000 session 0x56111a52b6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 24141824 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532359 data_alloc: 218103808 data_used: 4679529
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 24141824 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x56111c913800 session 0x56111b55ea80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.897014618s of 10.060780525s, submitted: 70
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 24141824 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x5611195b2800 session 0x56111b195a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x5611197df400 session 0x56111b55f340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 24141824 heap: 133726208 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x56111aac9000 session 0x56111b1941c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x56111ad4c400 session 0x56111a52a540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x56111c997800 session 0x56111981b180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x5611195b2800 session 0x56111b51dc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x5611197df400 session 0x561119510700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x56111aac9000 session 0x561119832c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x56111ad4c400 session 0x561118ca61c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 236 ms_handle_reset con 0x56111c997400 session 0x56111981bdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 236 handle_osd_map epochs [236,237], i have 237, src has [1,237]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 237 ms_handle_reset con 0x5611195b3800 session 0x56111a5dafc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110379008 unmapped: 27025408 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 237 heartbeat osd_stat(store_statfs(0x4faf80000/0x0/0x4ffc00000, data 0xdb3a50/0xf0a000, compress 0x0/0x0/0x0, omap 0x2c551, meta 0x3d43aaf), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 237 handle_osd_map epochs [237,238], i have 237, src has [1,238]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110379008 unmapped: 27025408 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1591955 data_alloc: 218103808 data_used: 4679829
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110379008 unmapped: 27025408 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110379008 unmapped: 27025408 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110379008 unmapped: 27025408 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110395392 unmapped: 27009024 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 239 ms_handle_reset con 0x5611195b2800 session 0x56111ab3ea80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 26992640 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 239 heartbeat osd_stat(store_statfs(0x4faf7b000/0x0/0x4ffc00000, data 0xdb54eb/0xf0d000, compress 0x0/0x0/0x0, omap 0x2c72f, meta 0x3d438d1), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1596400 data_alloc: 218103808 data_used: 4679829
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 239 ms_handle_reset con 0x5611197df400 session 0x56111b595880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 239 heartbeat osd_stat(store_statfs(0x4faf79000/0x0/0x4ffc00000, data 0xdb6f8d/0xf11000, compress 0x0/0x0/0x0, omap 0x2caad, meta 0x3d43553), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 110411776 unmapped: 26992640 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112533504 unmapped: 24870912 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.391992569s of 10.703784943s, submitted: 71
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 239 ms_handle_reset con 0x56111c996c00 session 0x56111ae59180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 24600576 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 240 ms_handle_reset con 0x56111c996000 session 0x56111ae58700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 24510464 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 240 handle_osd_map epochs [240,241], i have 240, src has [1,241]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x56111cfd1000 session 0x56111a5db500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x56111c997000 session 0x56111b51b880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 24510464 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 heartbeat osd_stat(store_statfs(0x4faf72000/0x0/0x4ffc00000, data 0xdba727/0xf18000, compress 0x0/0x0/0x0, omap 0x2d305, meta 0x3d42cfb), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1645503 data_alloc: 234881024 data_used: 11610261
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611195b2800 session 0x56111b51a1c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611195b3800 session 0x56111a5db340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611197df400 session 0x561118ca6a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 112918528 unmapped: 24485888 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611197df400 session 0x56111b51a000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611195b2800 session 0x56111b5941c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611195b3800 session 0x561119549a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x56111cfd1000 session 0x56111b594000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x56111c996000 session 0x56111990ec40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 20439040 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611195b2800 session 0x56111990fc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611195b3800 session 0x561119832380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x5611197df400 session 0x56111b7b1c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x56111c996000 session 0x56111b55f880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 ms_handle_reset con 0x56111cfd1000 session 0x56111990e540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x56111c997000 session 0x56111a5dac40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 115884032 unmapped: 21520384 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x5611195b3800 session 0x56111b594540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 heartbeat osd_stat(store_statfs(0x4fa948000/0x0/0x4ffc00000, data 0x13e4799/0x1544000, compress 0x0/0x0/0x0, omap 0x2d683, meta 0x3d4297d), peers [0,1] op hist [2])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x5611197df400 session 0x56111b51b340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 21504000 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x5611195b2800 session 0x56111990e000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x56111c996000 session 0x56111ab0ac40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 21504000 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1699566 data_alloc: 234881024 data_used: 11622549
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x56111c996000 session 0x5611195116c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x5611195b2800 session 0x56111b39c540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 16588800 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x5611195b3800 session 0x56111b595340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x5611197df400 session 0x56111ac5b880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x56111c997000 session 0x56111a52afc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x5611195b2800 session 0x56111b51a540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 122249216 unmapped: 15155200 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x5611195b3800 session 0x56111990f180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 heartbeat osd_stat(store_statfs(0x4fa241000/0x0/0x4ffc00000, data 0x1adb38a/0x1c3c000, compress 0x0/0x0/0x0, omap 0x2dc52, meta 0x3d423ae), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 14442496 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x56111cfd1000 session 0x56111b195340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.318803787s of 10.851205826s, submitted: 166
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 ms_handle_reset con 0x56111c996c00 session 0x56111990e8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 heartbeat osd_stat(store_statfs(0x4fa242000/0x0/0x4ffc00000, data 0x1adb37a/0x1c3b000, compress 0x0/0x0/0x0, omap 0x2dc52, meta 0x3d423ae), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 128901120 unmapped: 8503296 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 242 handle_osd_map epochs [242,243], i have 243, src has [1,243]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x56111b79e000 session 0x56111990f880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x5611195b2800 session 0x56111b55ee00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 heartbeat osd_stat(store_statfs(0x4fa242000/0x0/0x4ffc00000, data 0x1adb37a/0x1c3b000, compress 0x0/0x0/0x0, omap 0x2dc52, meta 0x3d423ae), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126451712 unmapped: 10952704 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1791894 data_alloc: 234881024 data_used: 19581589
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126451712 unmapped: 10952704 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126451712 unmapped: 10952704 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126451712 unmapped: 10952704 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x5611195b3800 session 0x56111b55f6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126451712 unmapped: 10952704 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 heartbeat osd_stat(store_statfs(0x4fa24d000/0x0/0x4ffc00000, data 0x1adcf40/0x1c3d000, compress 0x0/0x0/0x0, omap 0x2e575, meta 0x3d41a8b), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126451712 unmapped: 10952704 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1793684 data_alloc: 234881024 data_used: 19589781
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x56111c996c00 session 0x56111981a1c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 heartbeat osd_stat(store_statfs(0x4fa24d000/0x0/0x4ffc00000, data 0x1adcf40/0x1c3d000, compress 0x0/0x0/0x0, omap 0x2e575, meta 0x3d41a8b), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x56111c557000 session 0x5611187a61c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x56111cfd1000 session 0x56111a5db6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x5611195b2800 session 0x56111b64ec40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126607360 unmapped: 10797056 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x5611195b3800 session 0x56111b7b0000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 heartbeat osd_stat(store_statfs(0x4fa24d000/0x0/0x4ffc00000, data 0x1adcf40/0x1c3d000, compress 0x0/0x0/0x0, omap 0x2e575, meta 0x3d41a8b), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126623744 unmapped: 10780672 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 heartbeat osd_stat(store_statfs(0x4f9ad7000/0x0/0x4ffc00000, data 0x224d15a/0x23b5000, compress 0x0/0x0/0x0, omap 0x2e9f9, meta 0x3d41607), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x56111c557000 session 0x56111ae59c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x56111c996c00 session 0x5611187a6380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 ms_handle_reset con 0x56111c556c00 session 0x56111981b6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 132186112 unmapped: 5218304 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 244 ms_handle_reset con 0x5611195b3800 session 0x561119510c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.004192352s of 10.381390572s, submitted: 160
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 244 ms_handle_reset con 0x56111c996c00 session 0x56111b51d500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 244 ms_handle_reset con 0x56111c557000 session 0x561118b87dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 130138112 unmapped: 7266304 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 244 ms_handle_reset con 0x5611195b2800 session 0x561119832700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 245 heartbeat osd_stat(store_statfs(0x4f98d3000/0x0/0x4ffc00000, data 0x244fcc0/0x25b9000, compress 0x0/0x0/0x0, omap 0x2ec0e, meta 0x3d413f2), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 7184384 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1896278 data_alloc: 234881024 data_used: 21555861
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 246 ms_handle_reset con 0x56111c556800 session 0x5611187a7880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 246 ms_handle_reset con 0x5611195b2800 session 0x56111b7b08c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 246 heartbeat osd_stat(store_statfs(0x4f98c9000/0x0/0x4ffc00000, data 0x24532f7/0x25bf000, compress 0x0/0x0/0x0, omap 0x2f2a8, meta 0x3d40d58), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 129769472 unmapped: 7634944 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 246 ms_handle_reset con 0x56111a4ec400 session 0x561119833a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 246 heartbeat osd_stat(store_statfs(0x4f98c9000/0x0/0x4ffc00000, data 0x24532f7/0x25bf000, compress 0x0/0x0/0x0, omap 0x2f2a8, meta 0x3d40d58), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 246 ms_handle_reset con 0x56111b193400 session 0x561118ca6380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 246 ms_handle_reset con 0x56111b7af400 session 0x56111ae59dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 129875968 unmapped: 7528448 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 247 ms_handle_reset con 0x56111a592400 session 0x56111b51bc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 247 ms_handle_reset con 0x5611195b2800 session 0x56111b51ba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 247 heartbeat osd_stat(store_statfs(0x4f98cb000/0x0/0x4ffc00000, data 0x2454e11/0x25bf000, compress 0x0/0x0/0x0, omap 0x2f610, meta 0x3d409f0), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 129900544 unmapped: 7503872 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 247 handle_osd_map epochs [247,248], i have 247, src has [1,248]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 248 ms_handle_reset con 0x561118cfd400 session 0x56111a52a000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 248 ms_handle_reset con 0x56111b193400 session 0x56111b39ca80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 130072576 unmapped: 7331840 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 248 ms_handle_reset con 0x56111b7aec00 session 0x56111a52ba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 248 ms_handle_reset con 0x56111a4ec400 session 0x56111b51aa80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 130072576 unmapped: 7331840 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1899906 data_alloc: 234881024 data_used: 21556462
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 248 ms_handle_reset con 0x561118cfd400 session 0x56111990f340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 248 heartbeat osd_stat(store_statfs(0x4f98a4000/0x0/0x4ffc00000, data 0x2478e2a/0x25e3000, compress 0x0/0x0/0x0, omap 0x2f7b8, meta 0x3d40848), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 130088960 unmapped: 7315456 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 248 handle_osd_map epochs [248,249], i have 249, src has [1,249]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 130088960 unmapped: 7315456 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 249 ms_handle_reset con 0x5611195b2800 session 0x56111b55e540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 249 ms_handle_reset con 0x56111b7aec00 session 0x56111b595a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 249 ms_handle_reset con 0x56111b193400 session 0x56111981b340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 7290880 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 249 heartbeat osd_stat(store_statfs(0x4f98a4000/0x0/0x4ffc00000, data 0x247aa1a/0x25e6000, compress 0x0/0x0/0x0, omap 0x2faa3, meta 0x3d4055d), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 249 ms_handle_reset con 0x56111b7af400 session 0x56111a5db880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 7290880 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 249 handle_osd_map epochs [249,250], i have 249, src has [1,250]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.307342529s of 11.399452209s, submitted: 111
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131178496 unmapped: 6225920 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1904215 data_alloc: 234881024 data_used: 21565239
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131178496 unmapped: 6225920 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 250 ms_handle_reset con 0x5611195b2800 session 0x56111ae58e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 6201344 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 250 ms_handle_reset con 0x561118cfd400 session 0x56111b7b16c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 6201344 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 251 heartbeat osd_stat(store_statfs(0x4f989a000/0x0/0x4ffc00000, data 0x2482c9b/0x25f0000, compress 0x0/0x0/0x0, omap 0x2fc4b, meta 0x3d403b5), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 6201344 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 251 ms_handle_reset con 0x56111b7aec00 session 0x56111b51afc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 251 handle_osd_map epochs [251,252], i have 251, src has [1,252]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 252 ms_handle_reset con 0x56111b193400 session 0x5611187a6700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131235840 unmapped: 6168576 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1911123 data_alloc: 234881024 data_used: 21565239
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 253 ms_handle_reset con 0x5611197df400 session 0x56111b64e8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 253 ms_handle_reset con 0x56111c996000 session 0x56111ac5ba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 254 ms_handle_reset con 0x56111a592000 session 0x56111ab0bdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 127754240 unmapped: 9650176 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 254 ms_handle_reset con 0x561118cfd400 session 0x56111b7b0c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 127770624 unmapped: 9633792 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 254 ms_handle_reset con 0x56111b193400 session 0x56111b7b1880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 254 handle_osd_map epochs [254,255], i have 254, src has [1,255]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 255 ms_handle_reset con 0x56111a593c00 session 0x56111b64efc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 127279104 unmapped: 10125312 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 255 heartbeat osd_stat(store_statfs(0x4fa87e000/0x0/0x4ffc00000, data 0x149bf34/0x160e000, compress 0x0/0x0/0x0, omap 0x301dd, meta 0x3d3fe23), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 255 ms_handle_reset con 0x56111aac9000 session 0x56111ae58c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 255 ms_handle_reset con 0x56111ad4c400 session 0x561119548a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 255 heartbeat osd_stat(store_statfs(0x4fa877000/0x0/0x4ffc00000, data 0x149eafc/0x1612000, compress 0x0/0x0/0x0, omap 0x30385, meta 0x3d3fc7b), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 127279104 unmapped: 10125312 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 255 ms_handle_reset con 0x56111b7aec00 session 0x56111b64ea80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 256 ms_handle_reset con 0x56111a592000 session 0x561118d89180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 256 ms_handle_reset con 0x5611195b2800 session 0x56111ae588c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 256 ms_handle_reset con 0x561118cfd400 session 0x56111b51a380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 256 ms_handle_reset con 0x56111a592000 session 0x56111b64f6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 17522688 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1636915 data_alloc: 218103808 data_used: 4681898
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.910606384s of 11.116304398s, submitted: 128
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 256 ms_handle_reset con 0x56111ad4c400 session 0x56111b595a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 17522688 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 17522688 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 257 ms_handle_reset con 0x56111b7aec00 session 0x56111a52ba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 258 ms_handle_reset con 0x56111a593c00 session 0x56111b39cfc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 17522688 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 258 ms_handle_reset con 0x56111aac9000 session 0x56111ac5ba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 258 heartbeat osd_stat(store_statfs(0x4fb6ff000/0x0/0x4ffc00000, data 0x612324/0x789000, compress 0x0/0x0/0x0, omap 0x30863, meta 0x3d3f79d), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 17522688 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119947264 unmapped: 17457152 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1646575 data_alloc: 218103808 data_used: 4691126
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 17391616 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 260 ms_handle_reset con 0x56111a593c00 session 0x561119548a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 260 ms_handle_reset con 0x56111a592000 session 0x56111ae58380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 17326080 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 17317888 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 262 ms_handle_reset con 0x561118cfd400 session 0x56111a5da700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 262 ms_handle_reset con 0x56111ad4c400 session 0x56111a52aa80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119660544 unmapped: 17743872 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 262 heartbeat osd_stat(store_statfs(0x4fb6f6000/0x0/0x4ffc00000, data 0x61928a/0x792000, compress 0x0/0x0/0x0, omap 0x30d9b, meta 0x3d3f265), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 263 ms_handle_reset con 0x56111b7aec00 session 0x56111b595500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 17735680 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 263 ms_handle_reset con 0x561118cfd400 session 0x56111a5dba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1656181 data_alloc: 218103808 data_used: 4691584
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 17735680 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.941582203s of 10.156404495s, submitted: 118
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 263 heartbeat osd_stat(store_statfs(0x4fb6f4000/0x0/0x4ffc00000, data 0x61b353/0x796000, compress 0x0/0x0/0x0, omap 0x30f43, meta 0x3d3f0bd), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 17735680 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 17735680 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 17727488 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 264 ms_handle_reset con 0x56111a592000 session 0x56111ae58c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 264 ms_handle_reset con 0x56111a593c00 session 0x56111ac5b500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 264 ms_handle_reset con 0x56111aac9000 session 0x5611195496c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 264 handle_osd_map epochs [264,265], i have 264, src has [1,265]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 19423232 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1661431 data_alloc: 218103808 data_used: 4691584
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 19423232 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 265 ms_handle_reset con 0x561118cfd400 session 0x561118ca6540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 19415040 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 265 heartbeat osd_stat(store_statfs(0x4fb6ef000/0x0/0x4ffc00000, data 0x61e768/0x79d000, compress 0x0/0x0/0x0, omap 0x3067f, meta 0x3d3f981), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 19415040 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 19415040 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 19415040 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1665239 data_alloc: 218103808 data_used: 4691584
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 265 ms_handle_reset con 0x56111a593c00 session 0x56111b7b1180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117997568 unmapped: 19406848 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.216315269s of 10.641911507s, submitted: 107
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 266 ms_handle_reset con 0x56111b7aec00 session 0x561119832e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118005760 unmapped: 19398656 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 266 ms_handle_reset con 0x56111b193400 session 0x56111b55e8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 266 heartbeat osd_stat(store_statfs(0x4fb6e9000/0x0/0x4ffc00000, data 0x62039e/0x7a1000, compress 0x0/0x0/0x0, omap 0x30817, meta 0x3d3f7e9), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 267 ms_handle_reset con 0x56111ad4c400 session 0x56111b51d6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 267 ms_handle_reset con 0x56111a592000 session 0x56111b55e700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 267 ms_handle_reset con 0x56111c996000 session 0x56111990e380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 19341312 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118071296 unmapped: 19333120 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 268 ms_handle_reset con 0x561118cfd400 session 0x56111981a540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 268 heartbeat osd_stat(store_statfs(0x4fb6e3000/0x0/0x4ffc00000, data 0x621f72/0x7a4000, compress 0x0/0x0/0x0, omap 0x3066f, meta 0x3d3f991), peers [0,1] op hist [1])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 268 ms_handle_reset con 0x56111a593c00 session 0x56111b5956c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118112256 unmapped: 19292160 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673797 data_alloc: 218103808 data_used: 4691584
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 268 heartbeat osd_stat(store_statfs(0x4fb6e5000/0x0/0x4ffc00000, data 0x623b60/0x7a5000, compress 0x0/0x0/0x0, omap 0x2f9d7, meta 0x3d40629), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118112256 unmapped: 19292160 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 268 ms_handle_reset con 0x56111b193400 session 0x56111981b500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 16556032 heap: 137404416 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 268 ms_handle_reset con 0x561118cfd400 session 0x56111b51ae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 268 ms_handle_reset con 0x56111a592000 session 0x56111a5da000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 27934720 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 268 ms_handle_reset con 0x56111c996000 session 0x56111a52ae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 27901952 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 269 ms_handle_reset con 0x56111a593c00 session 0x56111ab0b6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 269 ms_handle_reset con 0x56111b334000 session 0x56111b39c700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118030336 unmapped: 27844608 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1732110 data_alloc: 218103808 data_used: 4692253
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 269 ms_handle_reset con 0x56111b334000 session 0x56111b55fa40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 270 ms_handle_reset con 0x56111b7aec00 session 0x56111724ddc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 270 ms_handle_reset con 0x561118cfd400 session 0x56111b7b0700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 270 heartbeat osd_stat(store_statfs(0x4faf0c000/0x0/0x4ffc00000, data 0xdfa728/0xf7e000, compress 0x0/0x0/0x0, omap 0x2ff96, meta 0x3d4006a), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 270 ms_handle_reset con 0x56111a592000 session 0x56111b51c380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118030336 unmapped: 27844608 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 271 ms_handle_reset con 0x56111a593c00 session 0x56111a5dae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 271 ms_handle_reset con 0x561118cfd400 session 0x56111b55e000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 27836416 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.808876991s of 11.294489861s, submitted: 125
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 271 ms_handle_reset con 0x56111a592000 session 0x56111ae58c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 27836416 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 271 heartbeat osd_stat(store_statfs(0x4faf00000/0x0/0x4ffc00000, data 0xdfdfc4/0xf88000, compress 0x0/0x0/0x0, omap 0x303ca, meta 0x3d3fc36), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 27836416 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 271 ms_handle_reset con 0x56111b7aec00 session 0x56111a52a1c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 271 ms_handle_reset con 0x56111c996000 session 0x56111ab3ec40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 271 ms_handle_reset con 0x56111b334400 session 0x561119511880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x561118cfd400 session 0x561118ca6e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111c996000 session 0x56111ac5ba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111a592000 session 0x56111b39cc40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111b335000 session 0x56111b51d340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111b334c00 session 0x56111b39ce00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111b7aec00 session 0x56111b55f6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111b334000 session 0x56111a5da700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x561118cfd400 session 0x56111981b500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111a592000 session 0x56111981a540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111b335000 session 0x561119832e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 272 ms_handle_reset con 0x56111c996000 session 0x56111b55f340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 26796032 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1789065 data_alloc: 218103808 data_used: 4692968
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 273 ms_handle_reset con 0x56111a592000 session 0x56111b64f500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 26796032 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 273 ms_handle_reset con 0x56111b334000 session 0x56111b51c540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 273 ms_handle_reset con 0x56111b334800 session 0x561119548a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 273 ms_handle_reset con 0x561118cfd400 session 0x561118ca6540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 273 ms_handle_reset con 0x561118cfd400 session 0x56111a52afc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119078912 unmapped: 26796032 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 273 ms_handle_reset con 0x56111a592000 session 0x56111b39c540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 26779648 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 273 heartbeat osd_stat(store_statfs(0x4fa9ee000/0x0/0x4ffc00000, data 0x130e3be/0x149c000, compress 0x0/0x0/0x0, omap 0x305e4, meta 0x3d3fa1c), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 273 handle_osd_map epochs [274,274], i have 274, src has [1,274]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 274 ms_handle_reset con 0x56111b334000 session 0x56111a52aa80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 274 ms_handle_reset con 0x56111b334800 session 0x56111b7b1340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 26779648 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1794651 data_alloc: 218103808 data_used: 4694833
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 275 ms_handle_reset con 0x56111c996000 session 0x56111b51d180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 25731072 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 275 ms_handle_reset con 0x561118cfd400 session 0x56111ae58380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 25731072 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 275 heartbeat osd_stat(store_statfs(0x4fa9e9000/0x0/0x4ffc00000, data 0x130fe8c/0x14a1000, compress 0x0/0x0/0x0, omap 0x307c5, meta 0x3d3f83b), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 26140672 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 275 handle_osd_map epochs [275,276], i have 275, src has [1,276]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.466961861s of 10.023332596s, submitted: 90
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119865344 unmapped: 26009600 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119865344 unmapped: 26009600 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 276 ms_handle_reset con 0x56111b7aec00 session 0x56111b39d180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1833274 data_alloc: 234881024 data_used: 9608086
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 277 ms_handle_reset con 0x56111b335400 session 0x56111724ddc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 277 ms_handle_reset con 0x56111a592000 session 0x56111b64ec40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119873536 unmapped: 26001408 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119873536 unmapped: 26001408 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 277 heartbeat osd_stat(store_statfs(0x4fa9e2000/0x0/0x4ffc00000, data 0x1313523/0x14a6000, compress 0x0/0x0/0x0, omap 0x30a2c, meta 0x3d3f5d4), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 277 ms_handle_reset con 0x56111b335800 session 0x56111b7b1c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 25976832 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 277 heartbeat osd_stat(store_statfs(0x4fa9e1000/0x0/0x4ffc00000, data 0x1313533/0x14a7000, compress 0x0/0x0/0x0, omap 0x30a2c, meta 0x3d3f5d4), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 25976832 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 25976832 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1836723 data_alloc: 234881024 data_used: 9608086
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 278 ms_handle_reset con 0x561118cfd400 session 0x56111b64ec40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 25976832 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 278 ms_handle_reset con 0x56111b335c00 session 0x56111ae59c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 278 ms_handle_reset con 0x56111a592000 session 0x56111b51a8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 25976832 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 278 ms_handle_reset con 0x56111b335400 session 0x56111b594700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 21643264 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 278 handle_osd_map epochs [278,279], i have 278, src has [1,279]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 278 handle_osd_map epochs [279,279], i have 279, src has [1,279]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 279 ms_handle_reset con 0x56111b7aec00 session 0x561118b87a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.723967552s of 10.265779495s, submitted: 127
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 279 heartbeat osd_stat(store_statfs(0x4fa3b1000/0x0/0x4ffc00000, data 0x193f0c1/0x1ad3000, compress 0x0/0x0/0x0, omap 0x30d74, meta 0x3d3f28c), peers [0,1] op hist [0,0,0,1])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 21372928 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 279 heartbeat osd_stat(store_statfs(0x4fa264000/0x0/0x4ffc00000, data 0x1a83cb1/0x1c18000, compress 0x0/0x0/0x0, omap 0x30dfc, meta 0x3d3f204), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 279 ms_handle_reset con 0x56111b335800 session 0x561118ca68c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 19808256 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1891109 data_alloc: 234881024 data_used: 10620151
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 279 ms_handle_reset con 0x56111a592000 session 0x5611195116c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 279 ms_handle_reset con 0x561118cfd400 session 0x56111b39c540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 21446656 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124887040 unmapped: 20987904 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 280 ms_handle_reset con 0x56111b335c00 session 0x561118b876c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 281 ms_handle_reset con 0x56111b7ad400 session 0x56111b7b0a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 20971520 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 282 ms_handle_reset con 0x56111b335400 session 0x56111a52afc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 282 heartbeat osd_stat(store_statfs(0x4fa351000/0x0/0x4ffc00000, data 0x19a3a52/0x1b39000, compress 0x0/0x0/0x0, omap 0x311be, meta 0x3d3ee42), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 282 ms_handle_reset con 0x561118cfd400 session 0x56111990efc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 20971520 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 20971520 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 282 handle_osd_map epochs [282,283], i have 282, src has [1,283]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1897339 data_alloc: 234881024 data_used: 10706766
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124903424 unmapped: 20971520 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 283 ms_handle_reset con 0x56111a592000 session 0x56111981a540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 125034496 unmapped: 20840448 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 20701184 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 20701184 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 284 heartbeat osd_stat(store_statfs(0x4fa326000/0x0/0x4ffc00000, data 0x19cb123/0x1b64000, compress 0x0/0x0/0x0, omap 0x313d8, meta 0x3d3ec28), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 284 ms_handle_reset con 0x56111b335800 session 0x56111990ee00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.474483013s of 11.017654419s, submitted: 121
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124911616 unmapped: 20963328 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 285 ms_handle_reset con 0x56111b335c00 session 0x56111a5dba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1905927 data_alloc: 234881024 data_used: 10706766
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 125960192 unmapped: 19914752 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 125960192 unmapped: 19914752 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 286 heartbeat osd_stat(store_statfs(0x4fa31f000/0x0/0x4ffc00000, data 0x19ce75c/0x1b69000, compress 0x0/0x0/0x0, omap 0x315f2, meta 0x3d3ea0e), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 19849216 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 19849216 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 286 ms_handle_reset con 0x561118cfd400 session 0x56111b39c700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 19849216 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1911086 data_alloc: 234881024 data_used: 10706766
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 19849216 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 286 ms_handle_reset con 0x56111b335400 session 0x56111ac5ba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 286 handle_osd_map epochs [286,287], i have 287, src has [1,287]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 287 ms_handle_reset con 0x56111a592000 session 0x56111b51a700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 287 heartbeat osd_stat(store_statfs(0x4fa304000/0x0/0x4ffc00000, data 0x19e9324/0x1b86000, compress 0x0/0x0/0x0, omap 0x3179a, meta 0x3d3e866), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126033920 unmapped: 19841024 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 287 ms_handle_reset con 0x56111b7af800 session 0x56111ae58000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 126033920 unmapped: 19841024 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 287 ms_handle_reset con 0x56111aac9c00 session 0x56111b55fdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 288 ms_handle_reset con 0x56111aac9400 session 0x56111b7b01c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 288 ms_handle_reset con 0x56111b335800 session 0x56111ae58c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 21389312 heap: 145874944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 288 ms_handle_reset con 0x56111a592000 session 0x56111a52ba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 288 ms_handle_reset con 0x561118cfd400 session 0x56111b7b1dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 288 ms_handle_reset con 0x56111b335400 session 0x561119548a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.751646996s of 10.096765518s, submitted: 118
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 289 ms_handle_reset con 0x561118cfd400 session 0x56111b39ddc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 289 ms_handle_reset con 0x56111aac9400 session 0x56111ae59340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 37920768 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x56111b335800 session 0x56111990e700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x56111a592000 session 0x561119511c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2105559 data_alloc: 234881024 data_used: 17098094
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133496832 unmapped: 37863424 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x56111b7af800 session 0x56111b64e540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x56111a592000 session 0x56111b7b0fc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x561118cfd400 session 0x561119511880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x56111aac9400 session 0x56111b51bdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 290 heartbeat osd_stat(store_statfs(0x4f8895000/0x0/0x4ffc00000, data 0x3450a38/0x35f2000, compress 0x0/0x0/0x0, omap 0x31be4, meta 0x3d3e41c), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x56111b335800 session 0x56111b55e000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133234688 unmapped: 38125568 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x56111aac8000 session 0x56111b64fa40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 290 ms_handle_reset con 0x56111af3c400 session 0x56111b64e540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133234688 unmapped: 38125568 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 38084608 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131006464 unmapped: 40353792 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 291 ms_handle_reset con 0x561118cfd400 session 0x56111b595a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2104000 data_alloc: 234881024 data_used: 17099004
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 40337408 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111a592000 session 0x56111b7b0700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 292 heartbeat osd_stat(store_statfs(0x4f8894000/0x0/0x4ffc00000, data 0x345266e/0x35f6000, compress 0x0/0x0/0x0, omap 0x32024, meta 0x3d3dfdc), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111aac9400 session 0x561119510380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111b335800 session 0x56111b51c380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x561118cfd400 session 0x56111a52b880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111a592000 session 0x56111b7b0c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 40337408 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111aac9400 session 0x56111b39cfc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111af3d000 session 0x561119511c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111af3c400 session 0x56111b55f880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x561118cfd400 session 0x5611187a6000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111a592000 session 0x56111b51cfc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111aac9400 session 0x56111ae59a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 292 ms_handle_reset con 0x56111af3c400 session 0x561118ca6fc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 40337408 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 293 ms_handle_reset con 0x56111af3d000 session 0x56111b64efc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 293 ms_handle_reset con 0x56111af3d400 session 0x561118b87a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 40337408 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 40337408 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 293 ms_handle_reset con 0x561118cfd400 session 0x56111b39c380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 293 heartbeat osd_stat(store_statfs(0x4f888b000/0x0/0x4ffc00000, data 0x3455ed0/0x35fe000, compress 0x0/0x0/0x0, omap 0x32478, meta 0x3d3db88), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2116506 data_alloc: 234881024 data_used: 17099589
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.974984169s of 11.310605049s, submitted: 105
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131022848 unmapped: 40337408 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 294 ms_handle_reset con 0x56111a592000 session 0x56111b55ee00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131350528 unmapped: 40009728 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 294 ms_handle_reset con 0x56111aac9400 session 0x56111b51a8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 131612672 unmapped: 39747584 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133627904 unmapped: 37732352 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 294 ms_handle_reset con 0x56111af3c400 session 0x561119511340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 295 ms_handle_reset con 0x561118cfd400 session 0x5611187a7180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 27664384 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 295 ms_handle_reset con 0x56111a592000 session 0x56111b39c1c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2201724 data_alloc: 251658240 data_used: 29287354
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 296 ms_handle_reset con 0x56111aac9400 session 0x56111990ee00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 296 ms_handle_reset con 0x56111af3d400 session 0x56111a5da000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 27648000 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 296 heartbeat osd_stat(store_statfs(0x4f885f000/0x0/0x4ffc00000, data 0x347efda/0x362b000, compress 0x0/0x0/0x0, omap 0x31fcc, meta 0x3d3e034), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143745024 unmapped: 27615232 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 297 ms_handle_reset con 0x56111aedf800 session 0x56111a52afc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 27557888 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 297 ms_handle_reset con 0x561118cfd400 session 0x56111b64ee00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 297 ms_handle_reset con 0x56111a592000 session 0x56111a52a1c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 27557888 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 297 ms_handle_reset con 0x56111aac9400 session 0x56111f8628c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 297 heartbeat osd_stat(store_statfs(0x4f885a000/0x0/0x4ffc00000, data 0x3480b68/0x362d000, compress 0x0/0x0/0x0, omap 0x32174, meta 0x3d3de8c), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 297 heartbeat osd_stat(store_statfs(0x4f885a000/0x0/0x4ffc00000, data 0x3480b68/0x362d000, compress 0x0/0x0/0x0, omap 0x32174, meta 0x3d3de8c), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 27557888 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 297 ms_handle_reset con 0x56111af3d400 session 0x561118d896c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 297 ms_handle_reset con 0x56111a4e8800 session 0x56111ab0b6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2211778 data_alloc: 251658240 data_used: 29279747
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 144859136 unmapped: 26501120 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.092938423s of 11.034484863s, submitted: 80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 144891904 unmapped: 26468352 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 299 ms_handle_reset con 0x56111a4e8800 session 0x56111f863340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152649728 unmapped: 18710528 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 299 ms_handle_reset con 0x561118cfd400 session 0x56111f863180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 155697152 unmapped: 15663104 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 299 ms_handle_reset con 0x56111a592000 session 0x56111b51da40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 300 ms_handle_reset con 0x56111aac9400 session 0x561118b86c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152633344 unmapped: 18726912 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 300 heartbeat osd_stat(store_statfs(0x4f7e14000/0x0/0x4ffc00000, data 0x3ec71f5/0x4078000, compress 0x0/0x0/0x0, omap 0x3290b, meta 0x3d3d6f5), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 300 ms_handle_reset con 0x56111af3d400 session 0x56111a52bdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 300 ms_handle_reset con 0x561118cfd400 session 0x561118d88c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 300 ms_handle_reset con 0x56111a4e8800 session 0x561118d88540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2325716 data_alloc: 251658240 data_used: 36201747
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152690688 unmapped: 18669568 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 300 ms_handle_reset con 0x56111a592000 session 0x56111ab0bc00
Jan 31 00:14:16 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19470 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 300 ms_handle_reset con 0x56111aac9400 session 0x56111ae59880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 300 ms_handle_reset con 0x5611197c0800 session 0x56111b39cc40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152854528 unmapped: 18505728 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152862720 unmapped: 18497536 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 300 ms_handle_reset con 0x561118cfd400 session 0x56111f863880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 301 ms_handle_reset con 0x56111a4e8800 session 0x56111a5dae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153157632 unmapped: 18202624 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 301 ms_handle_reset con 0x56111a592000 session 0x56111b1941c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153190400 unmapped: 18169856 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 302 ms_handle_reset con 0x56111aac9400 session 0x56111b195a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2287262 data_alloc: 251658240 data_used: 36603155
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153214976 unmapped: 18145280 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 302 ms_handle_reset con 0x56111af3c000 session 0x56111ab3ec40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 302 heartbeat osd_stat(store_statfs(0x4f853f000/0x0/0x4ffc00000, data 0x379a51b/0x394d000, compress 0x0/0x0/0x0, omap 0x3374b, meta 0x3d3c8b5), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 302 ms_handle_reset con 0x561118cfd400 session 0x56111990f6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153231360 unmapped: 18128896 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 302 heartbeat osd_stat(store_statfs(0x4f853f000/0x0/0x4ffc00000, data 0x379a51b/0x394d000, compress 0x0/0x0/0x0, omap 0x3395e, meta 0x3d3c6a2), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.366439819s of 10.909622192s, submitted: 157
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 302 ms_handle_reset con 0x56111a4e8800 session 0x5611198328c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153231360 unmapped: 18128896 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153231360 unmapped: 18128896 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153231360 unmapped: 18128896 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 302 heartbeat osd_stat(store_statfs(0x4f853f000/0x0/0x4ffc00000, data 0x379a51b/0x394d000, compress 0x0/0x0/0x0, omap 0x3398b, meta 0x3d3c675), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 303 ms_handle_reset con 0x56111a592000 session 0x56111b51ca80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2295211 data_alloc: 251658240 data_used: 36604353
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153255936 unmapped: 18104320 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 304 ms_handle_reset con 0x56111aac9400 session 0x56111b7b0700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153255936 unmapped: 18104320 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153272320 unmapped: 18087936 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 305 ms_handle_reset con 0x56111af3c000 session 0x56111990ee00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 305 heartbeat osd_stat(store_statfs(0x4f8539000/0x0/0x4ffc00000, data 0x379db6e/0x3953000, compress 0x0/0x0/0x0, omap 0x33f67, meta 0x3d3c099), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153288704 unmapped: 18071552 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 305 ms_handle_reset con 0x561118cfd400 session 0x56111a52ae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153296896 unmapped: 18063360 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2300551 data_alloc: 251658240 data_used: 36605210
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153305088 unmapped: 18055168 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 306 ms_handle_reset con 0x56111a4e8800 session 0x561118d88380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f8536000/0x0/0x4ffc00000, data 0x379f75e/0x3956000, compress 0x0/0x0/0x0, omap 0x341c0, meta 0x3d3be40), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153305088 unmapped: 18055168 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153305088 unmapped: 18055168 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 306 ms_handle_reset con 0x56111a592000 session 0x56111b64e700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153305088 unmapped: 18055168 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f8531000/0x0/0x4ffc00000, data 0x37a11f9/0x3959000, compress 0x0/0x0/0x0, omap 0x34449, meta 0x3d3bbb7), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153305088 unmapped: 18055168 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 306 handle_osd_map epochs [306,307], i have 307, src has [1,307]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.094177246s of 12.924662590s, submitted: 74
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2303321 data_alloc: 251658240 data_used: 36605823
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153370624 unmapped: 17989632 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 307 ms_handle_reset con 0x56111aac9400 session 0x561119511880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153370624 unmapped: 17989632 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 307 ms_handle_reset con 0x56111af3c000 session 0x56111990e700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153444352 unmapped: 17915904 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 308 ms_handle_reset con 0x561118cfd400 session 0x56111b7b1a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153460736 unmapped: 17899520 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 308 heartbeat osd_stat(store_statfs(0x4f8529000/0x0/0x4ffc00000, data 0x37a4969/0x395f000, compress 0x0/0x0/0x0, omap 0x3480f, meta 0x3d3b7f1), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 308 handle_osd_map epochs [309,309], i have 309, src has [1,309]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 157671424 unmapped: 13688832 heap: 171360256 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2513509 data_alloc: 251658240 data_used: 36614015
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162971648 unmapped: 41992192 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 309 heartbeat osd_stat(store_statfs(0x4f4d2a000/0x0/0x4ffc00000, data 0x6fa6559/0x7162000, compress 0x0/0x0/0x0, omap 0x34895, meta 0x3d3b76b), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 155738112 unmapped: 49225728 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 309 ms_handle_reset con 0x56111aac9400 session 0x56111b39c8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 160006144 unmapped: 44957696 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 310 ms_handle_reset con 0x5611213e4000 session 0x56111b55fdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 157048832 unmapped: 47915008 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 310 ms_handle_reset con 0x5611213e4400 session 0x561119833a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 161521664 unmapped: 43442176 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.858100891s of 10.006391525s, submitted: 77
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 311 ms_handle_reset con 0x5611213e4800 session 0x561119549c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3310169 data_alloc: 251658240 data_used: 38412788
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 161546240 unmapped: 43417600 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 311 ms_handle_reset con 0x56111af3dc00 session 0x56111b51a700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 311 ms_handle_reset con 0x56111af3d800 session 0x56111b7b0000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 43343872 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 311 heartbeat osd_stat(store_statfs(0x4eb91c000/0x0/0x4ffc00000, data 0x103aed21/0x1056e000, compress 0x0/0x0/0x0, omap 0x34a3d, meta 0x3d3b5c3), peers [0,1] op hist [0,0,0,0,1])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 311 ms_handle_reset con 0x561118cfd400 session 0x56111b51a380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 311 ms_handle_reset con 0x56111aac9400 session 0x56111b39ddc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 311 ms_handle_reset con 0x5611213e4000 session 0x56111b55efc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 161759232 unmapped: 43204608 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166166528 unmapped: 38797312 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 311 heartbeat osd_stat(store_statfs(0x4e805a000/0x0/0x4ffc00000, data 0x13c74c9f/0x13e31000, compress 0x0/0x0/0x0, omap 0x34139, meta 0x3d3bec7), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 311 ms_handle_reset con 0x56111a4e8800 session 0x561118b86540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 311 ms_handle_reset con 0x56111a592000 session 0x56111b64e380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 157990912 unmapped: 46972928 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3970671 data_alloc: 251658240 data_used: 38283634
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 158072832 unmapped: 46891008 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 312 ms_handle_reset con 0x561118cfd400 session 0x561118b86c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 158162944 unmapped: 46800896 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 312 ms_handle_reset con 0x56111aac9400 session 0x56111ab0ac40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 312 ms_handle_reset con 0x56111af3d800 session 0x56111b55fdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 145809408 unmapped: 59154432 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 313 ms_handle_reset con 0x561118cfd400 session 0x56111b64e700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 313 ms_handle_reset con 0x56111a4e8800 session 0x56111b7b01c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 314 ms_handle_reset con 0x56111a592000 session 0x56111990fa40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 145907712 unmapped: 59056128 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 145907712 unmapped: 59056128 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 314 heartbeat osd_stat(store_statfs(0x4f9110000/0x0/0x4ffc00000, data 0x1a1b1f2/0x1bd9000, compress 0x0/0x0/0x0, omap 0x34595, meta 0x4edba6b), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2136432 data_alloc: 234881024 data_used: 17100130
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.281840801s of 10.235033989s, submitted: 247
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 315 ms_handle_reset con 0x56111aac9400 session 0x56111b39c8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 145907712 unmapped: 59056128 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 315 ms_handle_reset con 0x56111af3d800 session 0x5611187a7880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 145907712 unmapped: 59056128 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 315 ms_handle_reset con 0x56111b334000 session 0x56111b64f880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 315 ms_handle_reset con 0x56111b334800 session 0x56111b7b0e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 315 ms_handle_reset con 0x561118cfd400 session 0x56111f862000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134496256 unmapped: 70467584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134496256 unmapped: 70467584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 315 heartbeat osd_stat(store_statfs(0x4f9ce4000/0x0/0x4ffc00000, data 0xe49c34/0x1006000, compress 0x0/0x0/0x0, omap 0x34595, meta 0x4edba6b), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134496256 unmapped: 70467584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111a4e8800 session 0x56111f863340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2025554 data_alloc: 218103808 data_used: 4718689
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111a592000 session 0x561119549500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 70385664 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x561118cfd400 session 0x56111f8636c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 70385664 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 70385664 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9c60000/0x0/0x4ffc00000, data 0xecb72b/0x1089000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 70385664 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 70385664 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2025062 data_alloc: 218103808 data_used: 4718689
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 70385664 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 70385664 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.772894859s of 12.040178299s, submitted: 91
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111a4e8800 session 0x56111b194700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111b334000 session 0x56111ab3ea80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111b334800 session 0x561119549a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111aac9400 session 0x56111f862380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x561118cfd400 session 0x56111b64fdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111a4e8800 session 0x56111b55f180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9c60000/0x0/0x4ffc00000, data 0xecb72b/0x1089000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111b334000 session 0x56111ae59880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111b334800 session 0x561119832540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111af3dc00 session 0x561118d896c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2078588 data_alloc: 218103808 data_used: 4722687
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9415000/0x0/0x4ffc00000, data 0x171872b/0x18d6000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x561118cfd400 session 0x56111990f500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111a4e8800 session 0x56111ae58fc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9415000/0x0/0x4ffc00000, data 0x171872b/0x18d6000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111b334000 session 0x56111a52ba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 ms_handle_reset con 0x56111b334800 session 0x561118ca6e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2078588 data_alloc: 218103808 data_used: 4722687
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 70672384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9415000/0x0/0x4ffc00000, data 0x171872b/0x18d6000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9415000/0x0/0x4ffc00000, data 0x171872b/0x18d6000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2128124 data_alloc: 234881024 data_used: 13115391
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9415000/0x0/0x4ffc00000, data 0x171872b/0x18d6000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9415000/0x0/0x4ffc00000, data 0x171872b/0x18d6000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2128124 data_alloc: 234881024 data_used: 13115391
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 67264512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.599102020s of 18.728061676s, submitted: 15
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9415000/0x0/0x4ffc00000, data 0x171872b/0x18d6000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [0,0,0,0,5])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142139392 unmapped: 62824448 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f8ad7000/0x0/0x4ffc00000, data 0x205772b/0x2215000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142139392 unmapped: 62824448 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142360576 unmapped: 62603264 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142360576 unmapped: 62603264 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2189224 data_alloc: 234881024 data_used: 13430783
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142360576 unmapped: 62603264 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f8a58000/0x0/0x4ffc00000, data 0x20d672b/0x2294000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142360576 unmapped: 62603264 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f8a58000/0x0/0x4ffc00000, data 0x20d672b/0x2294000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f8a58000/0x0/0x4ffc00000, data 0x20d672b/0x2294000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142360576 unmapped: 62603264 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f8a58000/0x0/0x4ffc00000, data 0x20d672b/0x2294000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142360576 unmapped: 62603264 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141508608 unmapped: 63455232 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2186184 data_alloc: 234881024 data_used: 13430783
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141508608 unmapped: 63455232 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141508608 unmapped: 63455232 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141508608 unmapped: 63455232 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.351999283s of 12.158617020s, submitted: 76
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f8a36000/0x0/0x4ffc00000, data 0x20f872b/0x22b6000, compress 0x0/0x0/0x0, omap 0x3473d, meta 0x4edb8c3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141770752 unmapped: 63193088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 317 ms_handle_reset con 0x5611213e5000 session 0x56111b55f6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 317 ms_handle_reset con 0x5611213e4c00 session 0x56111b64fa40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141770752 unmapped: 63193088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 318 ms_handle_reset con 0x561118cfd400 session 0x56111b594fc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2193276 data_alloc: 234881024 data_used: 13430783
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141787136 unmapped: 63176704 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 318 heartbeat osd_stat(store_statfs(0x4f8a27000/0x0/0x4ffc00000, data 0x2100e63/0x22c1000, compress 0x0/0x0/0x0, omap 0x348e5, meta 0x4edb71b), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141787136 unmapped: 63176704 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 319 ms_handle_reset con 0x56111a4e8800 session 0x56111990fa40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 64585728 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 319 ms_handle_reset con 0x56111b334000 session 0x56111ab0b6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 64585728 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 319 handle_osd_map epochs [319,320], i have 319, src has [1,320]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 320 ms_handle_reset con 0x56111b334800 session 0x56111b194c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 320 ms_handle_reset con 0x56111b334800 session 0x56111f863a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 64577536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 320 handle_osd_map epochs [320,321], i have 320, src has [1,321]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 321 ms_handle_reset con 0x561118cfd400 session 0x56111b595dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 321 heartbeat osd_stat(store_statfs(0x4f8a23000/0x0/0x4ffc00000, data 0x21045ef/0x22c7000, compress 0x0/0x0/0x0, omap 0x34a8d, meta 0x4edb573), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2201294 data_alloc: 234881024 data_used: 13430881
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 64577536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 321 ms_handle_reset con 0x56111a4e8800 session 0x56111ae58000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 64569344 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 322 ms_handle_reset con 0x56111b334000 session 0x56111b55ea80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 64520192 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 140443648 unmapped: 64520192 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 322 ms_handle_reset con 0x5611213e4400 session 0x56111a5db180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 322 ms_handle_reset con 0x5611213e4800 session 0x56111b64ec40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.330351830s of 11.612763405s, submitted: 56
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133832704 unmapped: 71131136 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 322 heartbeat osd_stat(store_statfs(0x4f8a1a000/0x0/0x4ffc00000, data 0x210addf/0x22d0000, compress 0x0/0x0/0x0, omap 0x34c35, meta 0x4edb3cb), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 322 ms_handle_reset con 0x561118cfd400 session 0x561118d89dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2053200 data_alloc: 218103808 data_used: 4722687
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133840896 unmapped: 71122944 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 323 ms_handle_reset con 0x56111a4e8800 session 0x56111990f6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 70819840 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 70819840 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 70819840 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 323 heartbeat osd_stat(store_statfs(0x4f9c28000/0x0/0x4ffc00000, data 0xefb85e/0x10c2000, compress 0x0/0x0/0x0, omap 0x350d1, meta 0x4edaf2f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 70819840 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2055272 data_alloc: 218103808 data_used: 4722785
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 323 ms_handle_reset con 0x56111b334800 session 0x56111ac5afc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 323 heartbeat osd_stat(store_statfs(0x4f9c2a000/0x0/0x4ffc00000, data 0xefb85e/0x10c2000, compress 0x0/0x0/0x0, omap 0x350d1, meta 0x4edaf2f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 324 ms_handle_reset con 0x5611213e4400 session 0x561118ca6540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 324 ms_handle_reset con 0x561118cfd400 session 0x56111b55f180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 324 ms_handle_reset con 0x56111a4e8800 session 0x56111b7b01c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 325 ms_handle_reset con 0x56111b334800 session 0x56111a52ae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 325 ms_handle_reset con 0x5611213e4400 session 0x56111b195a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 325 handle_osd_map epochs [325,326], i have 326, src has [1,326]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.949024200s of 10.366166115s, submitted: 44
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 326 heartbeat osd_stat(store_statfs(0x4f9c1f000/0x0/0x4ffc00000, data 0xe7ef88/0x1047000, compress 0x0/0x0/0x0, omap 0x35279, meta 0x4edad87), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 326 ms_handle_reset con 0x56111b334000 session 0x56111ae59340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2061850 data_alloc: 218103808 data_used: 4724735
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 326 ms_handle_reset con 0x561118cfd400 session 0x56111b594000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 326 heartbeat osd_stat(store_statfs(0x4f9cc4000/0x0/0x4ffc00000, data 0xe5cb40/0x1026000, compress 0x0/0x0/0x0, omap 0x35421, meta 0x4edabdf), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 326 handle_osd_map epochs [326,327], i have 326, src has [1,327]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2062426 data_alloc: 218103808 data_used: 4726685
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x56111a4e8800 session 0x56111ae59c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 70803456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x56111b334800 session 0x56111b39c540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x5611213e4400 session 0x56111b7b0e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x5611213e4800 session 0x56111b51ce00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x561118cfd400 session 0x56111f863340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x56111a4e8800 session 0x5611187a7c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 70967296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x56111b334800 session 0x5611187a7880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x5611213e4400 session 0x56111981b340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 70967296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f938c000/0x0/0x4ffc00000, data 0x1791631/0x195e000, compress 0x0/0x0/0x0, omap 0x34aa7, meta 0x4edb559), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x5611213e4c00 session 0x56111ab0bc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 70967296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f938c000/0x0/0x4ffc00000, data 0x1791631/0x195e000, compress 0x0/0x0/0x0, omap 0x34aa7, meta 0x4edb559), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 70959104 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x561118cfd400 session 0x56111b64f880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2126122 data_alloc: 218103808 data_used: 4726685
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f938b000/0x0/0x4ffc00000, data 0x1791641/0x195f000, compress 0x0/0x0/0x0, omap 0x34aa7, meta 0x4edb559), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 70959104 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x56111b334800 session 0x561119549a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x56111a4e8800 session 0x56111b39c8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.922699928s of 11.104353905s, submitted: 59
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x5611213e4400 session 0x56111a5dae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x5611213e5400 session 0x56111b51c700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x561118cfd400 session 0x56111b7b0e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x56111a4e8800 session 0x56111ae59c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 ms_handle_reset con 0x56111b334800 session 0x56111ab3ea80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 70959104 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 134012928 unmapped: 70950912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f8e83000/0x0/0x4ffc00000, data 0x1c9b641/0x1e69000, compress 0x0/0x0/0x0, omap 0x3425f, meta 0x4edbda1), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x5611213e5c00 session 0x56111b51ca80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 68509696 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 68509696 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2210415 data_alloc: 234881024 data_used: 13160349
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 68509696 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 heartbeat osd_stat(store_statfs(0x4f8e7e000/0x0/0x4ffc00000, data 0x1c9d1dd/0x1e6c000, compress 0x0/0x0/0x0, omap 0x34407, meta 0x4edbbf9), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 136486912 unmapped: 68476928 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 136486912 unmapped: 68476928 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bc400 session 0x561118d88380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bc400 session 0x56111b39c000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 63905792 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x561118cfd400 session 0x56111ab3ee00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bc800 session 0x56111ac5b500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bd400 session 0x561118d88540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bd800 session 0x56111981b880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 138182656 unmapped: 66781184 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2473564 data_alloc: 234881024 data_used: 13160381
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 heartbeat osd_stat(store_statfs(0x4f5e3e000/0x0/0x4ffc00000, data 0x4cdd24f/0x4eae000, compress 0x0/0x0/0x0, omap 0x34407, meta 0x4edbbf9), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 138182656 unmapped: 66781184 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 138182656 unmapped: 66781184 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 heartbeat osd_stat(store_statfs(0x4f5e3e000/0x0/0x4ffc00000, data 0x4cdd24f/0x4eae000, compress 0x0/0x0/0x0, omap 0x34407, meta 0x4edbbf9), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.840569496s of 11.517079353s, submitted: 68
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 141402112 unmapped: 63561728 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 142598144 unmapped: 62365696 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 61251584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2537594 data_alloc: 234881024 data_used: 14315453
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x561118cfd400 session 0x56111b7b1500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 61251584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 61251584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 heartbeat osd_stat(store_statfs(0x4f550c000/0x0/0x4ffc00000, data 0x560e24f/0x57df000, compress 0x0/0x0/0x0, omap 0x34407, meta 0x4edbbf9), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 147087360 unmapped: 57876480 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 147087360 unmapped: 57876480 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bc400 session 0x56111b51c1c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bc800 session 0x56111ae58c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bd400 session 0x56111f863c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 147087360 unmapped: 57876480 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2568826 data_alloc: 234881024 data_used: 19599309
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bd000 session 0x56111a52b340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 147218432 unmapped: 57745408 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x561118cfd400 session 0x56111b51a380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b7bc400 session 0x56111b39ddc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 146350080 unmapped: 58613760 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 ms_handle_reset con 0x56111b334800 session 0x56111ac5b6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 146350080 unmapped: 58613760 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 heartbeat osd_stat(store_statfs(0x4f550d000/0x0/0x4ffc00000, data 0x560e262/0x57df000, compress 0x0/0x0/0x0, omap 0x34662, meta 0x4edb99e), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 328 handle_osd_map epochs [328,329], i have 329, src has [1,329]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.684011459s of 10.954308510s, submitted: 95
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 329 ms_handle_reset con 0x5611213e5c00 session 0x56111b194700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 146350080 unmapped: 58613760 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 55148544 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 330 ms_handle_reset con 0x5611197de800 session 0x561119510a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 330 ms_handle_reset con 0x561118cfd400 session 0x56111ae59500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2570698 data_alloc: 234881024 data_used: 24315389
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 330 heartbeat osd_stat(store_statfs(0x4f5a12000/0x0/0x4ffc00000, data 0x51079fa/0x52da000, compress 0x0/0x0/0x0, omap 0x3480a, meta 0x4edb7f6), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 330 handle_osd_map epochs [331,331], i have 331, src has [1,331]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152526848 unmapped: 52436992 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 331 ms_handle_reset con 0x5611197de800 session 0x56111ae58a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152526848 unmapped: 52436992 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152543232 unmapped: 52420608 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 332 ms_handle_reset con 0x56111b334800 session 0x56111b39d180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 332 ms_handle_reset con 0x56111b7bc400 session 0x56111b51c540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 332 ms_handle_reset con 0x5611213e5c00 session 0x561119549c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152567808 unmapped: 52396032 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 333 ms_handle_reset con 0x561118cfd400 session 0x56111b195500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152567808 unmapped: 52396032 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 333 heartbeat osd_stat(store_statfs(0x4f5a05000/0x0/0x4ffc00000, data 0x510cdb2/0x52e3000, compress 0x0/0x0/0x0, omap 0x349b2, meta 0x4edb64e), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 335 ms_handle_reset con 0x5611197de800 session 0x56111ab3efc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 335 heartbeat osd_stat(store_statfs(0x4f5a05000/0x0/0x4ffc00000, data 0x510cdb2/0x52e3000, compress 0x0/0x0/0x0, omap 0x349b2, meta 0x4edb64e), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2588514 data_alloc: 234881024 data_used: 24315974
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152567808 unmapped: 52396032 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 335 ms_handle_reset con 0x5611213e4400 session 0x56111b195a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 335 ms_handle_reset con 0x5611213e5800 session 0x56111f863880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 152633344 unmapped: 52330496 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 335 ms_handle_reset con 0x56111b334800 session 0x561118d896c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 335 ms_handle_reset con 0x561118cfd400 session 0x56111b7b1500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 335 ms_handle_reset con 0x5611197de800 session 0x56111b51c700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 148684800 unmapped: 56279040 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.563458443s of 10.001652718s, submitted: 179
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 158531584 unmapped: 46432256 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 44974080 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 335 handle_osd_map epochs [335,336], i have 335, src has [1,336]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2576999 data_alloc: 234881024 data_used: 16956388
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 336 heartbeat osd_stat(store_statfs(0x4f4601000/0x0/0x4ffc00000, data 0x5341e96/0x551b000, compress 0x0/0x0/0x0, omap 0x33b40, meta 0x607c4c0), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 336 ms_handle_reset con 0x5611213e4400 session 0x56111e5cf880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 45105152 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 45105152 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 45105152 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 336 heartbeat osd_stat(store_statfs(0x4f4601000/0x0/0x4ffc00000, data 0x5341e96/0x551b000, compress 0x0/0x0/0x0, omap 0x33b40, meta 0x607c4c0), peers [0,1] op hist [0,0,0,0,1])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 336 handle_osd_map epochs [337,337], i have 337, src has [1,337]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 336 handle_osd_map epochs [337,337], i have 337, src has [1,337]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 45105152 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 337 ms_handle_reset con 0x5611213e5800 session 0x561118b86000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 45105152 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2579971 data_alloc: 234881024 data_used: 16956388
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 158941184 unmapped: 46022656 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 338 ms_handle_reset con 0x56111b7bc400 session 0x56111b1948c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 158941184 unmapped: 46022656 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 338 handle_osd_map epochs [338,339], i have 338, src has [1,339]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 158990336 unmapped: 45973504 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x561118cfd400 session 0x561119511340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x5611197de800 session 0x56111b595a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.417187691s of 10.007616043s, submitted: 192
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x56111b7bc800 session 0x56111f8636c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x56111b7bd400 session 0x56111f8621c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x56111b7bc400 session 0x561118b861c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 158752768 unmapped: 46211072 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x561118cfd400 session 0x56111990ea80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 339 heartbeat osd_stat(store_statfs(0x4f4626000/0x0/0x4ffc00000, data 0x53470ed/0x5524000, compress 0x0/0x0/0x0, omap 0x33f7c, meta 0x607c084), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x5611197de800 session 0x56111b51c540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x56111b7bc800 session 0x56111ae59500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x56111b7bd400 session 0x56111b194700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 45817856 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2591829 data_alloc: 234881024 data_used: 16977453
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 45817856 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 339 heartbeat osd_stat(store_statfs(0x4f45fa000/0x0/0x4ffc00000, data 0x53710e9/0x554f000, compress 0x0/0x0/0x0, omap 0x3407b, meta 0x607bf85), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 339 ms_handle_reset con 0x5611213e5800 session 0x56111ae58fc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 42377216 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 340 ms_handle_reset con 0x5611197de800 session 0x561118d88380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 340 ms_handle_reset con 0x561118cfd400 session 0x56111990f500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 42377216 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162603008 unmapped: 42360832 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 340 ms_handle_reset con 0x56111b7bc800 session 0x56111f862540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 340 heartbeat osd_stat(store_statfs(0x4f45f5000/0x0/0x4ffc00000, data 0x5372ca1/0x5552000, compress 0x0/0x0/0x0, omap 0x3407b, meta 0x607bf85), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162635776 unmapped: 42328064 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 341 ms_handle_reset con 0x56111b7bd400 session 0x56111b51c380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2640081 data_alloc: 234881024 data_used: 24678426
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162643968 unmapped: 42319872 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 341 handle_osd_map epochs [341,342], i have 341, src has [1,342]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 342 ms_handle_reset con 0x5611197dfc00 session 0x56111a52afc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 342 heartbeat osd_stat(store_statfs(0x4f45f5000/0x0/0x4ffc00000, data 0x5374720/0x5555000, compress 0x0/0x0/0x0, omap 0x34299, meta 0x607bd67), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 342 ms_handle_reset con 0x561118cfd400 session 0x56111a52ae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 342 ms_handle_reset con 0x5611197de800 session 0x56111b55efc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162652160 unmapped: 42311680 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162652160 unmapped: 42311680 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162652160 unmapped: 42311680 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 162652160 unmapped: 42311680 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.657952309s of 11.761885643s, submitted: 51
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 342 heartbeat osd_stat(store_statfs(0x4f4421000/0x0/0x4ffc00000, data 0x554a300/0x572b000, compress 0x0/0x0/0x0, omap 0x34299, meta 0x607bd67), peers [0,1] op hist [1])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2700779 data_alloc: 234881024 data_used: 24791322
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166469632 unmapped: 38494208 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166526976 unmapped: 38436864 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165847040 unmapped: 39116800 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165847040 unmapped: 39116800 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 342 handle_osd_map epochs [342,343], i have 343, src has [1,343]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165847040 unmapped: 39116800 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 343 heartbeat osd_stat(store_statfs(0x4f3b50000/0x0/0x4ffc00000, data 0x5e1b300/0x5ffc000, compress 0x0/0x0/0x0, omap 0x34299, meta 0x607bd67), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2720333 data_alloc: 234881024 data_used: 25899290
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165871616 unmapped: 39092224 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165871616 unmapped: 39092224 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 343 ms_handle_reset con 0x56111b7bc800 session 0x56111a5dae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165871616 unmapped: 39092224 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 343 ms_handle_reset con 0x56111b7bd400 session 0x56111e5ce700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 343 ms_handle_reset con 0x5611197df800 session 0x56111b55f340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165249024 unmapped: 39714816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 343 ms_handle_reset con 0x561118cfd400 session 0x56111b7b0000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 343 ms_handle_reset con 0x56111ad4d400 session 0x56111ab3ec40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165249024 unmapped: 39714816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 343 heartbeat osd_stat(store_statfs(0x4f3b2b000/0x0/0x4ffc00000, data 0x5e3cdf1/0x6021000, compress 0x0/0x0/0x0, omap 0x3430f, meta 0x607bcf1), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2721481 data_alloc: 234881024 data_used: 25899290
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.289365768s of 10.548576355s, submitted: 113
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 343 ms_handle_reset con 0x56111b7ac400 session 0x56111b39c8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165249024 unmapped: 39714816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 343 handle_osd_map epochs [343,344], i have 343, src has [1,344]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165249024 unmapped: 39714816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 344 ms_handle_reset con 0x56111b7a6c00 session 0x56111b7b0e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 344 ms_handle_reset con 0x56111b7a7400 session 0x56111f863dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165421056 unmapped: 39542784 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 345 ms_handle_reset con 0x561118cfd400 session 0x56111e5ce540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 345 ms_handle_reset con 0x56111b7ac000 session 0x56111a52b340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165421056 unmapped: 39542784 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 345 handle_osd_map epochs [345,346], i have 345, src has [1,346]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 346 ms_handle_reset con 0x56111ad4d400 session 0x56111b51a700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 346 ms_handle_reset con 0x56111b7a6c00 session 0x561118b86c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 346 heartbeat osd_stat(store_statfs(0x4f3b15000/0x0/0x4ffc00000, data 0x5e4c17b/0x6035000, compress 0x0/0x0/0x0, omap 0x34767, meta 0x607b899), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165478400 unmapped: 39485440 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2734496 data_alloc: 234881024 data_used: 26034629
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165478400 unmapped: 39485440 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165478400 unmapped: 39485440 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 346 ms_handle_reset con 0x56111b7ac400 session 0x56111ab0afc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 346 handle_osd_map epochs [346,347], i have 347, src has [1,347]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 347 ms_handle_reset con 0x561118cfd400 session 0x56111b64e1c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165478400 unmapped: 39485440 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 348 ms_handle_reset con 0x56111ad4d400 session 0x56111b55f880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165478400 unmapped: 39485440 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 348 heartbeat osd_stat(store_statfs(0x4f3b0c000/0x0/0x4ffc00000, data 0x5e4f94d/0x603c000, compress 0x0/0x0/0x0, omap 0x34af5, meta 0x607b50b), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165478400 unmapped: 39485440 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2741367 data_alloc: 234881024 data_used: 26034727
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 348 heartbeat osd_stat(store_statfs(0x4f3b0d000/0x0/0x4ffc00000, data 0x5e5294d/0x603f000, compress 0x0/0x0/0x0, omap 0x34af5, meta 0x607b50b), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165511168 unmapped: 39452672 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.281254768s of 10.366102219s, submitted: 43
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 349 ms_handle_reset con 0x56111b7a6c00 session 0x56111b7b0c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165740544 unmapped: 39223296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165896192 unmapped: 39067648 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 349 heartbeat osd_stat(store_statfs(0x4f3b09000/0x0/0x4ffc00000, data 0x5e544db/0x6041000, compress 0x0/0x0/0x0, omap 0x34b7b, meta 0x607b485), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 349 ms_handle_reset con 0x56111b7ac000 session 0x56111b1941c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 349 heartbeat osd_stat(store_statfs(0x4f3b08000/0x0/0x4ffc00000, data 0x5e544eb/0x6042000, compress 0x0/0x0/0x0, omap 0x34b7b, meta 0x607b485), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165896192 unmapped: 39067648 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 39059456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2757064 data_alloc: 251658240 data_used: 27389893
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 350 ms_handle_reset con 0x56111b7a0400 session 0x56111ae59340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 39059456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 351 ms_handle_reset con 0x56111b7a0400 session 0x56111b51a540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 39059456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 352 ms_handle_reset con 0x561118cfd400 session 0x56111a5daa80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 352 ms_handle_reset con 0x56111ad4d400 session 0x56111b39ddc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 352 ms_handle_reset con 0x56111b7a6c00 session 0x56111ae58c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 352 ms_handle_reset con 0x56111b7a1000 session 0x56111b7b01c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 39059456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 352 ms_handle_reset con 0x56111ad4d400 session 0x561119511880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 39059456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 352 heartbeat osd_stat(store_statfs(0x4f3afb000/0x0/0x4ffc00000, data 0x5e59d06/0x604d000, compress 0x0/0x0/0x0, omap 0x3504d, meta 0x607afb3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 352 handle_osd_map epochs [352,353], i have 352, src has [1,353]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 353 ms_handle_reset con 0x56111b7a0400 session 0x56111b39d180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 165961728 unmapped: 39002112 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 353 handle_osd_map epochs [353,354], i have 353, src has [1,354]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 354 ms_handle_reset con 0x56111b7a6c00 session 0x561118d896c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 354 ms_handle_reset con 0x561118cfd400 session 0x561118b87a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2793907 data_alloc: 251658240 data_used: 27387207
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166035456 unmapped: 38928384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166035456 unmapped: 38928384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.075937271s of 11.210276604s, submitted: 70
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 354 ms_handle_reset con 0x56111b7a0c00 session 0x561119548700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166182912 unmapped: 38780928 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 355 ms_handle_reset con 0x561118cfd400 session 0x561118b86000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166191104 unmapped: 38772736 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 355 heartbeat osd_stat(store_statfs(0x4f3aed000/0x0/0x4ffc00000, data 0x632d512/0x605f000, compress 0x0/0x0/0x0, omap 0x35663, meta 0x607a99d), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 355 ms_handle_reset con 0x56111b7a0c00 session 0x56111ae59180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 355 ms_handle_reset con 0x56111b7ac000 session 0x56111b55ee00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166191104 unmapped: 38772736 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2828665 data_alloc: 251658240 data_used: 27388133
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166191104 unmapped: 38772736 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 356 ms_handle_reset con 0x56111b7a0400 session 0x56111e5cf6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 356 ms_handle_reset con 0x56111b7a6c00 session 0x56111e5ce1c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 356 handle_osd_map epochs [356,357], i have 356, src has [1,357]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 357 ms_handle_reset con 0x56111b7a6c00 session 0x56111e5cefc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166215680 unmapped: 38748160 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 358 ms_handle_reset con 0x561118cfd400 session 0x56111b64ea80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 358 ms_handle_reset con 0x56111ad4d400 session 0x56111b39d6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 358 heartbeat osd_stat(store_statfs(0x4f3ae1000/0x0/0x4ffc00000, data 0x6332848/0x6069000, compress 0x0/0x0/0x0, omap 0x3aadb, meta 0x6075525), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166215680 unmapped: 38748160 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 358 heartbeat osd_stat(store_statfs(0x4f3adc000/0x0/0x4ffc00000, data 0x63343e4/0x606c000, compress 0x0/0x0/0x0, omap 0x3aadb, meta 0x6075525), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166215680 unmapped: 38748160 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166215680 unmapped: 38748160 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 359 ms_handle_reset con 0x56111b7a0400 session 0x56111990e700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2839427 data_alloc: 251658240 data_used: 27389319
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 359 ms_handle_reset con 0x56111b7a0c00 session 0x56111f8628c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166223872 unmapped: 38739968 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 359 ms_handle_reset con 0x56111ad4d400 session 0x56111b64e700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 359 heartbeat osd_stat(store_statfs(0x4f3adb000/0x0/0x4ffc00000, data 0x6335fd4/0x606f000, compress 0x0/0x0/0x0, omap 0x3acc3, meta 0x607533d), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 360 ms_handle_reset con 0x561118cfd400 session 0x56111b7b0700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167370752 unmapped: 37593088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 360 ms_handle_reset con 0x56111b7a0400 session 0x56111f862000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.749612808s of 10.231152534s, submitted: 62
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 360 ms_handle_reset con 0x56111b7a6c00 session 0x56111b7b0a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167395328 unmapped: 37568512 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 360 handle_osd_map epochs [360,361], i have 360, src has [1,361]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167985152 unmapped: 36978688 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 361 ms_handle_reset con 0x561119499c00 session 0x5611187a7c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168230912 unmapped: 36732928 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 362 ms_handle_reset con 0x561118cfd400 session 0x56111a5da000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 362 ms_handle_reset con 0x56111ad4d400 session 0x56111ae59180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2854296 data_alloc: 251658240 data_used: 28381050
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168230912 unmapped: 36732928 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 362 heartbeat osd_stat(store_statfs(0x4f3ad4000/0x0/0x4ffc00000, data 0x633ae12/0x6076000, compress 0x0/0x0/0x0, omap 0x3b131, meta 0x6074ecf), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 362 handle_osd_map epochs [363,363], i have 363, src has [1,363]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 36700160 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168910848 unmapped: 36052992 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 364 ms_handle_reset con 0x56111c997c00 session 0x56111ab3ee00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 364 ms_handle_reset con 0x56111b79cc00 session 0x56111a5da700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 365 ms_handle_reset con 0x56111b7a6c00 session 0x56111b195880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168968192 unmapped: 35995648 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 365 ms_handle_reset con 0x56111b7a0400 session 0x561119548700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 366 ms_handle_reset con 0x561118cfd400 session 0x56111b64f500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 366 heartbeat osd_stat(store_statfs(0x4f3ac8000/0x0/0x4ffc00000, data 0x6341e52/0x6080000, compress 0x0/0x0/0x0, omap 0x3b501, meta 0x6074aff), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169066496 unmapped: 35897344 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2873249 data_alloc: 251658240 data_used: 29939966
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169164800 unmapped: 35799040 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 367 heartbeat osd_stat(store_statfs(0x4f3ac7000/0x0/0x4ffc00000, data 0x6343919/0x6083000, compress 0x0/0x0/0x0, omap 0x3b53c, meta 0x6074ac4), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169172992 unmapped: 35790848 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 368 handle_osd_map epochs [368,369], i have 368, src has [1,369]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 369 ms_handle_reset con 0x56111ad4d400 session 0x56111a5dae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 369 ms_handle_reset con 0x56111b79cc00 session 0x561119511880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 35758080 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.730643272s of 11.094212532s, submitted: 130
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 369 heartbeat osd_stat(store_statfs(0x4f3abf000/0x0/0x4ffc00000, data 0x634708d/0x6087000, compress 0x0/0x0/0x0, omap 0x3b724, meta 0x60748dc), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 172507136 unmapped: 32456704 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 369 ms_handle_reset con 0x56111b7a6c00 session 0x56111b39d6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 369 heartbeat osd_stat(store_statfs(0x4f384f000/0x0/0x4ffc00000, data 0x65bd08d/0x62fd000, compress 0x0/0x0/0x0, omap 0x3b724, meta 0x60748dc), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 369 ms_handle_reset con 0x561118cfd400 session 0x56111f8628c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 369 handle_osd_map epochs [369,370], i have 370, src has [1,370]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163274752 unmapped: 41689088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2508781 data_alloc: 234881024 data_used: 19538694
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163274752 unmapped: 41689088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163274752 unmapped: 41689088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163274752 unmapped: 41689088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 370 heartbeat osd_stat(store_statfs(0x4f6386000/0x0/0x4ffc00000, data 0x288ab0e/0x25ca000, compress 0x0/0x0/0x0, omap 0x3b810, meta 0x60747f0), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163274752 unmapped: 41689088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 370 heartbeat osd_stat(store_statfs(0x4f7582000/0x0/0x4ffc00000, data 0x288ab0e/0x25ca000, compress 0x0/0x0/0x0, omap 0x3b810, meta 0x60747f0), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 370 ms_handle_reset con 0x56111ad4d400 session 0x56111ae58a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 370 handle_osd_map epochs [370,371], i have 370, src has [1,371]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b79cc00 session 0x56111e5cf880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163610624 unmapped: 41353216 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2516713 data_alloc: 234881024 data_used: 19542692
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 heartbeat osd_stat(store_statfs(0x4f757c000/0x0/0x4ffc00000, data 0x288c5ef/0x25ce000, compress 0x0/0x0/0x0, omap 0x3bdd6, meta 0x607422a), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163610624 unmapped: 41353216 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b7a0400 session 0x56111f863880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111c997c00 session 0x56111ab0afc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163610624 unmapped: 41353216 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x561118cfd400 session 0x56111b55e700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111ad4d400 session 0x56111990f6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b79cc00 session 0x561118d89180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 heartbeat osd_stat(store_statfs(0x4f7579000/0x0/0x4ffc00000, data 0x288c61e/0x25d1000, compress 0x0/0x0/0x0, omap 0x3bdd6, meta 0x607422a), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163618816 unmapped: 41345024 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b7a0400 session 0x56111b51a700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.235778809s of 10.372652054s, submitted: 81
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163618816 unmapped: 41345024 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x561118d18800 session 0x56111a52aa80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x561118cfd400 session 0x56111ae58380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x561118d18800 session 0x56111990e000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163618816 unmapped: 41345024 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111ad4d400 session 0x56111ab3efc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2518253 data_alloc: 234881024 data_used: 19546804
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163700736 unmapped: 41263104 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b79cc00 session 0x56111b5941c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163700736 unmapped: 41263104 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 heartbeat osd_stat(store_statfs(0x4f757e000/0x0/0x4ffc00000, data 0x288c5ef/0x25ce000, compress 0x0/0x0/0x0, omap 0x3bdd6, meta 0x607422a), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b7a0400 session 0x56111b55f180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163725312 unmapped: 41238528 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x561118cfd400 session 0x56111b594000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163708928 unmapped: 41254912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x561118d18800 session 0x56111b39c380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111ad4d400 session 0x56111a52b880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163725312 unmapped: 41238528 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2515330 data_alloc: 234881024 data_used: 19546788
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163725312 unmapped: 41238528 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b79cc00 session 0x56111b5941c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 heartbeat osd_stat(store_statfs(0x4f757d000/0x0/0x4ffc00000, data 0x288c600/0x25cf000, compress 0x0/0x0/0x0, omap 0x3bf84, meta 0x607407c), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.940250397s of 10.053503990s, submitted: 50
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2519443 data_alloc: 234881024 data_used: 19542692
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b7a0400 session 0x56111b51a8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b7ac000 session 0x56111b5948c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x56111b7a1400 session 0x5611187a6000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 ms_handle_reset con 0x561118cfd400 session 0x56111b39ddc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163741696 unmapped: 41222144 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 372 ms_handle_reset con 0x561118d18800 session 0x56111b194c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163741696 unmapped: 41222144 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 372 heartbeat osd_stat(store_statfs(0x4f7578000/0x0/0x4ffc00000, data 0x288e1cb/0x25d1000, compress 0x0/0x0/0x0, omap 0x3bf84, meta 0x607407c), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 372 ms_handle_reset con 0x56111ad4d400 session 0x56111990e540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 372 ms_handle_reset con 0x561118cfd400 session 0x56111ab0b340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163790848 unmapped: 41172992 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 372 ms_handle_reset con 0x56111b7a1400 session 0x56111b64fc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 372 ms_handle_reset con 0x561118d18800 session 0x56111b55ea80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2505874 data_alloc: 234881024 data_used: 19477156
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163790848 unmapped: 41172992 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 373 ms_handle_reset con 0x56111b7ac000 session 0x56111ae58fc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 373 heartbeat osd_stat(store_statfs(0x4f77eb000/0x0/0x4ffc00000, data 0x2619d77/0x235f000, compress 0x0/0x0/0x0, omap 0x412fc, meta 0x606ed04), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163790848 unmapped: 41172992 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 373 heartbeat osd_stat(store_statfs(0x4f77eb000/0x0/0x4ffc00000, data 0x2619d77/0x235f000, compress 0x0/0x0/0x0, omap 0x412fc, meta 0x606ed04), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 373 ms_handle_reset con 0x56111b79cc00 session 0x56111990f180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163790848 unmapped: 41172992 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 373 handle_osd_map epochs [373,374], i have 373, src has [1,374]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 374 ms_handle_reset con 0x561118d18800 session 0x56111b595a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 374 ms_handle_reset con 0x561118cfd400 session 0x56111b64f880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 160202752 unmapped: 44761088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 374 ms_handle_reset con 0x5611213e4400 session 0x561118d88540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 374 ms_handle_reset con 0x5611213e5c00 session 0x56111b64fdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.537055969s of 10.864589691s, submitted: 97
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 374 ms_handle_reset con 0x56111b7ac000 session 0x56111990ee00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 160202752 unmapped: 44761088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x56111b7a1400 session 0x56111b7b1180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2465158 data_alloc: 234881024 data_used: 14330595
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 160202752 unmapped: 44761088 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x561118cfd400 session 0x56111f862000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x561118d18800 session 0x56111990e8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x56111b7ac000 session 0x561118d89180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153214976 unmapped: 51748864 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 375 heartbeat osd_stat(store_statfs(0x4f8044000/0x0/0x4ffc00000, data 0x18f851f/0x1b07000, compress 0x0/0x0/0x0, omap 0x417ca, meta 0x606e836), peers [0,1] op hist [0,0,0,0,0,1,1])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x5611213e5c00 session 0x56111981b340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x5611213e4400 session 0x56111e5cea80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x561118cfd400 session 0x56111b7b01c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153313280 unmapped: 51650560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x561118d18800 session 0x56111ae58380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x56111b7a1400 session 0x56111f863880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153313280 unmapped: 51650560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x56111b7ac000 session 0x56111b7b0700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x561118cfd400 session 0x56111a5da700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153313280 unmapped: 51650560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2529656 data_alloc: 218103808 data_used: 4760992
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 375 heartbeat osd_stat(store_statfs(0x4f5e4c000/0x0/0x4ffc00000, data 0x3af151f/0x3d00000, compress 0x0/0x0/0x0, omap 0x417ca, meta 0x606e836), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153313280 unmapped: 51650560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 375 ms_handle_reset con 0x56111b7a1400 session 0x561118d896c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 376 ms_handle_reset con 0x5611213e4400 session 0x56111f862380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 376 ms_handle_reset con 0x561118d18800 session 0x56111ab3efc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 376 ms_handle_reset con 0x56111b7a0400 session 0x56111ae58c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 51642368 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 376 ms_handle_reset con 0x561118cfd400 session 0x56111b55f880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 376 ms_handle_reset con 0x561118d18800 session 0x56111b55e380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 51642368 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 376 ms_handle_reset con 0x56111b7a0400 session 0x56111b594000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 377 ms_handle_reset con 0x56111b7a1400 session 0x56111f863880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 153329664 unmapped: 51634176 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.584274292s of 10.083294868s, submitted: 80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 378 ms_handle_reset con 0x5611213e4400 session 0x56111e5cea80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 378 heartbeat osd_stat(store_statfs(0x4f5e48000/0x0/0x4ffc00000, data 0x3af4df2/0x3d02000, compress 0x0/0x0/0x0, omap 0x41a7a, meta 0x606e586), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154378240 unmapped: 50585600 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 378 heartbeat osd_stat(store_statfs(0x4f5e43000/0x0/0x4ffc00000, data 0x3af68c5/0x3d05000, compress 0x0/0x0/0x0, omap 0x41b66, meta 0x606e49a), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2536786 data_alloc: 218103808 data_used: 4760780
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 378 heartbeat osd_stat(store_statfs(0x4f5e43000/0x0/0x4ffc00000, data 0x3af68c5/0x3d05000, compress 0x0/0x0/0x0, omap 0x41b66, meta 0x606e49a), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154378240 unmapped: 50585600 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 378 ms_handle_reset con 0x561118cfd400 session 0x56111ab3efc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154378240 unmapped: 50585600 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 379 ms_handle_reset con 0x561118d18800 session 0x56111f862000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154386432 unmapped: 50577408 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 379 ms_handle_reset con 0x56111b7a1400 session 0x561119548e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 379 ms_handle_reset con 0x56111b7a0400 session 0x561119548700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 379 ms_handle_reset con 0x56111a4e8800 session 0x56111b51a8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154386432 unmapped: 50577408 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 379 ms_handle_reset con 0x56111b7a1400 session 0x56111990ee00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 379 ms_handle_reset con 0x56111b7a0400 session 0x56111f862380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 379 heartbeat osd_stat(store_statfs(0x4f5e43000/0x0/0x4ffc00000, data 0x3af84df/0x3d09000, compress 0x0/0x0/0x0, omap 0x415f6, meta 0x606ea0a), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 380 heartbeat osd_stat(store_statfs(0x4f5e43000/0x0/0x4ffc00000, data 0x3af84df/0x3d09000, compress 0x0/0x0/0x0, omap 0x415f6, meta 0x606ea0a), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154411008 unmapped: 50552832 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 380 ms_handle_reset con 0x56111a4e8400 session 0x56111e5cf180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2554301 data_alloc: 218103808 data_used: 5017486
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111a4e9000 session 0x56111ab0afc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154427392 unmapped: 50536448 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f5e38000/0x0/0x4ffc00000, data 0x3afbb94/0x3d10000, compress 0x0/0x0/0x0, omap 0x4c6c8, meta 0x6063938), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154427392 unmapped: 50536448 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111a4e9800 session 0x56111990e8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111a4e9400 session 0x56111a52ae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111a4e8400 session 0x56111990fa40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111a4e9000 session 0x561119511340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111a4e9800 session 0x56111b7b1180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 154959872 unmapped: 50003968 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111b7a0400 session 0x56111f8628c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f59e5000/0x0/0x4ffc00000, data 0x3f50c58/0x4167000, compress 0x0/0x0/0x0, omap 0x4c923, meta 0x60636dd), peers [0,1] op hist [0,0,0,0,1])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111b7a0400 session 0x5611187a6000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 381 ms_handle_reset con 0x56111a4e8400 session 0x561118b876c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f59e5000/0x0/0x4ffc00000, data 0x3f50c58/0x4167000, compress 0x0/0x0/0x0, omap 0x4c923, meta 0x60636dd), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 155000832 unmapped: 49963008 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 381 handle_osd_map epochs [381,382], i have 382, src has [1,382]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.685203552s of 10.008099556s, submitted: 109
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 382 ms_handle_reset con 0x56111a4e9000 session 0x56111990f880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 382 ms_handle_reset con 0x56111a4e9400 session 0x56111b55f180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 155009024 unmapped: 49954816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2596007 data_alloc: 218103808 data_used: 6243897
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 155009024 unmapped: 49954816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 382 ms_handle_reset con 0x56111a4e9800 session 0x561118d88540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 382 ms_handle_reset con 0x56111a4e9000 session 0x56111e5cfdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 155009024 unmapped: 49954816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 383 ms_handle_reset con 0x56111a4e9400 session 0x56111b51c000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 383 ms_handle_reset con 0x56111a4e8400 session 0x56111b64fdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 383 heartbeat osd_stat(store_statfs(0x4f59e1000/0x0/0x4ffc00000, data 0x3f52675/0x4169000, compress 0x0/0x0/0x0, omap 0x4cea6, meta 0x606315a), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 156082176 unmapped: 48881664 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 383 ms_handle_reset con 0x56111b7a1400 session 0x56111a5da700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 156090368 unmapped: 48873472 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 384 ms_handle_reset con 0x56111a4e9c00 session 0x56111990e380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 384 ms_handle_reset con 0x56111b7a0400 session 0x56111b595180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 156139520 unmapped: 48824320 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 384 ms_handle_reset con 0x56111a4e8400 session 0x561119548a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2600976 data_alloc: 218103808 data_used: 6334426
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166731776 unmapped: 38232064 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 384 ms_handle_reset con 0x56111a4e9000 session 0x56111b55e000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 384 ms_handle_reset con 0x56111a4e9400 session 0x56111a5dae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168017920 unmapped: 36945920 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 384 ms_handle_reset con 0x56111b7a1400 session 0x56111b195880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168853504 unmapped: 36110336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 384 ms_handle_reset con 0x56111a4e8400 session 0x56111990e700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 384 ms_handle_reset con 0x56111a4e9000 session 0x56111b195a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f5306000/0x0/0x4ffc00000, data 0x429ed91/0x44b6000, compress 0x0/0x0/0x0, omap 0x4d7e8, meta 0x6062818), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168861696 unmapped: 36102144 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 384 handle_osd_map epochs [385,385], i have 385, src has [1,385]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 385 ms_handle_reset con 0x56111b7a1400 session 0x56111b64f500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.623124123s of 10.003363609s, submitted: 199
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163094528 unmapped: 41869312 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 386 ms_handle_reset con 0x56111b7bd000 session 0x56111e5cf880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 386 ms_handle_reset con 0x56111b7bd800 session 0x56111981ae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2675304 data_alloc: 234881024 data_used: 12639194
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163414016 unmapped: 41549824 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 386 ms_handle_reset con 0x56111a4e8400 session 0x56111a52a540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 386 ms_handle_reset con 0x56111a4e9000 session 0x56111f862540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163487744 unmapped: 41476096 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 386 heartbeat osd_stat(store_statfs(0x4f568a000/0x0/0x4ffc00000, data 0x42a23fb/0x44be000, compress 0x0/0x0/0x0, omap 0x4e275, meta 0x6061d8b), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163487744 unmapped: 41476096 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 386 ms_handle_reset con 0x56111b7a1400 session 0x56111f8628c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 386 ms_handle_reset con 0x56111b7bd800 session 0x56111b51c380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163684352 unmapped: 41279488 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 387 ms_handle_reset con 0x56111b7bc400 session 0x56111a5dbc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 387 ms_handle_reset con 0x56111b7bd000 session 0x56111b55e700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 387 ms_handle_reset con 0x56111b7bc400 session 0x56111b595dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2677018 data_alloc: 234881024 data_used: 12639194
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 387 heartbeat osd_stat(store_statfs(0x4f5689000/0x0/0x4ffc00000, data 0x42a3feb/0x44c1000, compress 0x0/0x0/0x0, omap 0x4e7e6, meta 0x606181a), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163733504 unmapped: 41230336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 388 ms_handle_reset con 0x56111a4e8400 session 0x56111a52ae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 388 ms_handle_reset con 0x561118cfd400 session 0x56111b194c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 388 ms_handle_reset con 0x561118d18800 session 0x561119832e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 163635200 unmapped: 41328640 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 388 ms_handle_reset con 0x561118cfd400 session 0x56111b39c380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 388 heartbeat osd_stat(store_statfs(0x4f5688000/0x0/0x4ffc00000, data 0x42a5ba3/0x44c4000, compress 0x0/0x0/0x0, omap 0x4ecec, meta 0x6061314), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 388 ms_handle_reset con 0x561118d18800 session 0x56111ab0afc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166223872 unmapped: 38739968 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 388 ms_handle_reset con 0x56111a4e9000 session 0x56111e5ce8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 388 ms_handle_reset con 0x56111b7a1400 session 0x561119511340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.728258133s of 10.006414413s, submitted: 167
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167690240 unmapped: 37273600 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2719929 data_alloc: 234881024 data_used: 13741018
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 37216256 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 37216256 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 37216256 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f515f000/0x0/0x4ffc00000, data 0x47cb622/0x49eb000, compress 0x0/0x0/0x0, omap 0x4f464, meta 0x6060b9c), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 37216256 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 37216256 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2719929 data_alloc: 234881024 data_used: 13741018
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 37216256 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 37216256 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167755776 unmapped: 37208064 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f515f000/0x0/0x4ffc00000, data 0x47cb622/0x49eb000, compress 0x0/0x0/0x0, omap 0x4f464, meta 0x6060b9c), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x56111b7bd800 session 0x561119548e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x56111a4e9400 session 0x56111b594540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x56111b7a0400 session 0x561119510c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 37199872 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.809198380s of 10.028366089s, submitted: 49
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x561118cfd400 session 0x56111ac5ba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 37199872 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x561118d18800 session 0x56111ae59c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x56111b7a1400 session 0x56111b64efc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x56111a4e9000 session 0x561118ca61c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x561118cfd400 session 0x561118d88540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2721301 data_alloc: 234881024 data_used: 13745130
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 37199872 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x561118d18800 session 0x561118d88c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f515d000/0x0/0x4ffc00000, data 0x47cd1be/0x49ee000, compress 0x0/0x0/0x0, omap 0x4f65c, meta 0x60609a4), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 37199872 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 37199872 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 390 ms_handle_reset con 0x56111b7a0400 session 0x56111b7b0c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 391 ms_handle_reset con 0x56111a4e9400 session 0x56111b594000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167788544 unmapped: 37175296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 392 ms_handle_reset con 0x561118cfd400 session 0x56111a52aa80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f515a000/0x0/0x4ffc00000, data 0x47ced7b/0x49ef000, compress 0x0/0x0/0x0, omap 0x4fb06, meta 0x60604fa), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 392 ms_handle_reset con 0x561118d18800 session 0x56111990ee00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 392 ms_handle_reset con 0x56111a4e9000 session 0x56111990e000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167788544 unmapped: 37175296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 392 ms_handle_reset con 0x56111b7a0400 session 0x56111e5ce540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2730363 data_alloc: 234881024 data_used: 13749175
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167788544 unmapped: 37175296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 392 ms_handle_reset con 0x56111b7bcc00 session 0x561118d88540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 392 ms_handle_reset con 0x561118d18800 session 0x56111b64e700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 392 ms_handle_reset con 0x561118cfd400 session 0x561118ca61c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 392 ms_handle_reset con 0x56111a4e9000 session 0x56111b594540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167788544 unmapped: 37175296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167788544 unmapped: 37175296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f5153000/0x0/0x4ffc00000, data 0x47d0966/0x49f4000, compress 0x0/0x0/0x0, omap 0x4fcff, meta 0x6060301), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167788544 unmapped: 37175296 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 393 ms_handle_reset con 0x56111b7a0400 session 0x561119511340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167796736 unmapped: 37167104 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2739773 data_alloc: 234881024 data_used: 13900180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 394 ms_handle_reset con 0x56111a6b2800 session 0x56111b195880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167796736 unmapped: 37167104 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 394 ms_handle_reset con 0x56111a6b2800 session 0x56111b55e000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.383215904s of 11.524516106s, submitted: 64
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 394 ms_handle_reset con 0x561118cfd400 session 0x56111a5dbc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 37158912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 394 ms_handle_reset con 0x56111a4e9000 session 0x56111b7b01c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 37158912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 395 ms_handle_reset con 0x56111a6b3c00 session 0x56111b7b0700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 37158912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 395 heartbeat osd_stat(store_statfs(0x4f5151000/0x0/0x4ffc00000, data 0x47d3fc0/0x49fb000, compress 0x0/0x0/0x0, omap 0x5071d, meta 0x605f8e3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 37158912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2747580 data_alloc: 234881024 data_used: 13973703
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 37158912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 395 ms_handle_reset con 0x56111a6b2c00 session 0x56111a5db180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 395 ms_handle_reset con 0x561118cfd400 session 0x56111b39d340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 395 ms_handle_reset con 0x56111a4e9000 session 0x561119511dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 37158912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 395 heartbeat osd_stat(store_statfs(0x4f514b000/0x0/0x4ffc00000, data 0x47d5b7f/0x49ff000, compress 0x0/0x0/0x0, omap 0x50891, meta 0x605f76f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 37158912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 396 ms_handle_reset con 0x56111a6b2800 session 0x56111b7b1340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 396 ms_handle_reset con 0x56111a6b3000 session 0x56111f863500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 396 ms_handle_reset con 0x56111a6b3c00 session 0x56111e5cfdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f5147000/0x0/0x4ffc00000, data 0x47d775d/0x4a03000, compress 0x0/0x0/0x0, omap 0x50f80, meta 0x605f080), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2768378 data_alloc: 234881024 data_used: 15890154
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 396 ms_handle_reset con 0x561118cfd400 session 0x56111ab0b6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 396 ms_handle_reset con 0x56111a4e9000 session 0x56111ae58000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.716024399s of 10.781334877s, submitted: 42
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168050688 unmapped: 36913152 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 396 ms_handle_reset con 0x56111a6b2800 session 0x56111ae58c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 397 ms_handle_reset con 0x56111a6b3000 session 0x56111a52a1c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168214528 unmapped: 36749312 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 397 heartbeat osd_stat(store_statfs(0x4f5149000/0x0/0x4ffc00000, data 0x47d775d/0x4a03000, compress 0x0/0x0/0x0, omap 0x50f80, meta 0x605f080), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 397 ms_handle_reset con 0x56111b7a5c00 session 0x56111e5ce8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168255488 unmapped: 36708352 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 397 ms_handle_reset con 0x561118cfd400 session 0x56111ab3ea80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 36700160 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 398 ms_handle_reset con 0x56111a4e9000 session 0x56111b51d6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2779814 data_alloc: 234881024 data_used: 16459479
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 398 ms_handle_reset con 0x56111a6b2800 session 0x56111ab0afc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168329216 unmapped: 36634624 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 398 ms_handle_reset con 0x56111a6b3000 session 0x56111981b340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168329216 unmapped: 36634624 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 398 heartbeat osd_stat(store_statfs(0x4f5146000/0x0/0x4ffc00000, data 0x47daee4/0x4a06000, compress 0x0/0x0/0x0, omap 0x51851, meta 0x605e7af), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168329216 unmapped: 36634624 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168329216 unmapped: 36634624 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 398 handle_osd_map epochs [398,399], i have 399, src has [1,399]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168329216 unmapped: 36634624 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f5141000/0x0/0x4ffc00000, data 0x47dc99b/0x4a09000, compress 0x0/0x0/0x0, omap 0x51a3b, meta 0x605e5c5), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2780659 data_alloc: 234881024 data_used: 16460689
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168329216 unmapped: 36634624 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 399 ms_handle_reset con 0x56111b7a1000 session 0x56111b51d340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168329216 unmapped: 36634624 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.611611366s of 10.825484276s, submitted: 107
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 399 ms_handle_reset con 0x561118cfd400 session 0x56111b55fc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168345600 unmapped: 36618240 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168345600 unmapped: 36618240 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168353792 unmapped: 36610048 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 400 ms_handle_reset con 0x56111a4e9000 session 0x56111aaac000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 400 ms_handle_reset con 0x56111a6b2800 session 0x56111e156e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2790394 data_alloc: 234881024 data_used: 16456691
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168427520 unmapped: 36536320 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 401 ms_handle_reset con 0x56111a6b3000 session 0x561118b87a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f513d000/0x0/0x4ffc00000, data 0x47de5a9/0x4a0e000, compress 0x0/0x0/0x0, omap 0x5249f, meta 0x605db61), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 401 ms_handle_reset con 0x561118d18000 session 0x561119511340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168460288 unmapped: 36503552 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 401 ms_handle_reset con 0x561118cfd400 session 0x56111b39cc40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f5138000/0x0/0x4ffc00000, data 0x47e0199/0x4a11000, compress 0x0/0x0/0x0, omap 0x5299a, meta 0x605d666), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168460288 unmapped: 36503552 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 401 ms_handle_reset con 0x56111a4e9000 session 0x56111b7b1dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 401 ms_handle_reset con 0x56111a6b2800 session 0x56111b55f880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168558592 unmapped: 36405248 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168583168 unmapped: 36380672 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2792968 data_alloc: 234881024 data_used: 16457178
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168583168 unmapped: 36380672 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168583168 unmapped: 36380672 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f5138000/0x0/0x4ffc00000, data 0x47e1ba6/0x4a12000, compress 0x0/0x0/0x0, omap 0x52dec, meta 0x605d214), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 402 ms_handle_reset con 0x56111a6b3000 session 0x561118b86540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168689664 unmapped: 36274176 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 402 ms_handle_reset con 0x56111b7bd400 session 0x561118b86000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 402 ms_handle_reset con 0x56111b7bc800 session 0x56111b594a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 402 ms_handle_reset con 0x561118cfd400 session 0x56111f863880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168804352 unmapped: 36159488 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.948968887s of 12.101161003s, submitted: 98
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 402 ms_handle_reset con 0x56111a4e9000 session 0x56111b194700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168804352 unmapped: 36159488 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 403 ms_handle_reset con 0x56111a6b2800 session 0x561118d88540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f5134000/0x0/0x4ffc00000, data 0x47e3752/0x4a16000, compress 0x0/0x0/0x0, omap 0x52fe7, meta 0x605d019), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2802574 data_alloc: 234881024 data_used: 17395162
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168828928 unmapped: 36134912 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 404 ms_handle_reset con 0x56111a6b3000 session 0x56111990ee00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168845312 unmapped: 36118528 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168853504 unmapped: 36110336 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 404 ms_handle_reset con 0x561118cfd400 session 0x56111b51cfc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f512e000/0x0/0x4ffc00000, data 0x47e5350/0x4a1a000, compress 0x0/0x0/0x0, omap 0x5341d, meta 0x605cbe3), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 404 ms_handle_reset con 0x56111a6b2800 session 0x56111b51c700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 404 ms_handle_reset con 0x56111a4e9000 session 0x56111b7b1500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166928384 unmapped: 38035456 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 405 ms_handle_reset con 0x56111a4ed800 session 0x561118d89180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 405 ms_handle_reset con 0x56111b79a000 session 0x56111b51b6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166936576 unmapped: 38027264 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 405 handle_osd_map epochs [405,406], i have 406, src has [1,406]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 406 ms_handle_reset con 0x56111b79bc00 session 0x56111b55e000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 406 ms_handle_reset con 0x56111b7bc800 session 0x56111990f500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2529523 data_alloc: 234881024 data_used: 10969052
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 406 heartbeat osd_stat(store_statfs(0x4f80b2000/0x0/0x4ffc00000, data 0x185fb3c/0x1a98000, compress 0x0/0x0/0x0, omap 0x54040, meta 0x605bfc0), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166944768 unmapped: 38019072 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 406 ms_handle_reset con 0x56111a4e9000 session 0x56111b1948c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 407 ms_handle_reset con 0x561118cfd400 session 0x56111f862a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166944768 unmapped: 38019072 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166961152 unmapped: 38002688 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 407 ms_handle_reset con 0x56111a6b2800 session 0x561119549340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 408 heartbeat osd_stat(store_statfs(0x4f80ad000/0x0/0x4ffc00000, data 0x18616da/0x1a9b000, compress 0x0/0x0/0x0, omap 0x545e9, meta 0x605ba17), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 408 ms_handle_reset con 0x561118cfd400 session 0x56111e5ce380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166969344 unmapped: 37994496 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 408 ms_handle_reset con 0x56111a4e9000 session 0x56111b51d6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 409 ms_handle_reset con 0x56111a4ed800 session 0x56111f862000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.935689926s of 10.119892120s, submitted: 98
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 410 ms_handle_reset con 0x56111b7bc800 session 0x56111b51c540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 410 ms_handle_reset con 0x56111b79bc00 session 0x56111a52b340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 410 ms_handle_reset con 0x56111c913000 session 0x561118b868c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2542153 data_alloc: 234881024 data_used: 10969539
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168198144 unmapped: 36765696 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 410 ms_handle_reset con 0x561118cfd400 session 0x56111981ac40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168206336 unmapped: 36757504 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168206336 unmapped: 36757504 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 410 ms_handle_reset con 0x56111a4ed800 session 0x56111b5941c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 410 heartbeat osd_stat(store_statfs(0x4f80a3000/0x0/0x4ffc00000, data 0x1866bd3/0x1aa3000, compress 0x0/0x0/0x0, omap 0x55711, meta 0x605a8ef), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168230912 unmapped: 36732928 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 411 ms_handle_reset con 0x56111b7bc800 session 0x56111ac5b6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 412 ms_handle_reset con 0x56111af3d000 session 0x561119510a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 412 ms_handle_reset con 0x56111a4e9000 session 0x56111ae59500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f809e000/0x0/0x4ffc00000, data 0x186a415/0x1aaa000, compress 0x0/0x0/0x0, omap 0x560b1, meta 0x6059f4f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 36700160 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 412 ms_handle_reset con 0x561118cfd400 session 0x5611187a6a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 412 ms_handle_reset con 0x56111a4ed800 session 0x561119833a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2550467 data_alloc: 234881024 data_used: 10973733
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 36700160 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 412 ms_handle_reset con 0x56111b7bc800 session 0x561118ca6540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f809e000/0x0/0x4ffc00000, data 0x186a415/0x1aaa000, compress 0x0/0x0/0x0, omap 0x560b1, meta 0x6059f4f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168230912 unmapped: 36732928 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 413 ms_handle_reset con 0x56111c913000 session 0x56111ab0bc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 413 ms_handle_reset con 0x561118cfd400 session 0x56111b55e8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168288256 unmapped: 36675584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 413 ms_handle_reset con 0x56111a4e9000 session 0x56111990f880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168288256 unmapped: 36675584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 413 heartbeat osd_stat(store_statfs(0x4f809c000/0x0/0x4ffc00000, data 0x186c067/0x1aae000, compress 0x0/0x0/0x0, omap 0x56319, meta 0x6059ce7), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168288256 unmapped: 36675584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 413 heartbeat osd_stat(store_statfs(0x4f809c000/0x0/0x4ffc00000, data 0x186c067/0x1aae000, compress 0x0/0x0/0x0, omap 0x56319, meta 0x6059ce7), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.585222244s of 10.777623177s, submitted: 108
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 413 ms_handle_reset con 0x56111a4ed800 session 0x56111e5cf180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2556916 data_alloc: 234881024 data_used: 10973831
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168321024 unmapped: 36642816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 414 ms_handle_reset con 0x56111b335800 session 0x56111b55e380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 415 ms_handle_reset con 0x561119816c00 session 0x56111b39d880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 415 ms_handle_reset con 0x56111b7bc800 session 0x56111981b340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168337408 unmapped: 36626432 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 416 ms_handle_reset con 0x561118cfd400 session 0x561118b861c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 416 ms_handle_reset con 0x56111a4e9000 session 0x56111b39ddc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168345600 unmapped: 36618240 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 416 ms_handle_reset con 0x56111a4ed800 session 0x56111b7b1880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168345600 unmapped: 36618240 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 416 heartbeat osd_stat(store_statfs(0x4f808d000/0x0/0x4ffc00000, data 0x1871439/0x1ab9000, compress 0x0/0x0/0x0, omap 0x56da4, meta 0x605925c), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168353792 unmapped: 36610048 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 417 ms_handle_reset con 0x56111b7ae400 session 0x56111ab0ac40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 417 ms_handle_reset con 0x56111b79e000 session 0x56111b7b0700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2573195 data_alloc: 234881024 data_used: 10974362
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 417 ms_handle_reset con 0x561118cfd400 session 0x56111b55efc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168615936 unmapped: 36347904 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 418 ms_handle_reset con 0x56111a4e9000 session 0x56111e156fc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 418 ms_handle_reset con 0x56111a4ed800 session 0x561118b86380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 418 ms_handle_reset con 0x56111b335800 session 0x56111ab3ec40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168624128 unmapped: 36339712 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 419 ms_handle_reset con 0x561118cfd400 session 0x561118d88540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 419 ms_handle_reset con 0x56111a4e9000 session 0x56111b55ec40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 173244416 unmapped: 31719424 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 419 heartbeat osd_stat(store_statfs(0x4f7bba000/0x0/0x4ffc00000, data 0x1d43afd/0x1f90000, compress 0x0/0x0/0x0, omap 0x57112, meta 0x6058eee), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 173301760 unmapped: 31662080 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 420 ms_handle_reset con 0x56111a4ed800 session 0x561119511180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 34856960 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2613030 data_alloc: 234881024 data_used: 12158890
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.196111679s of 10.401690483s, submitted: 139
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170115072 unmapped: 34848768 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 421 ms_handle_reset con 0x56111b79e000 session 0x56111e5cfa40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170123264 unmapped: 34840576 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 421 heartbeat osd_stat(store_statfs(0x4f7bb8000/0x0/0x4ffc00000, data 0x1d48cfc/0x1f92000, compress 0x0/0x0/0x0, omap 0x57961, meta 0x605869f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170139648 unmapped: 34824192 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f7bb8000/0x0/0x4ffc00000, data 0x1d48cfc/0x1f92000, compress 0x0/0x0/0x0, omap 0x57961, meta 0x605869f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 422 ms_handle_reset con 0x56111b7bc800 session 0x56111a52bdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169304064 unmapped: 35659776 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 424 ms_handle_reset con 0x561118cfd400 session 0x5611187a7180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 424 ms_handle_reset con 0x56111a4e9000 session 0x561119511dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169312256 unmapped: 35651584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2623198 data_alloc: 234881024 data_used: 12160489
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169312256 unmapped: 35651584 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 35643392 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 35643392 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 424 heartbeat osd_stat(store_statfs(0x4f7bad000/0x0/0x4ffc00000, data 0x1d4e14e/0x1f99000, compress 0x0/0x0/0x0, omap 0x5877b, meta 0x6057885), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169361408 unmapped: 35602432 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169361408 unmapped: 35602432 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2649216 data_alloc: 234881024 data_used: 12135913
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169361408 unmapped: 35602432 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169361408 unmapped: 35602432 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.967669487s of 12.112925529s, submitted: 80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169361408 unmapped: 35602432 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 424 ms_handle_reset con 0x56111a4ed800 session 0x56111981a540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 424 heartbeat osd_stat(store_statfs(0x4f7bae000/0x0/0x4ffc00000, data 0x205914e/0x1f9e000, compress 0x0/0x0/0x0, omap 0x58aa4, meta 0x605755c), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169361408 unmapped: 35602432 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 425 ms_handle_reset con 0x56111af3dc00 session 0x56111b7b1dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 426 ms_handle_reset con 0x56111b79e000 session 0x56111b194e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170450944 unmapped: 34512896 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2657200 data_alloc: 234881024 data_used: 12140009
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 34480128 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 427 ms_handle_reset con 0x561118cfd400 session 0x561118d88380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170516480 unmapped: 34447360 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f7b9f000/0x0/0x4ffc00000, data 0x205e51e/0x1fa7000, compress 0x0/0x0/0x0, omap 0x597cc, meta 0x6056834), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171057152 unmapped: 33906688 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x56111a4e9000 session 0x56111aaac000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x56111a4ed800 session 0x56111f863180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x56111af3dc00 session 0x56111b195dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x56111af3c800 session 0x56111b7b0c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x561118cfd400 session 0x56111b39da40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171065344 unmapped: 33898496 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f7800000/0x0/0x4ffc00000, data 0x240012a/0x234a000, compress 0x0/0x0/0x0, omap 0x59be4, meta 0x605641c), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171065344 unmapped: 33898496 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f7800000/0x0/0x4ffc00000, data 0x240012a/0x234a000, compress 0x0/0x0/0x0, omap 0x59be4, meta 0x605641c), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2697314 data_alloc: 234881024 data_used: 12140009
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171065344 unmapped: 33898496 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f7800000/0x0/0x4ffc00000, data 0x240012a/0x234a000, compress 0x0/0x0/0x0, omap 0x59be4, meta 0x605641c), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x56111a4e9000 session 0x56111aaad180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171065344 unmapped: 33898496 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x56111a4ed800 session 0x561118d89dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171065344 unmapped: 33898496 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x56111af3dc00 session 0x56111b51c000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.563882828s of 10.821042061s, submitted: 94
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 428 ms_handle_reset con 0x56111984dc00 session 0x5611187a7500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170975232 unmapped: 33988608 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 429 heartbeat osd_stat(store_statfs(0x4f77d1000/0x0/0x4ffc00000, data 0x242bc14/0x2379000, compress 0x0/0x0/0x0, omap 0x59e42, meta 0x60561be), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170999808 unmapped: 33964032 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 429 ms_handle_reset con 0x56111b334800 session 0x56111b51d6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 429 ms_handle_reset con 0x56111b7aec00 session 0x561119511c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2724221 data_alloc: 234881024 data_used: 15364585
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 429 ms_handle_reset con 0x56111a4ed800 session 0x56111981ac40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171319296 unmapped: 33644544 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171319296 unmapped: 33644544 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171319296 unmapped: 33644544 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 429 ms_handle_reset con 0x56111af3dc00 session 0x56111b195c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 429 ms_handle_reset con 0x56111a6b2400 session 0x56111e5cfdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171376640 unmapped: 33587200 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 429 heartbeat osd_stat(store_statfs(0x4f7d11000/0x0/0x4ffc00000, data 0x1eedc14/0x1e3b000, compress 0x0/0x0/0x0, omap 0x59e6f, meta 0x6056191), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 429 ms_handle_reset con 0x56111a4ed800 session 0x56111981ae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171376640 unmapped: 33587200 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2676901 data_alloc: 234881024 data_used: 15323625
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171393024 unmapped: 33570816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171393024 unmapped: 33570816 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 430 ms_handle_reset con 0x561118d18800 session 0x56111b64e540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 430 ms_handle_reset con 0x56111b7a0400 session 0x56111f863a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 430 ms_handle_reset con 0x56111a6b2400 session 0x56111b7b1a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171417600 unmapped: 33546240 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.976639748s of 10.126955986s, submitted: 81
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 430 ms_handle_reset con 0x56111af3dc00 session 0x56111a52a8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171417600 unmapped: 33546240 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171433984 unmapped: 33529856 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f7891000/0x0/0x4ffc00000, data 0x205c2e5/0x22b3000, compress 0x0/0x0/0x0, omap 0x5a799, meta 0x6055867), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 432 ms_handle_reset con 0x561118d18800 session 0x56111a52a540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2720136 data_alloc: 234881024 data_used: 15517357
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 172007424 unmapped: 32956416 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 432 ms_handle_reset con 0x56111a4ed800 session 0x56111b39c380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 432 ms_handle_reset con 0x56111a6b2400 session 0x56111b194700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167165952 unmapped: 37797888 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 433 ms_handle_reset con 0x56111b7a0400 session 0x561119548380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f7898000/0x0/0x4ffc00000, data 0x16e4e1f/0x193c000, compress 0x0/0x0/0x0, omap 0x5a910, meta 0x60556f0), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167231488 unmapped: 37732352 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 433 ms_handle_reset con 0x56111b334800 session 0x561118ca6e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166567936 unmapped: 38395904 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f820a000/0x0/0x4ffc00000, data 0x16ea9ad/0x1942000, compress 0x0/0x0/0x0, omap 0x5ad7d, meta 0x6055283), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166567936 unmapped: 38395904 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 433 ms_handle_reset con 0x561118d18800 session 0x56111b55e700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2613050 data_alloc: 218103808 data_used: 8288745
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 166567936 unmapped: 38395904 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 434 ms_handle_reset con 0x56111a4ed800 session 0x56111b51c380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167624704 unmapped: 37339136 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 434 ms_handle_reset con 0x56111a6b2400 session 0x56111b64fa40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167624704 unmapped: 37339136 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 435 ms_handle_reset con 0x56111b7a0400 session 0x56111a52b880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.713693619s of 10.039623260s, submitted: 111
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167632896 unmapped: 37330944 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 436 ms_handle_reset con 0x56111b7aec00 session 0x56111b195180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 436 ms_handle_reset con 0x561118d18800 session 0x56111ac5ba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f81fe000/0x0/0x4ffc00000, data 0x16efc52/0x194c000, compress 0x0/0x0/0x0, omap 0x5b552, meta 0x6054aae), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167649280 unmapped: 37314560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2623231 data_alloc: 218103808 data_used: 8289358
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167649280 unmapped: 37314560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167649280 unmapped: 37314560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167649280 unmapped: 37314560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 437 ms_handle_reset con 0x56111a6b2400 session 0x561118d881c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 438 ms_handle_reset con 0x56111b7a0400 session 0x56111e5cfc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167649280 unmapped: 37314560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 438 heartbeat osd_stat(store_statfs(0x4f81f5000/0x0/0x4ffc00000, data 0x16f33b8/0x1953000, compress 0x0/0x0/0x0, omap 0x5b844, meta 0x60547bc), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167649280 unmapped: 37314560 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 438 ms_handle_reset con 0x56111b4a7c00 session 0x56111981ba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 438 handle_osd_map epochs [438,439], i have 438, src has [1,439]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2635617 data_alloc: 218103808 data_used: 8289943
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 439 ms_handle_reset con 0x56111b7aec00 session 0x561118d88c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167862272 unmapped: 37101568 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 439 ms_handle_reset con 0x56111a6b2400 session 0x56111ac5ba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 439 ms_handle_reset con 0x56111b4a7c00 session 0x56111b51c380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 439 ms_handle_reset con 0x56111b7a0400 session 0x56111b194700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 439 ms_handle_reset con 0x56111ac2a400 session 0x56111b39c380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 439 handle_osd_map epochs [439,440], i have 439, src has [1,440]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 440 ms_handle_reset con 0x561118d18800 session 0x56111b39c8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 440 ms_handle_reset con 0x561118d18800 session 0x56111a5dba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167878656 unmapped: 37085184 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 440 ms_handle_reset con 0x56111a6b2400 session 0x56111f8621c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 441 ms_handle_reset con 0x56111ac2a400 session 0x56111f863a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167895040 unmapped: 37068800 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 441 heartbeat osd_stat(store_statfs(0x4f81ce000/0x0/0x4ffc00000, data 0x1711cd0/0x197a000, compress 0x0/0x0/0x0, omap 0x5bf65, meta 0x605409b), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 441 ms_handle_reset con 0x56111b7a0400 session 0x56111ae58fc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.896823883s of 10.002335548s, submitted: 70
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 442 ms_handle_reset con 0x56111b4a7c00 session 0x561118b868c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 442 ms_handle_reset con 0x5611197c0000 session 0x56111b64f880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167919616 unmapped: 37044224 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 442 ms_handle_reset con 0x5611197c0400 session 0x561118d88380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 443 ms_handle_reset con 0x561118d18800 session 0x56111e5ce8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167936000 unmapped: 37027840 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 443 ms_handle_reset con 0x56111ac2a400 session 0x56111b195c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 443 ms_handle_reset con 0x56111b4a7c00 session 0x56111b64e540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 443 handle_osd_map epochs [443,444], i have 443, src has [1,444]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2658607 data_alloc: 218103808 data_used: 8291714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 444 ms_handle_reset con 0x56111a6b2400 session 0x56111b7b1500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167976960 unmapped: 36986880 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 444 ms_handle_reset con 0x561118d18800 session 0x56111a52afc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167976960 unmapped: 36986880 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 444 ms_handle_reset con 0x56111a4ed800 session 0x56111a5da700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 444 ms_handle_reset con 0x5611197c0000 session 0x56111e157180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 444 ms_handle_reset con 0x5611197c0400 session 0x56111990e380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f81c2000/0x0/0x4ffc00000, data 0x1718bae/0x1984000, compress 0x0/0x0/0x0, omap 0x5cf57, meta 0x60530a9), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167976960 unmapped: 36986880 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 444 ms_handle_reset con 0x5611197c0000 session 0x561119549c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 444 ms_handle_reset con 0x56111a4ed800 session 0x56111990e8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 445 ms_handle_reset con 0x561118d18800 session 0x56111e5cea80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167985152 unmapped: 36978688 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 445 ms_handle_reset con 0x56111a6b2400 session 0x56111b51ce00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 167985152 unmapped: 36978688 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 446 ms_handle_reset con 0x56111ac2a400 session 0x56111e5ce540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2664043 data_alloc: 218103808 data_used: 8291926
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 446 handle_osd_map epochs [446,447], i have 446, src has [1,447]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 447 ms_handle_reset con 0x561118d18800 session 0x56111a5dae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2401.1 total, 600.0 interval#012Cumulative writes: 19K writes, 75K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 19K writes, 6579 syncs, 2.95 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9480 writes, 34K keys, 9480 commit groups, 1.0 writes per commit group, ingest: 32.94 MB, 0.05 MB/s#012Interval WAL: 9480 writes, 3925 syncs, 2.42 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168001536 unmapped: 36962304 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 447 ms_handle_reset con 0x5611197c0000 session 0x56111b39da40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 447 ms_handle_reset con 0x56111a4ed800 session 0x56111b51c000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 447 ms_handle_reset con 0x56111b7a0400 session 0x56111a5da000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 36937728 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f81bf000/0x0/0x4ffc00000, data 0x171de92/0x1989000, compress 0x0/0x0/0x0, omap 0x5d6c9, meta 0x6052937), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 36937728 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.854352951s of 10.016167641s, submitted: 96
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 448 ms_handle_reset con 0x56111a6b2400 session 0x56111b64fa40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2670935 data_alloc: 218103808 data_used: 8394212
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 448 ms_handle_reset con 0x5611197c0000 session 0x56111b64ec40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 448 ms_handle_reset con 0x561118d18800 session 0x56111ae59880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f81b1000/0x0/0x4ffc00000, data 0x172b9b3/0x1999000, compress 0x0/0x0/0x0, omap 0x5d92a, meta 0x60526d6), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 36929536 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168042496 unmapped: 36921344 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f81b1000/0x0/0x4ffc00000, data 0x172b9b3/0x1999000, compress 0x0/0x0/0x0, omap 0x5d92a, meta 0x60526d6), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168042496 unmapped: 36921344 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f81b1000/0x0/0x4ffc00000, data 0x172b9b3/0x1999000, compress 0x0/0x0/0x0, omap 0x5d92a, meta 0x60526d6), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2672641 data_alloc: 218103808 data_used: 8394212
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168042496 unmapped: 36921344 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168050688 unmapped: 36913152 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168050688 unmapped: 36913152 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 448 ms_handle_reset con 0x56111a6b2400 session 0x56111aaac000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 448 ms_handle_reset con 0x56111a4ed800 session 0x56111b7b0c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168058880 unmapped: 36904960 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168058880 unmapped: 36904960 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f81b2000/0x0/0x4ffc00000, data 0x172b9c2/0x199a000, compress 0x0/0x0/0x0, omap 0x5d92a, meta 0x60526d6), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2673064 data_alloc: 218103808 data_used: 8394212
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 448 ms_handle_reset con 0x56111b7a0400 session 0x56111990fc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168058880 unmapped: 36904960 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.842747688s of 12.961668015s, submitted: 21
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 448 ms_handle_reset con 0x5611197c0000 session 0x56111a5db6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168075264 unmapped: 36888576 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 449 ms_handle_reset con 0x56111a4ed800 session 0x56111b64fc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 449 ms_handle_reset con 0x56111b79d800 session 0x56111b39c380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168083456 unmapped: 36880384 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x56111a6b2400 session 0x56111b7b1a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x56111b7a9400 session 0x56111a5dba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x561118d18800 session 0x56111e5cfdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168099840 unmapped: 36864000 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x5611197c0000 session 0x56111990f500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168099840 unmapped: 36864000 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2685888 data_alloc: 218103808 data_used: 8398421
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x56111b79d800 session 0x56111a52b880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x56111a6b2400 session 0x56111ae58fc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168108032 unmapped: 36855808 heap: 204963840 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 450 heartbeat osd_stat(store_statfs(0x4f81a4000/0x0/0x4ffc00000, data 0x172f6af/0x19a3000, compress 0x0/0x0/0x0, omap 0x5dfb1, meta 0x605204f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168140800 unmapped: 41025536 heap: 209166336 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 172466176 unmapped: 40902656 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 173572096 unmapped: 39796736 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169443328 unmapped: 43925504 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4162797 data_alloc: 218103808 data_used: 8398421
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 174800896 unmapped: 38567936 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x56111a6b3400 session 0x56111e5ce8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x561118d18800 session 0x56111e157180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x56111a4ed800 session 0x56111e5ce380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x5611197c0000 session 0x56111f862a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 450 heartbeat osd_stat(store_statfs(0x4e61a9000/0x0/0x4ffc00000, data 0x1372f6af/0x139a3000, compress 0x0/0x0/0x0, omap 0x5dfec, meta 0x6052014), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.223875999s of 10.006100655s, submitted: 99
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170663936 unmapped: 42704896 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x56111b79d800 session 0x561119548a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 450 ms_handle_reset con 0x56111a4ec800 session 0x56111a52a8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 451 ms_handle_reset con 0x561118d18800 session 0x56111b51c000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 451 ms_handle_reset con 0x56111a6b2400 session 0x56111b51d6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 451 ms_handle_reset con 0x5611197c0000 session 0x56111990ea80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170745856 unmapped: 42622976 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 451 ms_handle_reset con 0x56111a4ed800 session 0x56111b195a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170745856 unmapped: 42622976 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 451 handle_osd_map epochs [451,452], i have 451, src has [1,452]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 452 ms_handle_reset con 0x56111b79d800 session 0x56111b194c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170770432 unmapped: 42598400 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 452 ms_handle_reset con 0x561118d18800 session 0x56111990e700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 452 ms_handle_reset con 0x5611197c0000 session 0x56111b594540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231869 data_alloc: 218103808 data_used: 8398225
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170786816 unmapped: 42582016 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 453 ms_handle_reset con 0x56111a4ed800 session 0x56111b39dc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 454 heartbeat osd_stat(store_statfs(0x4e61ab000/0x0/0x4ffc00000, data 0x13728502/0x1399d000, compress 0x0/0x0/0x0, omap 0x5e7ee, meta 0x6051812), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 454 ms_handle_reset con 0x56111a6b2400 session 0x561118ca61c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 454 ms_handle_reset con 0x5611213e5000 session 0x56111a52aa80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170975232 unmapped: 42393600 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 454 handle_osd_map epochs [454,455], i have 454, src has [1,455]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 455 ms_handle_reset con 0x56111b4a6000 session 0x56111f8621c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 455 ms_handle_reset con 0x56111b7a4400 session 0x561119549340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 455 ms_handle_reset con 0x561118d18800 session 0x56111a5dae00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 455 heartbeat osd_stat(store_statfs(0x4e61a7000/0x0/0x4ffc00000, data 0x1372a074/0x1399f000, compress 0x0/0x0/0x0, omap 0x5ecae, meta 0x6051352), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171008000 unmapped: 42360832 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 455 ms_handle_reset con 0x5611197c0000 session 0x56111f862380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 455 ms_handle_reset con 0x56111a4ed800 session 0x56111ac5ba40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171008000 unmapped: 42360832 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 455 handle_osd_map epochs [455,456], i have 456, src has [1,456]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 456 ms_handle_reset con 0x561118cfd400 session 0x56111a5dbc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 456 ms_handle_reset con 0x56111a4e9000 session 0x56111b55e000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 456 ms_handle_reset con 0x561118d18800 session 0x561119548700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 171057152 unmapped: 42311680 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 456 heartbeat osd_stat(store_statfs(0x4e61b3000/0x0/0x4ffc00000, data 0x137112c7/0x13987000, compress 0x0/0x0/0x0, omap 0x5f143, meta 0x6060ebd), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 456 ms_handle_reset con 0x5611197c0000 session 0x56111ae58000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 457 ms_handle_reset con 0x56111b7a4400 session 0x561119510c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4166254 data_alloc: 218103808 data_used: 4783976
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168443904 unmapped: 44924928 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 457 handle_osd_map epochs [457,458], i have 457, src has [1,458]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 458 ms_handle_reset con 0x56111b4a6000 session 0x561118b86c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.774546623s of 10.111104012s, submitted: 152
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 458 ms_handle_reset con 0x561118cfd400 session 0x561119510700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168452096 unmapped: 44916736 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168452096 unmapped: 44916736 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 458 heartbeat osd_stat(store_statfs(0x4e57ed000/0x0/0x4ffc00000, data 0x12f419f9/0x131ba000, compress 0x0/0x0/0x0, omap 0x5f8d9, meta 0x71f0727), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168452096 unmapped: 44916736 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 458 ms_handle_reset con 0x5611197c0000 session 0x56111990e380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 458 ms_handle_reset con 0x561118d18800 session 0x56111b595dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 458 handle_osd_map epochs [458,459], i have 458, src has [1,459]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168501248 unmapped: 53272576 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4452218 data_alloc: 218103808 data_used: 4788037
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 168632320 unmapped: 53141504 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 173006848 unmapped: 48766976 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 459 heartbeat osd_stat(store_statfs(0x4ddfef000/0x0/0x4ffc00000, data 0x1a743494/0x1a9bd000, compress 0x0/0x0/0x0, omap 0x5fac6, meta 0x71f053a), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 173416448 unmapped: 48357376 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 173760512 unmapped: 48013312 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 460 heartbeat osd_stat(store_statfs(0x4d87ef000/0x0/0x4ffc00000, data 0x1ff43494/0x201bd000, compress 0x0/0x0/0x0, omap 0x5fac6, meta 0x71f053a), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 178241536 unmapped: 43532288 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5549172 data_alloc: 218103808 data_used: 4788622
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170033152 unmapped: 51740672 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 460 ms_handle_reset con 0x56111a4e9000 session 0x56111981ac40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 460 ms_handle_reset con 0x561118cfd400 session 0x56111b64fdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 460 ms_handle_reset con 0x56111a6b2400 session 0x56111e5ce1c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170057728 unmapped: 51716096 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 460 handle_osd_map epochs [460,461], i have 460, src has [1,461]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.319390774s of 10.310555458s, submitted: 88
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 461 ms_handle_reset con 0x561118d18800 session 0x56111b7b0000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 461 ms_handle_reset con 0x5611197c0000 session 0x56111b64ec40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170098688 unmapped: 51675136 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 51666944 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 461 handle_osd_map epochs [461,462], i have 461, src has [1,462]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 462 ms_handle_reset con 0x56111b4a6000 session 0x561118b86fc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 462 ms_handle_reset con 0x561118cfd400 session 0x56111b51b340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170123264 unmapped: 51650560 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 462 heartbeat osd_stat(store_statfs(0x4d4be3000/0x0/0x4ffc00000, data 0x23b48201/0x23dc5000, compress 0x0/0x0/0x0, omap 0x60350, meta 0x71efcb0), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5618304 data_alloc: 218103808 data_used: 4788622
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170131456 unmapped: 51642368 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170131456 unmapped: 51642368 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 462 ms_handle_reset con 0x561118d18800 session 0x56111f863a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 462 heartbeat osd_stat(store_statfs(0x4d4be4000/0x0/0x4ffc00000, data 0x23b47d0f/0x23dc4000, compress 0x0/0x0/0x0, omap 0x60350, meta 0x71efcb0), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170131456 unmapped: 51642368 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 462 ms_handle_reset con 0x5611197c0000 session 0x56111b7b0a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 462 ms_handle_reset con 0x56111a6b2400 session 0x561118d896c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170131456 unmapped: 51642368 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170139648 unmapped: 51634176 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 464 ms_handle_reset con 0x5611197c1800 session 0x56111b64e540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5625386 data_alloc: 218103808 data_used: 4788622
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170164224 unmapped: 51609600 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 464 ms_handle_reset con 0x561118cfd400 session 0x56111f862000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170180608 unmapped: 51593216 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 464 heartbeat osd_stat(store_statfs(0x4d4bde000/0x0/0x4ffc00000, data 0x23b4b346/0x23dca000, compress 0x0/0x0/0x0, omap 0x6096a, meta 0x71ef696), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.685620308s of 10.746030807s, submitted: 36
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 464 ms_handle_reset con 0x561118d18800 session 0x56111b594540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 464 ms_handle_reset con 0x5611197c0000 session 0x56111b51c380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170213376 unmapped: 51560448 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 464 ms_handle_reset con 0x56111cfd0000 session 0x56111b51c000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 464 ms_handle_reset con 0x56111a6b2400 session 0x56111b51b6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 464 ms_handle_reset con 0x561118cfd400 session 0x56111b64f880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170885120 unmapped: 50888704 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170885120 unmapped: 50888704 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 465 ms_handle_reset con 0x5611197c0000 session 0x56111b39cfc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5677570 data_alloc: 218103808 data_used: 4788622
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170917888 unmapped: 50855936 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 465 handle_osd_map epochs [465,466], i have 465, src has [1,466]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 466 ms_handle_reset con 0x56111cfd0000 session 0x561118d89dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 172040192 unmapped: 49733632 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 466 handle_osd_map epochs [466,467], i have 466, src has [1,467]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 467 ms_handle_reset con 0x5611213e4c00 session 0x561119548a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169680896 unmapped: 52092928 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 467 heartbeat osd_stat(store_statfs(0x4d4216000/0x0/0x4ffc00000, data 0x24511b50/0x24794000, compress 0x0/0x0/0x0, omap 0x610d3, meta 0x71eef2d), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 467 ms_handle_reset con 0x561118d18800 session 0x56111a5db6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169680896 unmapped: 52092928 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169746432 unmapped: 52027392 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5707447 data_alloc: 218103808 data_used: 4789820
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 heartbeat osd_stat(store_statfs(0x4d4212000/0x0/0x4ffc00000, data 0x24515187/0x2479a000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 51896320 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 51896320 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 51896320 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 51896320 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 51896320 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5707447 data_alloc: 218103808 data_used: 4789820
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 51896320 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 heartbeat osd_stat(store_statfs(0x4d4212000/0x0/0x4ffc00000, data 0x24515187/0x2479a000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 51896320 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 51896320 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.674652100s of 15.477749825s, submitted: 186
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x561118cfd400 session 0x56111e5cea80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x5611197c0000 session 0x56111aaacfc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x56111cfd0000 session 0x56111b39c8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 heartbeat osd_stat(store_statfs(0x4d4212000/0x0/0x4ffc00000, data 0x24515187/0x2479a000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x5611213e4c00 session 0x56111b64f500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x56111af97400 session 0x56111b51d6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169951232 unmapped: 51822592 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169951232 unmapped: 51822592 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5756096 data_alloc: 218103808 data_used: 4789820
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169951232 unmapped: 51822592 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x561118cfd400 session 0x56111b7b0c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 heartbeat osd_stat(store_statfs(0x4d3aaf000/0x0/0x4ffc00000, data 0x24c771e9/0x24efd000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169951232 unmapped: 51822592 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x5611197c0000 session 0x56111b64e700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169951232 unmapped: 51822592 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x56111af97400 session 0x561119549c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 ms_handle_reset con 0x56111cfd0000 session 0x56111b595a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169984000 unmapped: 51789824 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 heartbeat osd_stat(store_statfs(0x4d3aad000/0x0/0x4ffc00000, data 0x24c7721c/0x24eff000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 heartbeat osd_stat(store_statfs(0x4d3aad000/0x0/0x4ffc00000, data 0x24c7721c/0x24eff000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 169992192 unmapped: 51781632 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5803759 data_alloc: 234881024 data_used: 12329532
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170663936 unmapped: 51109888 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170663936 unmapped: 51109888 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 heartbeat osd_stat(store_statfs(0x4d3aad000/0x0/0x4ffc00000, data 0x24c7721c/0x24eff000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170663936 unmapped: 51109888 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 heartbeat osd_stat(store_statfs(0x4d3aad000/0x0/0x4ffc00000, data 0x24c7721c/0x24eff000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.975400925s of 10.155759811s, submitted: 36
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x56111ac2a000 session 0x56111e157340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d3aad000/0x0/0x4ffc00000, data 0x24c7721c/0x24eff000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170803200 unmapped: 50970624 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170803200 unmapped: 50970624 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x561118cfd400 session 0x56111b7b1340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5810534 data_alloc: 234881024 data_used: 12333628
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170819584 unmapped: 50954240 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170819584 unmapped: 50954240 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170819584 unmapped: 50954240 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d3aa6000/0x0/0x4ffc00000, data 0x24c78e2b/0x24f04000, compress 0x0/0x0/0x0, omap 0x61435, meta 0x71eebcb), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170827776 unmapped: 50946048 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 170827776 unmapped: 50946048 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5885364 data_alloc: 234881024 data_used: 12386364
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 173604864 unmapped: 48168960 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x5611197c0000 session 0x561119833a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x56111af97400 session 0x561118b86000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 175882240 unmapped: 45891584 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x56111cfd0000 session 0x56111b39d880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x56111b7a1800 session 0x56111b64fc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x561118cfd400 session 0x56111a52b340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 174153728 unmapped: 47620096 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d2833000/0x0/0x4ffc00000, data 0x265d5e2b/0x26179000, compress 0x0/0x0/0x0, omap 0x613bb, meta 0x71eec45), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 174153728 unmapped: 47620096 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 174153728 unmapped: 47620096 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6010721 data_alloc: 234881024 data_used: 13717564
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 174227456 unmapped: 47546368 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 174227456 unmapped: 47546368 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.767096519s of 14.260155678s, submitted: 101
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x5611197c0000 session 0x56111ae58fc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d27f9000/0x0/0x4ffc00000, data 0x2660fe2b/0x261b3000, compress 0x0/0x0/0x0, omap 0x61093, meta 0x71eef6d), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 174235648 unmapped: 47538176 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47374336 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182763520 unmapped: 39010304 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6074030 data_alloc: 234881024 data_used: 23831812
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182763520 unmapped: 39010304 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d27d9000/0x0/0x4ffc00000, data 0x2662fe2b/0x261d3000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x71eed63), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182796288 unmapped: 38977536 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d27d9000/0x0/0x4ffc00000, data 0x2662fe2b/0x261d3000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x71eed63), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182419456 unmapped: 39354368 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182419456 unmapped: 39354368 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182542336 unmapped: 39231488 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d27cf000/0x0/0x4ffc00000, data 0x26639e2b/0x261dd000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x71eed63), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6074430 data_alloc: 234881024 data_used: 23831812
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182542336 unmapped: 39231488 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182542336 unmapped: 39231488 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d27cf000/0x0/0x4ffc00000, data 0x26639e2b/0x261dd000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x71eed63), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182575104 unmapped: 39198720 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.970912933s of 10.996688843s, submitted: 11
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d27cf000/0x0/0x4ffc00000, data 0x26639e2b/0x261dd000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x71eed63), peers [0,1] op hist [2,0,0,0,0,1])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 32940032 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 188956672 unmapped: 32817152 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6150228 data_alloc: 234881024 data_used: 24642308
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189841408 unmapped: 31932416 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189841408 unmapped: 31932416 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189841408 unmapped: 31932416 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x5611213e4c00 session 0x56111b64e540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x5611197df000 session 0x56111f863500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x56111a4ed800 session 0x56111e5ce000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d0acb000/0x0/0x4ffc00000, data 0x2719de2b/0x26d41000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x838ed63), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189865984 unmapped: 31907840 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x56111a4ed800 session 0x56111b1941c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 186572800 unmapped: 35201024 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d11ae000/0x0/0x4ffc00000, data 0x26a3bd96/0x265dc000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x838ed63), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6060219 data_alloc: 234881024 data_used: 17334020
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 186572800 unmapped: 35201024 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d11ae000/0x0/0x4ffc00000, data 0x26a3bd96/0x265dc000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x838ed63), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 heartbeat osd_stat(store_statfs(0x4d11ae000/0x0/0x4ffc00000, data 0x26a3bd96/0x265dc000, compress 0x0/0x0/0x0, omap 0x6129d, meta 0x838ed63), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 35061760 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 ms_handle_reset con 0x5611197c0000 session 0x56111ab3ec40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 469 handle_osd_map epochs [469,470], i have 470, src has [1,470]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 192995328 unmapped: 28778496 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 470 ms_handle_reset con 0x5611197df000 session 0x561118ca6a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 470 handle_osd_map epochs [470,471], i have 470, src has [1,471]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.291953087s of 10.047514915s, submitted: 217
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 471 ms_handle_reset con 0x5611213e4c00 session 0x561118b86380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 471 ms_handle_reset con 0x561118cfd400 session 0x561119511dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189669376 unmapped: 32104448 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189415424 unmapped: 32358400 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 471 handle_osd_map epochs [471,472], i have 471, src has [1,472]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 472 ms_handle_reset con 0x5611197c0000 session 0x561118ca6e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6152533 data_alloc: 234881024 data_used: 17342095
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189415424 unmapped: 32358400 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189431808 unmapped: 32342016 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 472 heartbeat osd_stat(store_statfs(0x4d091e000/0x0/0x4ffc00000, data 0x276220be/0x26eec000, compress 0x0/0x0/0x0, omap 0x61ddb, meta 0x838e225), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 473 heartbeat osd_stat(store_statfs(0x4d0920000/0x0/0x4ffc00000, data 0x276220be/0x26eec000, compress 0x0/0x0/0x0, omap 0x61ddb, meta 0x838e225), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 33267712 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 473 heartbeat osd_stat(store_statfs(0x4d0920000/0x0/0x4ffc00000, data 0x276220be/0x26eec000, compress 0x0/0x0/0x0, omap 0x61ddb, meta 0x838e225), peers [0,1] op hist [0,0,0,0,0,1])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 473 ms_handle_reset con 0x56111a4ed800 session 0x56111b39ddc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 188571648 unmapped: 33202176 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 473 ms_handle_reset con 0x5611197df000 session 0x56111e5ce8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 188571648 unmapped: 33202176 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 473 ms_handle_reset con 0x56111af97400 session 0x56111f862c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 473 ms_handle_reset con 0x56111cfd0000 session 0x56111b55f880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 473 ms_handle_reset con 0x5611213e4c00 session 0x56111e5ce700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5908357 data_alloc: 218103808 data_used: 7529517
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 473 ms_handle_reset con 0x5611197c0000 session 0x561118d88000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 181927936 unmapped: 39845888 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 181927936 unmapped: 39845888 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 473 handle_osd_map epochs [474,474], i have 473, src has [1,474]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 474 ms_handle_reset con 0x5611197df000 session 0x5611187a7500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 181927936 unmapped: 39845888 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.231451988s of 10.000329971s, submitted: 103
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 474 ms_handle_reset con 0x56111af97400 session 0x56111b64fa40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 474 ms_handle_reset con 0x56111a4ed800 session 0x561118d88380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 474 heartbeat osd_stat(store_statfs(0x4d2a89000/0x0/0x4ffc00000, data 0x251d9824/0x24d81000, compress 0x0/0x0/0x0, omap 0x6189d, meta 0x838e763), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 181927936 unmapped: 39845888 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 474 ms_handle_reset con 0x56111af97400 session 0x56111b39c700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 181927936 unmapped: 39845888 heap: 221773824 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5880052 data_alloc: 218103808 data_used: 6124688
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 211329024 unmapped: 23044096 heap: 234373120 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 236560384 unmapped: 27205632 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 474 heartbeat osd_stat(store_statfs(0x4cf68b000/0x0/0x4ffc00000, data 0x285d9876/0x28181000, compress 0x0/0x0/0x0, omap 0x61929, meta 0x838e6d7), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 186286080 unmapped: 77479936 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 474 heartbeat osd_stat(store_statfs(0x4cf68b000/0x0/0x4ffc00000, data 0x285d9876/0x28181000, compress 0x0/0x0/0x0, omap 0x61929, meta 0x838e6d7), peers [0,1] op hist [0,0,1,0,1,0,0,3])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 474 ms_handle_reset con 0x5611213e4c00 session 0x56111aaac000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183361536 unmapped: 80404480 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 474 ms_handle_reset con 0x56111984d000 session 0x56111b195dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 474 handle_osd_map epochs [474,475], i have 475, src has [1,475]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 184655872 unmapped: 79110144 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6693238 data_alloc: 218103808 data_used: 6125158
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189194240 unmapped: 74571776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 475 heartbeat osd_stat(store_statfs(0x4c9e87000/0x0/0x4ffc00000, data 0x2d9db292/0x2d583000, compress 0x0/0x0/0x0, omap 0x61cc5, meta 0x838e33b), peers [0,1] op hist [0,0,0,2,0,0,0,0,2])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189497344 unmapped: 74268672 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 189743104 unmapped: 74022912 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.498979330s of 10.035424232s, submitted: 270
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 475 heartbeat osd_stat(store_statfs(0x4c5289000/0x0/0x4ffc00000, data 0x329db292/0x32583000, compress 0x0/0x0/0x0, omap 0x61cc5, meta 0x838e33b), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 186875904 unmapped: 76890112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191512576 unmapped: 72253440 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x5611197df000 session 0x561118d89a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x5611197be800 session 0x56111ae58fc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x5611197c0000 session 0x56111990f180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x5611197df000 session 0x56111b39da40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x56111984d000 session 0x56111b51c700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7576446 data_alloc: 218103808 data_used: 6125158
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 187465728 unmapped: 76300288 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 475 heartbeat osd_stat(store_statfs(0x4bee88000/0x0/0x4ffc00000, data 0x38ddb77c/0x38984000, compress 0x0/0x0/0x0, omap 0x61cc5, meta 0x838e33b), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x56111af97400 session 0x56111b7b0700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x5611213e4c00 session 0x56111b51ce00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 187613184 unmapped: 76152832 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x5611197c0000 session 0x56111a52a1c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x5611197df000 session 0x56111ae58c40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 475 ms_handle_reset con 0x56111984d000 session 0x56111a52a8c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 187621376 unmapped: 76144640 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 475 handle_osd_map epochs [475,476], i have 475, src has [1,476]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 476 ms_handle_reset con 0x56111af97400 session 0x56111990e540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 476 heartbeat osd_stat(store_statfs(0x4d1289000/0x0/0x4ffc00000, data 0x251db220/0x24d81000, compress 0x0/0x0/0x0, omap 0x61d51, meta 0x838e2af), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 187637760 unmapped: 76128256 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 476 ms_handle_reset con 0x56111984c400 session 0x56111e5cf500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 476 ms_handle_reset con 0x5611197c0000 session 0x5611187a6000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 187637760 unmapped: 76128256 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5972731 data_alloc: 218103808 data_used: 6129156
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 476 heartbeat osd_stat(store_statfs(0x4d2a89000/0x0/0x4ffc00000, data 0x251dce00/0x24d83000, compress 0x0/0x0/0x0, omap 0x61e23, meta 0x838e1dd), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 187637760 unmapped: 76128256 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 476 handle_osd_map epochs [476,477], i have 477, src has [1,477]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 477 heartbeat osd_stat(store_statfs(0x4d2a89000/0x0/0x4ffc00000, data 0x251dce00/0x24d83000, compress 0x0/0x0/0x0, omap 0x61e23, meta 0x838e1dd), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 186859520 unmapped: 76906496 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 477 handle_osd_map epochs [478,478], i have 477, src has [1,478]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 478 ms_handle_reset con 0x5611197df000 session 0x561118ca61c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 478 ms_handle_reset con 0x56111984d000 session 0x56111b7b01c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 184631296 unmapped: 79134720 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 478 heartbeat osd_stat(store_statfs(0x4d3a16000/0x0/0x4ffc00000, data 0x23b63600/0x23df4000, compress 0x0/0x0/0x0, omap 0x62ca7, meta 0x838d359), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 184631296 unmapped: 79134720 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 478 handle_osd_map epochs [478,479], i have 478, src has [1,479]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.502179146s of 11.026341438s, submitted: 248
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 184631296 unmapped: 79134720 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 480 ms_handle_reset con 0x56111af97400 session 0x56111a52bdc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4459256 data_alloc: 218103808 data_used: 4740612
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 480 heartbeat osd_stat(store_statfs(0x4d5e10000/0x0/0x4ffc00000, data 0x21366e20/0x215fa000, compress 0x0/0x0/0x0, omap 0x62ca7, meta 0x838d359), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 184115200 unmapped: 79650816 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 480 ms_handle_reset con 0x5611197c0800 session 0x56111e5cf6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 480 heartbeat osd_stat(store_statfs(0x4d5e10000/0x0/0x4ffc00000, data 0x21366e20/0x215fa000, compress 0x0/0x0/0x0, omap 0x62ca7, meta 0x838d359), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 480 handle_osd_map epochs [481,481], i have 481, src has [1,481]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 481 ms_handle_reset con 0x5611197c0000 session 0x56111b7b0540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f660c000/0x0/0x4ffc00000, data 0xf6a4aa/0x11fe000, compress 0x0/0x0/0x0, omap 0x630b7, meta 0x838cf49), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3002299 data_alloc: 218103808 data_used: 4742436
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f660c000/0x0/0x4ffc00000, data 0xf6a4aa/0x11fe000, compress 0x0/0x0/0x0, omap 0x630b7, meta 0x838cf49), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 482 handle_osd_map epochs [482,483], i have 482, src has [1,483]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 483 ms_handle_reset con 0x5611197df000 session 0x56111b7b1880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 81518592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.828944206s of 11.260063171s, submitted: 204
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006853 data_alloc: 218103808 data_used: 4742436
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182255616 unmapped: 81510400 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 484 ms_handle_reset con 0x56111984d000 session 0x56111ab0afc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f6609000/0x0/0x4ffc00000, data 0xf6bf49/0x1201000, compress 0x0/0x0/0x0, omap 0x62827, meta 0x838d7d9), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 484 ms_handle_reset con 0x56111af97400 session 0x56111aaac1c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182272000 unmapped: 81494016 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182272000 unmapped: 81494016 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 484 ms_handle_reset con 0x5611197bec00 session 0x56111b7b1c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 484 ms_handle_reset con 0x5611197c0000 session 0x56111b39c540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f6604000/0x0/0x4ffc00000, data 0xf6dae5/0x1204000, compress 0x0/0x0/0x0, omap 0x624ff, meta 0x838db01), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 190660608 unmapped: 73105408 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f6604000/0x0/0x4ffc00000, data 0xf6dae5/0x1204000, compress 0x0/0x0/0x0, omap 0x624ff, meta 0x838db01), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 484 ms_handle_reset con 0x5611197df000 session 0x56111ac5afc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 484 ms_handle_reset con 0x56111984d000 session 0x56111f862fc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182288384 unmapped: 81477632 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 485 ms_handle_reset con 0x56111af97400 session 0x561118d88540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3208904 data_alloc: 218103808 data_used: 4742436
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182296576 unmapped: 81469440 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 485 ms_handle_reset con 0x56111ad52c00 session 0x56111ae59c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182296576 unmapped: 81469440 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 485 ms_handle_reset con 0x5611197c0000 session 0x56111b39d340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182321152 unmapped: 81444864 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 485 handle_osd_map epochs [485,486], i have 485, src has [1,486]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 486 ms_handle_reset con 0x5611197df000 session 0x56111f863c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 486 ms_handle_reset con 0x56111984d000 session 0x5611195496c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 486 heartbeat osd_stat(store_statfs(0x4f4200000/0x0/0x4ffc00000, data 0x3371271/0x360a000, compress 0x0/0x0/0x0, omap 0x611a3, meta 0x838ee5d), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 486 ms_handle_reset con 0x56111af97400 session 0x56111b51c540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182329344 unmapped: 81436672 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182329344 unmapped: 81436672 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.667217255s of 10.084074020s, submitted: 81
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 486 ms_handle_reset con 0x5611197df400 session 0x56111ae59dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3211305 data_alloc: 218103808 data_used: 4742436
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182337536 unmapped: 81428480 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 486 handle_osd_map epochs [487,487], i have 486, src has [1,487]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 487 ms_handle_reset con 0x5611197c0000 session 0x561119511500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182337536 unmapped: 81428480 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 487 ms_handle_reset con 0x5611197df000 session 0x56111b51a700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 487 ms_handle_reset con 0x56111984d000 session 0x561118b876c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 487 ms_handle_reset con 0x56111af97400 session 0x56111990e380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 487 heartbeat osd_stat(store_statfs(0x4f41fd000/0x0/0x4ffc00000, data 0x3372e61/0x360d000, compress 0x0/0x0/0x0, omap 0x5feb3, meta 0x839014d), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182345728 unmapped: 81420288 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 487 ms_handle_reset con 0x56111b7a2c00 session 0x56111aaac1c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182345728 unmapped: 81420288 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 487 handle_osd_map epochs [487,488], i have 488, src has [1,488]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 488 ms_handle_reset con 0x5611197c0000 session 0x56111e156e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 488 ms_handle_reset con 0x5611197df000 session 0x56111e157180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182673408 unmapped: 81092608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3222400 data_alloc: 218103808 data_used: 4743049
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182673408 unmapped: 81092608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f41d5000/0x0/0x4ffc00000, data 0x339891f/0x3635000, compress 0x0/0x0/0x0, omap 0x65467, meta 0x838ab99), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 488 ms_handle_reset con 0x56111af3d000 session 0x56111b7b0540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182673408 unmapped: 81092608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 182673408 unmapped: 81092608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183164928 unmapped: 80601088 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f41d4000/0x0/0x4ffc00000, data 0x3398981/0x3636000, compress 0x0/0x0/0x0, omap 0x65467, meta 0x838ab99), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 489 ms_handle_reset con 0x56111b193800 session 0x56111b51b180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183173120 unmapped: 80592896 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 490 ms_handle_reset con 0x56111b109000 session 0x56111b51b500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291466 data_alloc: 234881024 data_used: 13779755
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183205888 unmapped: 80560128 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 490 handle_osd_map epochs [491,491], i have 490, src has [1,491]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.358857155s of 10.488816261s, submitted: 78
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 491 ms_handle_reset con 0x561119817000 session 0x56111990e540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 491 ms_handle_reset con 0x5611197c0000 session 0x56111990fc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 491 ms_handle_reset con 0x56111b79bc00 session 0x561119832e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183238656 unmapped: 80527360 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 491 handle_osd_map epochs [492,492], i have 491, src has [1,492]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 492 ms_handle_reset con 0x5611197df000 session 0x561119548e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 492 ms_handle_reset con 0x56111af3d000 session 0x56111990f6c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 492 heartbeat osd_stat(store_statfs(0x4f41c0000/0x0/0x4ffc00000, data 0x339f78a/0x3643000, compress 0x0/0x0/0x0, omap 0x65c91, meta 0x838a36f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 80494592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 80494592 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 492 handle_osd_map epochs [493,493], i have 492, src has [1,493]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 492 handle_osd_map epochs [492,493], i have 493, src has [1,493]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 493 ms_handle_reset con 0x56111af3d000 session 0x56111b51d880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 493 ms_handle_reset con 0x5611197c0000 session 0x561119511880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183296000 unmapped: 80470016 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296870 data_alloc: 234881024 data_used: 13780741
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183296000 unmapped: 80470016 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f41c6000/0x0/0x4ffc00000, data 0x33a12d2/0x3644000, compress 0x0/0x0/0x0, omap 0x65c91, meta 0x838a36f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 183296000 unmapped: 80470016 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 194117632 unmapped: 69648384 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196657152 unmapped: 67108864 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191553536 unmapped: 72212480 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363414 data_alloc: 234881024 data_used: 13851397
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3800000/0x0/0x4ffc00000, data 0x3d392d2/0x3fdc000, compress 0x0/0x0/0x0, omap 0x65c91, meta 0x838a36f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191553536 unmapped: 72212480 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191553536 unmapped: 72212480 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3800000/0x0/0x4ffc00000, data 0x3d392d2/0x3fdc000, compress 0x0/0x0/0x0, omap 0x65c91, meta 0x838a36f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.044831276s of 11.408596992s, submitted: 134
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 493 ms_handle_reset con 0x5611197df000 session 0x5611187a6a80
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191578112 unmapped: 72187904 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 493 handle_osd_map epochs [493,494], i have 493, src has [1,494]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f3800000/0x0/0x4ffc00000, data 0x3d392d2/0x3fdc000, compress 0x0/0x0/0x0, omap 0x65c91, meta 0x838a36f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 494 ms_handle_reset con 0x56111b79bc00 session 0x561118d88380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 494 handle_osd_map epochs [495,495], i have 494, src has [1,495]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191578112 unmapped: 72187904 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 495 ms_handle_reset con 0x561119817000 session 0x56111b594e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191578112 unmapped: 72187904 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 495 handle_osd_map epochs [495,496], i have 495, src has [1,496]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 496 ms_handle_reset con 0x561119817000 session 0x56111b595dc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370888 data_alloc: 234881024 data_used: 13851397
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191660032 unmapped: 72105984 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 496 handle_osd_map epochs [497,497], i have 496, src has [1,497]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191668224 unmapped: 72097792 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f3825000/0x0/0x4ffc00000, data 0x3d3e6a2/0x3fe5000, compress 0x0/0x0/0x0, omap 0x661fd, meta 0x8389e03), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191791104 unmapped: 71974912 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 497 ms_handle_reset con 0x56111984d000 session 0x5611187a61c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 497 ms_handle_reset con 0x56111af97400 session 0x56111b7b1880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 497 ms_handle_reset con 0x5611197c0000 session 0x56111a52a1c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191127552 unmapped: 72638464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f3848000/0x0/0x4ffc00000, data 0x3d1c28b/0x3fc3000, compress 0x0/0x0/0x0, omap 0x66407, meta 0x8389bf9), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191127552 unmapped: 72638464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363146 data_alloc: 234881024 data_used: 13742853
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191127552 unmapped: 72638464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191127552 unmapped: 72638464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f3848000/0x0/0x4ffc00000, data 0x3d1c28b/0x3fc3000, compress 0x0/0x0/0x0, omap 0x66407, meta 0x8389bf9), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.998230934s of 10.173355103s, submitted: 115
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191135744 unmapped: 72630272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 497 handle_osd_map epochs [498,498], i have 497, src has [1,498]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 498 ms_handle_reset con 0x5611197df000 session 0x561118b87a40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 498 handle_osd_map epochs [498,499], i have 498, src has [1,499]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191168512 unmapped: 72597504 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 499 heartbeat osd_stat(store_statfs(0x4f3849000/0x0/0x4ffc00000, data 0x3d1c28b/0x3fc3000, compress 0x0/0x0/0x0, omap 0x66407, meta 0x8389bf9), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 499 ms_handle_reset con 0x5611197c0000 session 0x5611195496c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 499 ms_handle_reset con 0x561119817000 session 0x56111990e540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191168512 unmapped: 72597504 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 499 heartbeat osd_stat(store_statfs(0x4f3840000/0x0/0x4ffc00000, data 0x3d1f998/0x3fcc000, compress 0x0/0x0/0x0, omap 0x66ce0, meta 0x8389320), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 499 handle_osd_map epochs [499,500], i have 499, src has [1,500]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 500 ms_handle_reset con 0x56111af97400 session 0x56111b39c1c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 500 heartbeat osd_stat(store_statfs(0x4f3840000/0x0/0x4ffc00000, data 0x3d1f998/0x3fcc000, compress 0x0/0x0/0x0, omap 0x66ce0, meta 0x8389320), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379076 data_alloc: 234881024 data_used: 13747499
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191176704 unmapped: 72589312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 500 handle_osd_map epochs [501,501], i have 500, src has [1,501]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 501 ms_handle_reset con 0x56111984d000 session 0x56111b594e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 501 ms_handle_reset con 0x56111af3d000 session 0x56111f862700
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191176704 unmapped: 72589312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191184896 unmapped: 72581120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 501 heartbeat osd_stat(store_statfs(0x4f3835000/0x0/0x4ffc00000, data 0x3d235c4/0x3fd3000, compress 0x0/0x0/0x0, omap 0x670bc, meta 0x8388f44), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191184896 unmapped: 72581120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 501 ms_handle_reset con 0x5611197c0000 session 0x561119832380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191184896 unmapped: 72581120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 501 handle_osd_map epochs [502,502], i have 501, src has [1,502]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 502 ms_handle_reset con 0x561119817000 session 0x56111b51a380
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387813 data_alloc: 234881024 data_used: 13748100
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191201280 unmapped: 72564736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 502 ms_handle_reset con 0x56111984d000 session 0x56111f862e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191201280 unmapped: 72564736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 502 handle_osd_map epochs [503,503], i have 502, src has [1,503]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f3833000/0x0/0x4ffc00000, data 0x3d25170/0x3fd7000, compress 0x0/0x0/0x0, omap 0x6714a, meta 0x8388eb6), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191201280 unmapped: 72564736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 503 handle_osd_map epochs [504,504], i have 503, src has [1,504]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.963652611s of 11.022150993s, submitted: 45
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191217664 unmapped: 72548352 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 504 ms_handle_reset con 0x56111af97400 session 0x56111f863880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191225856 unmapped: 72540160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 504 heartbeat osd_stat(store_statfs(0x4f382d000/0x0/0x4ffc00000, data 0x3d288fc/0x3fdd000, compress 0x0/0x0/0x0, omap 0x66a7a, meta 0x8389586), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3405418 data_alloc: 234881024 data_used: 13748799
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191225856 unmapped: 72540160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191225856 unmapped: 72540160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 504 ms_handle_reset con 0x56111b193800 session 0x56111b7b1c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191250432 unmapped: 72515584 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 504 handle_osd_map epochs [504,505], i have 504, src has [1,505]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 505 ms_handle_reset con 0x5611197c0000 session 0x56111b594540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191275008 unmapped: 72491008 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 505 ms_handle_reset con 0x56111b193800 session 0x56111ab3ee00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 505 handle_osd_map epochs [506,506], i have 505, src has [1,506]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 506 ms_handle_reset con 0x561119817000 session 0x56111a52a000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 506 heartbeat osd_stat(store_statfs(0x4f382a000/0x0/0x4ffc00000, data 0x3d2a4b4/0x3fe0000, compress 0x0/0x0/0x0, omap 0x66b08, meta 0x83894f8), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191275008 unmapped: 72491008 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412970 data_alloc: 234881024 data_used: 14301320
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191275008 unmapped: 72491008 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 506 handle_osd_map epochs [507,507], i have 506, src has [1,507]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 191283200 unmapped: 72482816 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 507 handle_osd_map epochs [507,508], i have 507, src has [1,508]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 508 ms_handle_reset con 0x56111984d000 session 0x56111b64f180
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 508 ms_handle_reset con 0x56111af97400 session 0x56111b64fc00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 192069632 unmapped: 71696384 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f3821000/0x0/0x4ffc00000, data 0x3d2f868/0x3fe9000, compress 0x0/0x0/0x0, omap 0x66efe, meta 0x8389102), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f3821000/0x0/0x4ffc00000, data 0x3d2f868/0x3fe9000, compress 0x0/0x0/0x0, omap 0x66efe, meta 0x8389102), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 508 handle_osd_map epochs [509,509], i have 508, src has [1,509]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 508 handle_osd_map epochs [508,509], i have 509, src has [1,509]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.261071205s of 10.357838631s, submitted: 43
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 192086016 unmapped: 71680000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 509 ms_handle_reset con 0x5611197c0000 session 0x56111b7b0e00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 192086016 unmapped: 71680000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420553 data_alloc: 234881024 data_used: 14301320
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 509 handle_osd_map epochs [509,510], i have 509, src has [1,510]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 510 heartbeat osd_stat(store_statfs(0x4f381d000/0x0/0x4ffc00000, data 0x3d3149c/0x3feb000, compress 0x0/0x0/0x0, omap 0x673bf, meta 0x8388c41), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 192094208 unmapped: 71671808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 510 ms_handle_reset con 0x561119817000 session 0x56111b51da40
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 510 ms_handle_reset con 0x56111984d000 session 0x56111b51b880
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 192102400 unmapped: 71663616 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 510 handle_osd_map epochs [511,511], i have 510, src has [1,511]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 194969600 unmapped: 68796416 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 195166208 unmapped: 68599808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 511 handle_osd_map epochs [512,512], i have 511, src has [1,512]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 68542464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471954 data_alloc: 234881024 data_used: 19795778
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 68542464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f3418000/0x0/0x4ffc00000, data 0x4136277/0x43f2000, compress 0x0/0x0/0x0, omap 0x67b2b, meta 0x83884d5), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 195231744 unmapped: 68534272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 195231744 unmapped: 68534272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 512 handle_osd_map epochs [513,513], i have 512, src has [1,513]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.824200630s of 10.004947662s, submitted: 92
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196263936 unmapped: 67502080 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3015000/0x0/0x4ffc00000, data 0x4537d2e/0x47f5000, compress 0x0/0x0/0x0, omap 0x67bd9, meta 0x8388427), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196263936 unmapped: 67502080 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3015000/0x0/0x4ffc00000, data 0x4537d2e/0x47f5000, compress 0x0/0x0/0x0, omap 0x67bd9, meta 0x8388427), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497044 data_alloc: 234881024 data_used: 19795778
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196272128 unmapped: 67493888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3015000/0x0/0x4ffc00000, data 0x4537d2e/0x47f5000, compress 0x0/0x0/0x0, omap 0x67bd9, meta 0x8388427), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196272128 unmapped: 67493888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196272128 unmapped: 67493888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196272128 unmapped: 67493888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196272128 unmapped: 67493888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497188 data_alloc: 234881024 data_used: 19795778
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196272128 unmapped: 67493888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3015000/0x0/0x4ffc00000, data 0x4537d2e/0x47f5000, compress 0x0/0x0/0x0, omap 0x67bd9, meta 0x8388427), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196272128 unmapped: 67493888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 196272128 unmapped: 67493888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3015000/0x0/0x4ffc00000, data 0x4537d2e/0x47f5000, compress 0x0/0x0/0x0, omap 0x67bd9, meta 0x8388427), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200187904 unmapped: 63578112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200187904 unmapped: 63578112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532516 data_alloc: 234881024 data_used: 23859010
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 63569920 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f2b10000/0x0/0x4ffc00000, data 0x4a3ed2e/0x4cfc000, compress 0x0/0x0/0x0, omap 0x67bd9, meta 0x8388427), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 63569920 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 63569920 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f2b10000/0x0/0x4ffc00000, data 0x4a3ed2e/0x4cfc000, compress 0x0/0x0/0x0, omap 0x67bd9, meta 0x8388427), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532516 data_alloc: 234881024 data_used: 23859010
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f2b10000/0x0/0x4ffc00000, data 0x4a3ed2e/0x4cfc000, compress 0x0/0x0/0x0, omap 0x67bd9, meta 0x8388427), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200409088 unmapped: 63356928 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.329088211s of 20.369134903s, submitted: 28
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 ms_handle_reset con 0x56111b79bc00 session 0x56111b51ddc0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 ms_handle_reset con 0x56111b109000 session 0x5611187a61c0
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 ms_handle_reset con 0x5611197c0000 session 0x561119511340
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f2b10000/0x0/0x4ffc00000, data 0x4a3ed2e/0x4cfc000, compress 0x0/0x0/0x0, omap 0x671a1, meta 0x8388e5f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536172 data_alloc: 234881024 data_used: 25030466
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 ms_handle_reset con 0x561119817000 session 0x561118a3d500
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 ms_handle_reset con 0x56111984d000 session 0x56111e156540
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3818000/0x0/0x4ffc00000, data 0x3d37d1e/0x3ff4000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455560 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 ms_handle_reset con 0x56111b79bc00 session 0x56111e156000
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 ms_handle_reset con 0x56111b193800 session 0x5611187a7c00
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200179712 unmapped: 63586304 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200187904 unmapped: 63578112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200187904 unmapped: 63578112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200187904 unmapped: 63578112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200187904 unmapped: 63578112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200187904 unmapped: 63578112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200187904 unmapped: 63578112 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 63569920 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 63569920 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200196096 unmapped: 63569920 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200204288 unmapped: 63561728 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200212480 unmapped: 63553536 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200212480 unmapped: 63553536 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200212480 unmapped: 63553536 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: mgrc ms_handle_reset ms_handle_reset con 0x56111aedf400
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2811058765
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2811058765,v1:192.168.122.100:6801/2811058765]
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: mgrc handle_mgr_configure stats_period=5
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200417280 unmapped: 63348736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200417280 unmapped: 63348736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200417280 unmapped: 63348736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200417280 unmapped: 63348736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200417280 unmapped: 63348736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200425472 unmapped: 63340544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200433664 unmapped: 63332352 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200433664 unmapped: 63332352 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 63324160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 63324160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 63324160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 63324160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 63324160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 63324160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 63324160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 63324160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200450048 unmapped: 63315968 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200450048 unmapped: 63315968 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200458240 unmapped: 63307776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200458240 unmapped: 63307776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200458240 unmapped: 63307776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200458240 unmapped: 63307776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200458240 unmapped: 63307776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200458240 unmapped: 63307776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'config diff' '{prefix=config diff}'
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'config show' '{prefix=config show}'
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200728576 unmapped: 63037440 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 200876032 unmapped: 62889984 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'log dump' '{prefix=log dump}'
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'perf dump' '{prefix=perf dump}'
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'perf schema' '{prefix=perf schema}'
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201277440 unmapped: 62488576 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201277440 unmapped: 62488576 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201277440 unmapped: 62488576 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201277440 unmapped: 62488576 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201277440 unmapped: 62488576 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201277440 unmapped: 62488576 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201277440 unmapped: 62488576 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201285632 unmapped: 62480384 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201293824 unmapped: 62472192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201293824 unmapped: 62472192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201293824 unmapped: 62472192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201293824 unmapped: 62472192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201293824 unmapped: 62472192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 62464000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 62464000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 62464000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 62464000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 62464000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 62464000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457328 data_alloc: 234881024 data_used: 22884146
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201302016 unmapped: 62464000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 112.833915710s of 112.986618042s, submitted: 28
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 62431232 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 62431232 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 62431232 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 62431232 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 62431232 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 62431232 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 62431232 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201334784 unmapped: 62431232 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 62423040 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 62423040 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 62423040 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 62423040 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 62423040 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 62423040 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 62423040 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201342976 unmapped: 62423040 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 62414848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 62414848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 62414848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 62414848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 62414848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 62414848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 62414848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 62414848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201367552 unmapped: 62398464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201367552 unmapped: 62398464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201375744 unmapped: 62390272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201375744 unmapped: 62390272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201383936 unmapped: 62382080 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201383936 unmapped: 62382080 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201383936 unmapped: 62382080 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201383936 unmapped: 62382080 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201383936 unmapped: 62382080 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201383936 unmapped: 62382080 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201392128 unmapped: 62373888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201392128 unmapped: 62373888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201392128 unmapped: 62373888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201392128 unmapped: 62373888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201392128 unmapped: 62373888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201392128 unmapped: 62373888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201392128 unmapped: 62373888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201392128 unmapped: 62373888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201392128 unmapped: 62373888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201392128 unmapped: 62373888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201392128 unmapped: 62373888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201392128 unmapped: 62373888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201392128 unmapped: 62373888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201392128 unmapped: 62373888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201408512 unmapped: 62357504 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201408512 unmapped: 62357504 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201424896 unmapped: 62341120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201424896 unmapped: 62341120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201424896 unmapped: 62341120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201424896 unmapped: 62341120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201424896 unmapped: 62341120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201424896 unmapped: 62341120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201424896 unmapped: 62341120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201424896 unmapped: 62341120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201433088 unmapped: 62332928 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201433088 unmapped: 62332928 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201416704 unmapped: 62349312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201424896 unmapped: 62341120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201433088 unmapped: 62332928 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201433088 unmapped: 62332928 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201433088 unmapped: 62332928 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201433088 unmapped: 62332928 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201433088 unmapped: 62332928 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201433088 unmapped: 62332928 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201433088 unmapped: 62332928 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201441280 unmapped: 62324736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201449472 unmapped: 62316544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201449472 unmapped: 62316544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201449472 unmapped: 62316544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201449472 unmapped: 62316544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201449472 unmapped: 62316544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201449472 unmapped: 62316544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201449472 unmapped: 62316544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201449472 unmapped: 62316544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201449472 unmapped: 62316544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201449472 unmapped: 62316544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201449472 unmapped: 62316544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201449472 unmapped: 62316544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201449472 unmapped: 62316544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201457664 unmapped: 62308352 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201457664 unmapped: 62308352 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201457664 unmapped: 62308352 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 62300160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 62300160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 62300160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 62300160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 62300160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 62300160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 62300160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 62300160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 62300160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 62300160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 62300160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 62300160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 62300160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 62300160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 62300160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 62291968 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 62291968 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 62291968 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 62291968 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 62291968 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 62291968 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 62291968 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 62291968 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201482240 unmapped: 62283776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201482240 unmapped: 62283776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201482240 unmapped: 62283776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201482240 unmapped: 62283776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201482240 unmapped: 62283776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201482240 unmapped: 62283776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201482240 unmapped: 62283776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201482240 unmapped: 62283776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201482240 unmapped: 62283776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201482240 unmapped: 62283776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201490432 unmapped: 62275584 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201490432 unmapped: 62275584 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201490432 unmapped: 62275584 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201072640 unmapped: 62693376 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201072640 unmapped: 62693376 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201072640 unmapped: 62693376 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201072640 unmapped: 62693376 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201072640 unmapped: 62693376 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201072640 unmapped: 62693376 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201072640 unmapped: 62693376 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201072640 unmapped: 62693376 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201072640 unmapped: 62693376 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201072640 unmapped: 62693376 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201072640 unmapped: 62693376 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201072640 unmapped: 62693376 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201080832 unmapped: 62685184 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201089024 unmapped: 62676992 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201089024 unmapped: 62676992 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201089024 unmapped: 62676992 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201089024 unmapped: 62676992 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201089024 unmapped: 62676992 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201089024 unmapped: 62676992 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201089024 unmapped: 62676992 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201089024 unmapped: 62676992 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201089024 unmapped: 62676992 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201089024 unmapped: 62676992 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201089024 unmapped: 62676992 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201089024 unmapped: 62676992 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201089024 unmapped: 62676992 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201089024 unmapped: 62676992 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201089024 unmapped: 62676992 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201089024 unmapped: 62676992 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62660608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62660608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62660608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62660608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62660608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62660608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62660608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62660608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62660608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62660608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62660608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62660608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62660608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62660608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62660608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201105408 unmapped: 62660608 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 62652416 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 62652416 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 62652416 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 62652416 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 62652416 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 62652416 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 62652416 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 62652416 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201121792 unmapped: 62644224 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201121792 unmapped: 62644224 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201121792 unmapped: 62644224 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201121792 unmapped: 62644224 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201121792 unmapped: 62644224 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201121792 unmapped: 62644224 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201121792 unmapped: 62644224 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201121792 unmapped: 62644224 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201121792 unmapped: 62644224 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201121792 unmapped: 62644224 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3001.1 total, 600.0 interval#012Cumulative writes: 23K writes, 97K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 23K writes, 8358 syncs, 2.83 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4233 writes, 21K keys, 4233 commit groups, 1.0 writes per commit group, ingest: 13.75 MB, 0.02 MB/s#012Interval WAL: 4233 writes, 1779 syncs, 2.38 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201129984 unmapped: 62636032 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201138176 unmapped: 62627840 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201138176 unmapped: 62627840 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201138176 unmapped: 62627840 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201138176 unmapped: 62627840 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201138176 unmapped: 62627840 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201138176 unmapped: 62627840 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201146368 unmapped: 62619648 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201146368 unmapped: 62619648 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201146368 unmapped: 62619648 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201146368 unmapped: 62619648 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201146368 unmapped: 62619648 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201146368 unmapped: 62619648 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201146368 unmapped: 62619648 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201146368 unmapped: 62619648 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201146368 unmapped: 62619648 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201146368 unmapped: 62619648 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201146368 unmapped: 62619648 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201146368 unmapped: 62619648 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201146368 unmapped: 62619648 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201146368 unmapped: 62619648 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201146368 unmapped: 62619648 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201154560 unmapped: 62611456 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201154560 unmapped: 62611456 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201154560 unmapped: 62611456 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201154560 unmapped: 62611456 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201154560 unmapped: 62611456 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201154560 unmapped: 62611456 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201154560 unmapped: 62611456 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201154560 unmapped: 62611456 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201154560 unmapped: 62611456 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201162752 unmapped: 62603264 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201162752 unmapped: 62603264 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201162752 unmapped: 62603264 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201162752 unmapped: 62603264 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201162752 unmapped: 62603264 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201162752 unmapped: 62603264 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201162752 unmapped: 62603264 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201170944 unmapped: 62595072 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201179136 unmapped: 62586880 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201179136 unmapped: 62586880 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201179136 unmapped: 62586880 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201179136 unmapped: 62586880 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201179136 unmapped: 62586880 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201179136 unmapped: 62586880 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201179136 unmapped: 62586880 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201179136 unmapped: 62586880 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201187328 unmapped: 62578688 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201187328 unmapped: 62578688 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201195520 unmapped: 62570496 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201195520 unmapped: 62570496 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201195520 unmapped: 62570496 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201195520 unmapped: 62570496 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201195520 unmapped: 62570496 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201195520 unmapped: 62570496 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 295.871520996s of 295.908813477s, submitted: 24
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 201195520 unmapped: 62570496 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [1])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202317824 unmapped: 61448192 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 61440000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 61440000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 61440000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 61440000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 61440000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 61440000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 61440000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 61440000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 61440000 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 61431808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 61431808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 61431808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 61431808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 61431808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 61431808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 61431808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 61431808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 61431808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 61431808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 61431808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 61431808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202334208 unmapped: 61431808 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202342400 unmapped: 61423616 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202342400 unmapped: 61423616 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202342400 unmapped: 61423616 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202350592 unmapped: 61415424 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202350592 unmapped: 61415424 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202350592 unmapped: 61415424 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202350592 unmapped: 61415424 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202350592 unmapped: 61415424 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202358784 unmapped: 61407232 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202358784 unmapped: 61407232 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202358784 unmapped: 61407232 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202358784 unmapped: 61407232 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202358784 unmapped: 61407232 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202358784 unmapped: 61407232 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202358784 unmapped: 61407232 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202366976 unmapped: 61399040 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 61390848 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202391552 unmapped: 61374464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202391552 unmapped: 61374464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202391552 unmapped: 61374464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202391552 unmapped: 61374464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202391552 unmapped: 61374464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202391552 unmapped: 61374464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202391552 unmapped: 61374464 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 61366272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 61366272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 61366272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 61366272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 61366272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 61366272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 61366272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 61366272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 61366272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 61366272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 61366272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 61366272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 61366272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 61366272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202399744 unmapped: 61366272 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202407936 unmapped: 61358080 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202407936 unmapped: 61358080 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202407936 unmapped: 61358080 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202416128 unmapped: 61349888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202416128 unmapped: 61349888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202416128 unmapped: 61349888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202416128 unmapped: 61349888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202416128 unmapped: 61349888 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202424320 unmapped: 61341696 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202424320 unmapped: 61341696 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202432512 unmapped: 61333504 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202432512 unmapped: 61333504 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202432512 unmapped: 61333504 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202432512 unmapped: 61333504 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202432512 unmapped: 61333504 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202432512 unmapped: 61333504 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202432512 unmapped: 61333504 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202440704 unmapped: 61325312 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202448896 unmapped: 61317120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202448896 unmapped: 61317120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202448896 unmapped: 61317120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202448896 unmapped: 61317120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202448896 unmapped: 61317120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202448896 unmapped: 61317120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202448896 unmapped: 61317120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202448896 unmapped: 61317120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202448896 unmapped: 61317120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202448896 unmapped: 61317120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202448896 unmapped: 61317120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202448896 unmapped: 61317120 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202457088 unmapped: 61308928 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202457088 unmapped: 61308928 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202457088 unmapped: 61308928 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202457088 unmapped: 61308928 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202465280 unmapped: 61300736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202465280 unmapped: 61300736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202465280 unmapped: 61300736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202465280 unmapped: 61300736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202465280 unmapped: 61300736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202465280 unmapped: 61300736 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202473472 unmapped: 61292544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202473472 unmapped: 61292544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202473472 unmapped: 61292544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202473472 unmapped: 61292544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202473472 unmapped: 61292544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202473472 unmapped: 61292544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202473472 unmapped: 61292544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202473472 unmapped: 61292544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202473472 unmapped: 61292544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202473472 unmapped: 61292544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202473472 unmapped: 61292544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202473472 unmapped: 61292544 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202481664 unmapped: 61284352 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202481664 unmapped: 61284352 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202481664 unmapped: 61284352 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202489856 unmapped: 61276160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202489856 unmapped: 61276160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202489856 unmapped: 61276160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202489856 unmapped: 61276160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202489856 unmapped: 61276160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202489856 unmapped: 61276160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202489856 unmapped: 61276160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202489856 unmapped: 61276160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202489856 unmapped: 61276160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202489856 unmapped: 61276160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202489856 unmapped: 61276160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202489856 unmapped: 61276160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202489856 unmapped: 61276160 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202498048 unmapped: 61267968 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202498048 unmapped: 61267968 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202498048 unmapped: 61267968 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202498048 unmapped: 61267968 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 61259776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202506240 unmapped: 61259776 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202514432 unmapped: 61251584 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202514432 unmapped: 61251584 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202514432 unmapped: 61251584 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202514432 unmapped: 61251584 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202514432 unmapped: 61251584 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202530816 unmapped: 61235200 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202539008 unmapped: 61227008 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202539008 unmapped: 61227008 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202539008 unmapped: 61227008 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202539008 unmapped: 61227008 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202539008 unmapped: 61227008 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202539008 unmapped: 61227008 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202555392 unmapped: 61210624 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202571776 unmapped: 61194240 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202571776 unmapped: 61194240 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202571776 unmapped: 61194240 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202571776 unmapped: 61194240 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202571776 unmapped: 61194240 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202571776 unmapped: 61194240 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202571776 unmapped: 61194240 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202571776 unmapped: 61194240 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 61186048 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 61186048 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 61186048 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 61186048 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 61186048 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 61186048 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 61186048 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 61186048 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 61186048 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 61186048 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 61186048 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 61186048 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 61186048 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: osd.2 513 heartbeat osd_stat(store_statfs(0x4f3817000/0x0/0x4ffc00000, data 0x3d37d2e/0x3ff5000, compress 0x0/0x0/0x0, omap 0x672c1, meta 0x8388d3f), peers [0,1] op hist [])
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457184 data_alloc: 234881024 data_used: 22884452
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202579968 unmapped: 61186048 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'config diff' '{prefix=config diff}'
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'config show' '{prefix=config show}'
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202522624 unmapped: 61243392 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: prioritycache tune_memory target: 4294967296 mapped: 202407936 unmapped: 61358080 heap: 263766016 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:16 np0005603435 ceph-osd[87920]: do_command 'log dump' '{prefix=log dump}'
Jan 31 00:14:16 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 31 00:14:16 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3748292026' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 31 00:14:16 np0005603435 nova_compute[239938]: 2026-01-31 05:14:16.943 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19474 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:14:17 np0005603435 nova_compute[239938]: 2026-01-31 05:14:17.060 239942 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 00:14:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 31 00:14:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/785710496' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19478 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:14:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.zvcgqa", "name": "rgw_frontends"} v 0)
Jan 31 00:14:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.zvcgqa", "name": "rgw_frontends"} : dispatch
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.807126732242677e-06 of space, bias 1.0, pg target 0.002642138019672803 quantized to 32 (current 32)
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029091159681203568 of space, bias 1.0, pg target 0.872734790436107 quantized to 32 (current 32)
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.2170727414265735e-06 of space, bias 1.0, pg target 0.00036512182242797207 quantized to 32 (current 32)
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669402130260096 of space, bias 1.0, pg target 0.20008206390780287 quantized to 32 (current 32)
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.278901575288092e-07 of space, bias 4.0, pg target 0.000993468189034571 quantized to 16 (current 16)
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 00:14:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 31 00:14:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3578474183' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 00:14:17 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19482 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:14:17 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.zvcgqa", "name": "rgw_frontends"} v 0)
Jan 31 00:14:17 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/861726551' entity='mgr.compute-0.wyngmr' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.zvcgqa", "name": "rgw_frontends"} : dispatch
Jan 31 00:14:18 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:14:18 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19486 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:14:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 31 00:14:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3224113208' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 00:14:18 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19488 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:14:18 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 00:14:18 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1508509922' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 00:14:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 00:14:19 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19492 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 00:14:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 31 00:14:19 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1922615280' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 00:14:19 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19496 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 00:14:19 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 31 00:14:19 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999330463' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 31 00:14:20 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19500 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 00:14:20 np0005603435 ceph-mgr[75599]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 31 00:14:20 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19504 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 244 ms_handle_reset con 0x55b8218e0000 session 0x55b81caed500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 245 ms_handle_reset con 0x55b81dc13400 session 0x55b81db06380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 245 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81df056c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 245 ms_handle_reset con 0x55b81d85ac00 session 0x55b81b38a1c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2128241 data_alloc: 234881024 data_used: 26665999
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 153165824 unmapped: 5881856 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 245 heartbeat osd_stat(store_statfs(0x4f824e000/0x0/0x4ffc00000, data 0x3abafc5/0x3c24000, compress 0x0/0x0/0x0, omap 0x3965f, meta 0x3d369a1), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 245 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 246 ms_handle_reset con 0x55b81dc12400 session 0x55b81de1ba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152190976 unmapped: 6856704 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 246 ms_handle_reset con 0x55b81dc12800 session 0x55b81df688c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 246 ms_handle_reset con 0x55b81d85ac00 session 0x55b81a771880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 246 ms_handle_reset con 0x55b81dc12400 session 0x55b81d230e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152190976 unmapped: 6856704 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 246 ms_handle_reset con 0x55b81dc12800 session 0x55b81b38b500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 246 handle_osd_map epochs [246,247], i have 246, src has [1,247]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 247 ms_handle_reset con 0x55b81dc13400 session 0x55b81b38a700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152199168 unmapped: 6848512 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 247 ms_handle_reset con 0x55b8218e0400 session 0x55b81ab5d180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152346624 unmapped: 6701056 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 248 ms_handle_reset con 0x55b81dc12c00 session 0x55b81b8bf500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 248 heartbeat osd_stat(store_statfs(0x4f8262000/0x0/0x4ffc00000, data 0x3abe5e0/0x3c2a000, compress 0x0/0x0/0x0, omap 0x39aa0, meta 0x3d36560), peers [0,2] op hist [0,0,0,0,0,1,1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 248 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81a800380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2129186 data_alloc: 234881024 data_used: 26666682
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 248 ms_handle_reset con 0x55b81d85ac00 session 0x55b81de1a1c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152354816 unmapped: 6692864 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 248 ms_handle_reset con 0x55b81dc12400 session 0x55b81d4f2a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152363008 unmapped: 6684672 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 248 heartbeat osd_stat(store_statfs(0x4f825c000/0x0/0x4ffc00000, data 0x3ac01de/0x3c2e000, compress 0x0/0x0/0x0, omap 0x39c4b, meta 0x3d363b5), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152363008 unmapped: 6684672 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 249 ms_handle_reset con 0x55b81dc12800 session 0x55b81b887a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 249 ms_handle_reset con 0x55b81dc13400 session 0x55b81b887880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152182784 unmapped: 6864896 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 249 heartbeat osd_stat(store_statfs(0x4f825c000/0x0/0x4ffc00000, data 0x3ac1cfa/0x3c2e000, compress 0x0/0x0/0x0, omap 0x3a369, meta 0x3d35c97), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152182784 unmapped: 6864896 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 249 ms_handle_reset con 0x55b81d85ac00 session 0x55b81df68540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.074841499s of 10.528203011s, submitted: 125
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 249 heartbeat osd_stat(store_statfs(0x4f825c000/0x0/0x4ffc00000, data 0x3ac1cfa/0x3c2e000, compress 0x0/0x0/0x0, omap 0x3a369, meta 0x3d35c97), peers [0,2] op hist [0,0,0,0,0,1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2130229 data_alloc: 234881024 data_used: 26667274
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152166400 unmapped: 6881280 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152166400 unmapped: 6881280 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 250 ms_handle_reset con 0x55b8218e0800 session 0x55b81dfab6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152297472 unmapped: 6750208 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 251 ms_handle_reset con 0x55b81dc12c00 session 0x55b81b38bdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 251 ms_handle_reset con 0x55b8218e0c00 session 0x55b81b8bf880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 251 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81dfaba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 251 heartbeat osd_stat(store_statfs(0x4f8253000/0x0/0x4ffc00000, data 0x3ac5a1d/0x3c37000, compress 0x0/0x0/0x0, omap 0x3a620, meta 0x3d359e0), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152461312 unmapped: 6586368 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152461312 unmapped: 6586368 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 251 ms_handle_reset con 0x55b81dc12400 session 0x55b81df68a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2140794 data_alloc: 234881024 data_used: 26923274
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 251 ms_handle_reset con 0x55b81dc12c00 session 0x55b81ac2a000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 151543808 unmapped: 7503872 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 252 ms_handle_reset con 0x55b81d85ac00 session 0x55b81caece00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 253 ms_handle_reset con 0x55b81dc13400 session 0x55b81b7aa700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 253 ms_handle_reset con 0x55b81dc13c00 session 0x55b81ac04540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 253 ms_handle_reset con 0x55b81dc13800 session 0x55b81b86f340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 151552000 unmapped: 7495680 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 254 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81df68700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 254 ms_handle_reset con 0x55b81dc12400 session 0x55b81b38ba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 254 heartbeat osd_stat(store_statfs(0x4f824c000/0x0/0x4ffc00000, data 0x3ac8bb9/0x3c3c000, compress 0x0/0x0/0x0, omap 0x3aaac, meta 0x3d35554), peers [0,2] op hist [1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 254 ms_handle_reset con 0x55b81d85ac00 session 0x55b81d1d88c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145743872 unmapped: 13303808 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 254 ms_handle_reset con 0x55b81bb33c00 session 0x55b81b443c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 254 ms_handle_reset con 0x55b81dc13800 session 0x55b81d1d8c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 254 ms_handle_reset con 0x55b81d85a400 session 0x55b81cef1340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145743872 unmapped: 13303808 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 255 ms_handle_reset con 0x55b81dc13c00 session 0x55b81aa63180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 255 ms_handle_reset con 0x55b81de51000 session 0x55b81cef0c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 255 ms_handle_reset con 0x55b81dc12000 session 0x55b81ac2a1c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145743872 unmapped: 13303808 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 255 handle_osd_map epochs [255,256], i have 255, src has [1,256]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.642595291s of 10.042186737s, submitted: 182
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 256 ms_handle_reset con 0x55b81dc12c00 session 0x55b81ac4f500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 256 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81b8be380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1813890 data_alloc: 218103808 data_used: 7188828
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 256 ms_handle_reset con 0x55b81dc12000 session 0x55b81aa628c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 256 ms_handle_reset con 0x55b81d85a400 session 0x55b81ac4e8c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 256 ms_handle_reset con 0x55b81dc13800 session 0x55b81de1a700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 22650880 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 22650880 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 256 heartbeat osd_stat(store_statfs(0x4fa8da000/0x0/0x4ffc00000, data 0x120ea3a/0x1383000, compress 0x0/0x0/0x0, omap 0x3db2f, meta 0x3d324d1), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 256 ms_handle_reset con 0x55b81d85a400 session 0x55b81da39340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 22634496 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 256 heartbeat osd_stat(store_statfs(0x4fab07000/0x0/0x4ffc00000, data 0x120eaac/0x1385000, compress 0x0/0x0/0x0, omap 0x3db2f, meta 0x3d324d1), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 257 ms_handle_reset con 0x55b81dc12c00 session 0x55b81b38bc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 22634496 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 258 ms_handle_reset con 0x55b81dc12000 session 0x55b81ac4fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 258 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81b442a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 22634496 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 258 ms_handle_reset con 0x55b81de51000 session 0x55b81aa70fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 258 handle_osd_map epochs [258,259], i have 258, src has [1,259]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 259 heartbeat osd_stat(store_statfs(0x4faafd000/0x0/0x4ffc00000, data 0x121223c/0x138b000, compress 0x0/0x0/0x0, omap 0x3dfe1, meta 0x3d3201f), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1812999 data_alloc: 218103808 data_used: 6803212
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 22618112 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 260 heartbeat osd_stat(store_statfs(0x4faaf7000/0x0/0x4ffc00000, data 0x1215b02/0x1391000, compress 0x0/0x0/0x0, omap 0x3e623, meta 0x3d319dd), peers [0,2] op hist [0,0,0,0,0,0,0,2])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136445952 unmapped: 22601728 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 260 ms_handle_reset con 0x55b81dc13c00 session 0x55b81b7aaa80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 260 handle_osd_map epochs [260,261], i have 260, src has [1,261]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 260 ms_handle_reset con 0x55b81dc12000 session 0x55b81d4f3dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 22593536 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 261 heartbeat osd_stat(store_statfs(0x4faaf7000/0x0/0x4ffc00000, data 0x1215ab0/0x1391000, compress 0x0/0x0/0x0, omap 0x3e623, meta 0x3d319dd), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 261 heartbeat osd_stat(store_statfs(0x4faaf2000/0x0/0x4ffc00000, data 0x12176bc/0x1394000, compress 0x0/0x0/0x0, omap 0x3eaad, meta 0x3d31553), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 22568960 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 261 handle_osd_map epochs [261,262], i have 261, src has [1,262]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 262 handle_osd_map epochs [262,262], i have 262, src has [1,262]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 262 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81ac2a1c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 262 ms_handle_reset con 0x55b81dc12c00 session 0x55b81da2efc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 22552576 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.671566010s of 10.007779121s, submitted: 194
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1825652 data_alloc: 218103808 data_used: 6803895
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136519680 unmapped: 22528000 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 263 ms_handle_reset con 0x55b8218e0800 session 0x55b81caed180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 263 ms_handle_reset con 0x55b81d85a400 session 0x55b81df636c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 136519680 unmapped: 22528000 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 263 ms_handle_reset con 0x55b8218e1000 session 0x55b81b443180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 263 heartbeat osd_stat(store_statfs(0x4faaf3000/0x0/0x4ffc00000, data 0x121ae8c/0x1399000, compress 0x0/0x0/0x0, omap 0x3efd9, meta 0x3d31027), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137576448 unmapped: 21471232 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 263 heartbeat osd_stat(store_statfs(0x4faaf3000/0x0/0x4ffc00000, data 0x121ae8c/0x1399000, compress 0x0/0x0/0x0, omap 0x3efd9, meta 0x3d31027), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137592832 unmapped: 21454848 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 263 handle_osd_map epochs [263,264], i have 263, src has [1,264]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 264 ms_handle_reset con 0x55b81dc12000 session 0x55b81b8bf880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137601024 unmapped: 21446656 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1828649 data_alloc: 218103808 data_used: 6803895
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 264 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81dfaba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 264 ms_handle_reset con 0x55b81dc12c00 session 0x55b81ac04380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 21405696 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 21389312 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 265 ms_handle_reset con 0x55b81dc12c00 session 0x55b81a800700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 265 heartbeat osd_stat(store_statfs(0x4faaec000/0x0/0x4ffc00000, data 0x121e650/0x139c000, compress 0x0/0x0/0x0, omap 0x3faf9, meta 0x3d30507), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 21389312 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 21389312 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 21389312 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1829759 data_alloc: 218103808 data_used: 6803781
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 21389312 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 21389312 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.645518303s of 11.900569916s, submitted: 157
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 265 ms_handle_reset con 0x55b81d85a400 session 0x55b81d4f3880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 265 heartbeat osd_stat(store_statfs(0x4faaee000/0x0/0x4ffc00000, data 0x121e6c1/0x139e000, compress 0x0/0x0/0x0, omap 0x3fdbd, meta 0x3d30243), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 21389312 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 266 ms_handle_reset con 0x55b81dc12000 session 0x55b81db06380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 266 ms_handle_reset con 0x55b81dc13c00 session 0x55b81caec8c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138108928 unmapped: 20938752 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 267 ms_handle_reset con 0x55b8218e1000 session 0x55b81aa63500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 267 ms_handle_reset con 0x55b81d85a400 session 0x55b81ac4fa40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 267 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81ac2ba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 267 ms_handle_reset con 0x55b8218e1400 session 0x55b81da396c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138133504 unmapped: 20914176 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 267 heartbeat osd_stat(store_statfs(0x4faae3000/0x0/0x4ffc00000, data 0x122235d/0x13a5000, compress 0x0/0x0/0x0, omap 0x4042f, meta 0x3d2fbd1), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1842649 data_alloc: 218103808 data_used: 6803879
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 267 ms_handle_reset con 0x55b81dc12c00 session 0x55b81d095340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 267 handle_osd_map epochs [267,268], i have 267, src has [1,268]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 268 ms_handle_reset con 0x55b81dc12000 session 0x55b81de1a8c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138141696 unmapped: 20905984 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 268 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81de1b6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138141696 unmapped: 20905984 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 268 ms_handle_reset con 0x55b81d85a400 session 0x55b81b38ac40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 268 ms_handle_reset con 0x55b81dc12c00 session 0x55b81ac4f500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 21348352 heap: 159047680 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 268 ms_handle_reset con 0x55b81dc12000 session 0x55b81b443340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 268 ms_handle_reset con 0x55b8218e1400 session 0x55b81ced4700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 268 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81cef0c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138878976 unmapped: 27516928 heap: 166395904 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138878976 unmapped: 27516928 heap: 166395904 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 268 ms_handle_reset con 0x55b81dc12c00 session 0x55b81b887500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 268 heartbeat osd_stat(store_statfs(0x4fa58b000/0x0/0x4ffc00000, data 0x17790a1/0x1901000, compress 0x0/0x0/0x0, omap 0x408b0, meta 0x3d2f750), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1897009 data_alloc: 218103808 data_used: 6803977
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 269 ms_handle_reset con 0x55b81dc12000 session 0x55b81b8bec40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 269 ms_handle_reset con 0x55b81dc13c00 session 0x55b81b4f2000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138903552 unmapped: 27492352 heap: 166395904 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 269 heartbeat osd_stat(store_statfs(0x4fa58a000/0x0/0x4ffc00000, data 0x1779103/0x1902000, compress 0x0/0x0/0x0, omap 0x408b0, meta 0x3d2f750), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 269 ms_handle_reset con 0x55b8218e1800 session 0x55b81aa62fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 270 heartbeat osd_stat(store_statfs(0x4fa584000/0x0/0x4ffc00000, data 0x177ad1d/0x1906000, compress 0x0/0x0/0x0, omap 0x40ff5, meta 0x3d2f00b), peers [0,2] op hist [0,0,0,0,0,1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 270 ms_handle_reset con 0x55b8218e1000 session 0x55b81b38b6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 270 ms_handle_reset con 0x55b81dc12000 session 0x55b81da2fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138518528 unmapped: 27877376 heap: 166395904 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 270 ms_handle_reset con 0x55b81d85a400 session 0x55b81b443340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 270 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81b4f2a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.375273705s of 10.478682518s, submitted: 139
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 271 ms_handle_reset con 0x55b81dc13c00 session 0x55b81da39dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 271 ms_handle_reset con 0x55b81dc12c00 session 0x55b81d1368c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 271 ms_handle_reset con 0x55b81dc13c00 session 0x55b81a800700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 271 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81ac2ae00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138526720 unmapped: 27869184 heap: 166395904 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138526720 unmapped: 27869184 heap: 166395904 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 271 ms_handle_reset con 0x55b81d85a400 session 0x55b81b38bc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 271 ms_handle_reset con 0x55b81dc12000 session 0x55b81b86f880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138551296 unmapped: 27844608 heap: 166395904 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1908693 data_alloc: 218103808 data_used: 6804660
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 27828224 heap: 166395904 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 272 ms_handle_reset con 0x55b81dc12000 session 0x55b81ced4700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 272 ms_handle_reset con 0x55b81dc12c00 session 0x55b81b8bf180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 272 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81d4f2c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 272 ms_handle_reset con 0x55b81dc13c00 session 0x55b81b8bfdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 272 heartbeat osd_stat(store_statfs(0x4f9ad7000/0x0/0x4ffc00000, data 0x2223127/0x23b3000, compress 0x0/0x0/0x0, omap 0x42ab9, meta 0x3d2d547), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 272 ms_handle_reset con 0x55b8218e1c00 session 0x55b81ac2b880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138321920 unmapped: 34381824 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 272 handle_osd_map epochs [272,273], i have 272, src has [1,273]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 273 ms_handle_reset con 0x55b8218e1000 session 0x55b81d4f2540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 273 ms_handle_reset con 0x55b81d85a400 session 0x55b81aa63a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 273 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81ac056c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138338304 unmapped: 34365440 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 273 ms_handle_reset con 0x55b81dc12000 session 0x55b81b442a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 273 ms_handle_reset con 0x55b81dc12c00 session 0x55b81da2efc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138346496 unmapped: 34357248 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138346496 unmapped: 34357248 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 274 ms_handle_reset con 0x55b81dc13c00 session 0x55b81caecc40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1988471 data_alloc: 218103808 data_used: 6805387
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138354688 unmapped: 34349056 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 275 heartbeat osd_stat(store_statfs(0x4f9ad1000/0x0/0x4ffc00000, data 0x22268df/0x23b9000, compress 0x0/0x0/0x0, omap 0x4337e, meta 0x3d2cc82), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 275 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81aa71180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138403840 unmapped: 34299904 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 275 ms_handle_reset con 0x55b81d85a400 session 0x55b81df69500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.215954304s of 10.120274544s, submitted: 172
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138403840 unmapped: 34299904 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 138354688 unmapped: 34349056 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 275 heartbeat osd_stat(store_statfs(0x4f9ad0000/0x0/0x4ffc00000, data 0x2228308/0x23ba000, compress 0x0/0x0/0x0, omap 0x43b29, meta 0x3d2c4d7), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 275 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 275 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140656640 unmapped: 32047104 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2053231 data_alloc: 234881024 data_used: 16820189
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140656640 unmapped: 32047104 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 276 ms_handle_reset con 0x55b81dc12c00 session 0x55b81d4f2000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 277 heartbeat osd_stat(store_statfs(0x4f9acd000/0x0/0x4ffc00000, data 0x2229f14/0x23bd000, compress 0x0/0x0/0x0, omap 0x43edd, meta 0x3d2c123), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 277 ms_handle_reset con 0x55b81b8bd800 session 0x55b81aa62a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 277 ms_handle_reset con 0x55b81dc12000 session 0x55b81d370540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 32194560 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 32178176 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 277 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81df62c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 32169984 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 32153600 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2059076 data_alloc: 234881024 data_used: 17344394
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 32129024 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 278 ms_handle_reset con 0x55b81dc12c00 session 0x55b81a800fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 278 heartbeat osd_stat(store_statfs(0x4f9aca000/0x0/0x4ffc00000, data 0x222d53d/0x23c2000, compress 0x0/0x0/0x0, omap 0x44843, meta 0x3d2b7bd), peers [0,2] op hist [0,0,0,0,1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140582912 unmapped: 32120832 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.583833694s of 10.010669708s, submitted: 96
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 278 ms_handle_reset con 0x55b81d85a400 session 0x55b81b7abc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 278 ms_handle_reset con 0x55b81b8bd400 session 0x55b81cef0c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 278 ms_handle_reset con 0x55b81dc13c00 session 0x55b81ced56c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 140632064 unmapped: 32071680 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 278 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81b38a8c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147136512 unmapped: 25567232 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 279 ms_handle_reset con 0x55b81dc12000 session 0x55b81d136000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147537920 unmapped: 25165824 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 279 heartbeat osd_stat(store_statfs(0x4f8c4f000/0x0/0x4ffc00000, data 0x30a5c3a/0x323a000, compress 0x0/0x0/0x0, omap 0x44ee1, meta 0x3d2b11f), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 279 ms_handle_reset con 0x55b81d85a400 session 0x55b81aa62fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2157925 data_alloc: 234881024 data_used: 18143516
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147546112 unmapped: 25157632 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 279 ms_handle_reset con 0x55b81dc12c00 session 0x55b81da38c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 25837568 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 280 ms_handle_reset con 0x55b81d85a400 session 0x55b81dddda40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 146939904 unmapped: 25763840 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 281 ms_handle_reset con 0x55b81dc12000 session 0x55b81da39180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 146956288 unmapped: 25747456 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 282 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81da2fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 282 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81d230e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 282 ms_handle_reset con 0x55b81dc13c00 session 0x55b81dfaac40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147021824 unmapped: 25681920 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2122863 data_alloc: 234881024 data_used: 18290362
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 282 heartbeat osd_stat(store_statfs(0x4f93d9000/0x0/0x4ffc00000, data 0x291add7/0x2ab1000, compress 0x0/0x0/0x0, omap 0x45ab8, meta 0x3d2a548), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147021824 unmapped: 25681920 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 282 handle_osd_map epochs [284,284], i have 282, src has [1,284]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 282 handle_osd_map epochs [283,284], i have 282, src has [1,284]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147038208 unmapped: 25665536 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 284 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81da39c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147070976 unmapped: 25632768 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 284 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81b38aa80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147070976 unmapped: 25632768 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147079168 unmapped: 25624576 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 284 ms_handle_reset con 0x55b81d85a400 session 0x55b81ac4f340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.923368454s of 12.897541046s, submitted: 236
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2129695 data_alloc: 234881024 data_used: 18298554
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147079168 unmapped: 25624576 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 285 ms_handle_reset con 0x55b81dc12000 session 0x55b81b38b500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 285 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x2920046/0x2aba000, compress 0x0/0x0/0x0, omap 0x4602d, meta 0x3d29fd3), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147111936 unmapped: 25591808 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 286 ms_handle_reset con 0x55b81de50800 session 0x55b81dfab340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147111936 unmapped: 25591808 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 286 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81a771dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147120128 unmapped: 25583616 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147120128 unmapped: 25583616 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 286 ms_handle_reset con 0x55b81b7d5000 session 0x55b81de1a000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2145526 data_alloc: 234881024 data_used: 18314938
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147120128 unmapped: 25583616 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147120128 unmapped: 25583616 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 286 ms_handle_reset con 0x55b81d85a400 session 0x55b81d4f3340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 287 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81df636c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 287 ms_handle_reset con 0x55b81dc12000 session 0x55b81b7ab6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 287 heartbeat osd_stat(store_statfs(0x4f91c8000/0x0/0x4ffc00000, data 0x2b248eb/0x2cc2000, compress 0x0/0x0/0x0, omap 0x4698d, meta 0x3d29673), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147144704 unmapped: 25559040 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147169280 unmapped: 25534464 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 288 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81aa63180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 288 ms_handle_reset con 0x55b81b7d5000 session 0x55b81d231180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 288 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81a771340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 23117824 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 288 ms_handle_reset con 0x55b81d85a400 session 0x55b81d4f2c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.877266884s of 10.104944229s, submitted: 87
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 289 ms_handle_reset con 0x55b81e0e6000 session 0x55b81d4f3dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2261435 data_alloc: 234881024 data_used: 25721116
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 289 ms_handle_reset con 0x55b81b7d5000 session 0x55b81b38a380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 289 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81aa62a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 289 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81ac2a700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 162783232 unmapped: 9920512 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81d85a400 session 0x55b81aa63c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81e0e6000 session 0x55b81a801880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 162881536 unmapped: 9822208 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81b7d5000 session 0x55b81d4f3a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81b8be380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81ac2ae00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 290 heartbeat osd_stat(store_statfs(0x4f77d5000/0x0/0x4ffc00000, data 0x3374061/0x3517000, compress 0x0/0x0/0x0, omap 0x48101, meta 0x4ec7eff), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81d243c00 session 0x55b81b86f340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81d85a400 session 0x55b81d4f3340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81de2c000 session 0x55b81d4f2700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81b7d5000 session 0x55b81b38b880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154189824 unmapped: 18513920 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81d1368c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81ced4380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154189824 unmapped: 18513920 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81de2d000 session 0x55b81b38aa80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 290 ms_handle_reset con 0x55b81d243c00 session 0x55b81ac2b880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154198016 unmapped: 18505728 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2254156 data_alloc: 234881024 data_used: 25722711
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154206208 unmapped: 18497536 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 291 ms_handle_reset con 0x55b81b7d5000 session 0x55b81d1376c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154222592 unmapped: 18481152 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 292 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81aa63180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 292 heartbeat osd_stat(store_statfs(0x4f77c8000/0x0/0x4ffc00000, data 0x337788b/0x3520000, compress 0x0/0x0/0x0, omap 0x494f4, meta 0x4ec6b0c), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 292 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81ac2bdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 292 ms_handle_reset con 0x55b81de2c000 session 0x55b81b8bf180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154099712 unmapped: 18604032 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 292 ms_handle_reset con 0x55b81de2c000 session 0x55b81ab5d340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 292 ms_handle_reset con 0x55b81b7d5000 session 0x55b81b8be000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 292 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81d370540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 292 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81b887dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 292 ms_handle_reset con 0x55b81d243c00 session 0x55b81df69a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154124288 unmapped: 18579456 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 293 ms_handle_reset con 0x55b81b7d5000 session 0x55b8190f3340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 293 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81b86fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 293 ms_handle_reset con 0x55b81de2c000 session 0x55b81b8bfdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154140672 unmapped: 18563072 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.855691910s of 10.001125336s, submitted: 233
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2266773 data_alloc: 234881024 data_used: 25723037
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154148864 unmapped: 18554880 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 293 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81ac05c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 293 handle_osd_map epochs [293,294], i have 294, src has [1,294]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 294 ms_handle_reset con 0x55b81de4e000 session 0x55b81ac2b340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154140672 unmapped: 18563072 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 294 ms_handle_reset con 0x55b81cf19000 session 0x55b81caec700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 294 heartbeat osd_stat(store_statfs(0x4f77c6000/0x0/0x4ffc00000, data 0x337af08/0x3524000, compress 0x0/0x0/0x0, omap 0x4a405, meta 0x4ec5bfb), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154140672 unmapped: 18563072 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 294 ms_handle_reset con 0x55b81b7d5000 session 0x55b81de1ae00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154148864 unmapped: 18554880 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 294 heartbeat osd_stat(store_statfs(0x4f77c5000/0x0/0x4ffc00000, data 0x337af2b/0x3525000, compress 0x0/0x0/0x0, omap 0x4a405, meta 0x4ec5bfb), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 294 ms_handle_reset con 0x55b81de2c000 session 0x55b81a8016c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 294 heartbeat osd_stat(store_statfs(0x4f77c5000/0x0/0x4ffc00000, data 0x337af2b/0x3525000, compress 0x0/0x0/0x0, omap 0x4a489, meta 0x4ec5b77), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154607616 unmapped: 18096128 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 294 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81caedc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2274365 data_alloc: 234881024 data_used: 26831406
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 295 ms_handle_reset con 0x55b81d2acc00 session 0x55b81caed340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154542080 unmapped: 18161664 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 295 ms_handle_reset con 0x55b81d2acc00 session 0x55b81b38b6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 295 handle_osd_map epochs [296,296], i have 296, src has [1,296]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 296 ms_handle_reset con 0x55b81b7d5000 session 0x55b81dfaa1c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 296 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81da2efc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154550272 unmapped: 18153472 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 296 heartbeat osd_stat(store_statfs(0x4f77be000/0x0/0x4ffc00000, data 0x337e552/0x352a000, compress 0x0/0x0/0x0, omap 0x4ab7d, meta 0x4ec5483), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154566656 unmapped: 18137088 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 296 ms_handle_reset con 0x55b81dfe7000 session 0x55b81d4f2c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 297 ms_handle_reset con 0x55b81dfe7c00 session 0x55b81b38a700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154624000 unmapped: 18079744 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 297 ms_handle_reset con 0x55b81b7d5000 session 0x55b81caed880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 297 ms_handle_reset con 0x55b81dfe7c00 session 0x55b81df68540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 297 ms_handle_reset con 0x55b81d2acc00 session 0x55b81b38b6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 18006016 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 297 heartbeat osd_stat(store_statfs(0x4f77bd000/0x0/0x4ffc00000, data 0x3380142/0x352d000, compress 0x0/0x0/0x0, omap 0x4ae2d, meta 0x4ec51d3), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2282531 data_alloc: 234881024 data_used: 26835486
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 18006016 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.603608131s of 10.748081207s, submitted: 90
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 297 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81b442e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 297 ms_handle_reset con 0x55b81dfe7000 session 0x55b81d1376c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 155762688 unmapped: 16941056 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 155762688 unmapped: 16941056 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 298 heartbeat osd_stat(store_statfs(0x4f77ba000/0x0/0x4ffc00000, data 0x3381bc1/0x3530000, compress 0x0/0x0/0x0, omap 0x4b3ce, meta 0x4ec4c32), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 299 ms_handle_reset con 0x55b81b7d5000 session 0x55b81dfaac40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 299 ms_handle_reset con 0x55b81dfe7000 session 0x55b81d1368c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 299 ms_handle_reset con 0x55b81d2acc00 session 0x55b81b86f340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 156164096 unmapped: 16539648 heap: 172703744 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 299 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81ac2bdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 299 ms_handle_reset con 0x55b81dfe7c00 session 0x55b81caec700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 299 heartbeat osd_stat(store_statfs(0x4f76bb000/0x0/0x4ffc00000, data 0x347f75d/0x362f000, compress 0x0/0x0/0x0, omap 0x4b569, meta 0x4ec4a97), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 299 ms_handle_reset con 0x55b81b7d5000 session 0x55b81a8016c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 156745728 unmapped: 23838720 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2372891 data_alloc: 251658240 data_used: 28237955
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 299 ms_handle_reset con 0x55b81d2acc00 session 0x55b81df68380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 156745728 unmapped: 23838720 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 300 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81ac2b500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 300 ms_handle_reset con 0x55b81dfe7000 session 0x55b81b8868c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 300 ms_handle_reset con 0x55b81b2b5000 session 0x55b81da2e000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 156803072 unmapped: 23781376 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 300 ms_handle_reset con 0x55b81b7d5000 session 0x55b81ac2b6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 300 ms_handle_reset con 0x55b81d2acc00 session 0x55b81b38a000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 300 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81da39dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 156803072 unmapped: 23781376 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 300 ms_handle_reset con 0x55b81dfe7000 session 0x55b81b887500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 156819456 unmapped: 23764992 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 300 ms_handle_reset con 0x55b81b2b4000 session 0x55b81df68fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 301 ms_handle_reset con 0x55b81ceb5800 session 0x55b81caed340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158056448 unmapped: 22528000 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 301 heartbeat osd_stat(store_statfs(0x4f76b5000/0x0/0x4ffc00000, data 0x3482f05/0x3635000, compress 0x0/0x0/0x0, omap 0x4bdb1, meta 0x4ec424f), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 301 ms_handle_reset con 0x55b81b2b4000 session 0x55b81da39880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2317905 data_alloc: 251658240 data_used: 28237955
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 301 handle_osd_map epochs [301,302], i have 302, src has [1,302]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 302 ms_handle_reset con 0x55b81b7d5000 session 0x55b81a771340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158113792 unmapped: 22470656 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 302 ms_handle_reset con 0x55b81d2acc00 session 0x55b81da2fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158113792 unmapped: 22470656 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.525624275s of 10.878129005s, submitted: 80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 302 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81df681c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 302 heartbeat osd_stat(store_statfs(0x4f76b0000/0x0/0x4ffc00000, data 0x3484af5/0x3638000, compress 0x0/0x0/0x0, omap 0x4bec9, meta 0x4ec4137), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158154752 unmapped: 22429696 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 302 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81d1d81c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158162944 unmapped: 22421504 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 302 heartbeat osd_stat(store_statfs(0x4f76b3000/0x0/0x4ffc00000, data 0x3484b57/0x3639000, compress 0x0/0x0/0x0, omap 0x4bfd1, meta 0x4ec402f), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158203904 unmapped: 22380544 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2321464 data_alloc: 251658240 data_used: 28237955
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158212096 unmapped: 22372352 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 303 ms_handle_reset con 0x55b81b2b4000 session 0x55b81caeca80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158236672 unmapped: 22347776 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 304 ms_handle_reset con 0x55b81b7d5000 session 0x55b81b8bfdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158253056 unmapped: 22331392 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 304 heartbeat osd_stat(store_statfs(0x4f76a8000/0x0/0x4ffc00000, data 0x34881aa/0x363f000, compress 0x0/0x0/0x0, omap 0x4c58a, meta 0x4ec3a76), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158326784 unmapped: 22257664 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 305 ms_handle_reset con 0x55b81ceb5800 session 0x55b81da38c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 305 heartbeat osd_stat(store_statfs(0x4f76a8000/0x0/0x4ffc00000, data 0x3489d9a/0x3642000, compress 0x0/0x0/0x0, omap 0x4c9a9, meta 0x4ec3657), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 305 heartbeat osd_stat(store_statfs(0x4f76a8000/0x0/0x4ffc00000, data 0x3489d9a/0x3642000, compress 0x0/0x0/0x0, omap 0x4c9a9, meta 0x4ec3657), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158334976 unmapped: 22249472 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2329986 data_alloc: 251658240 data_used: 28238053
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 305 ms_handle_reset con 0x55b81d2acc00 session 0x55b81ac2b880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 305 heartbeat osd_stat(store_statfs(0x4f76a8000/0x0/0x4ffc00000, data 0x3489d9a/0x3642000, compress 0x0/0x0/0x0, omap 0x4c9a9, meta 0x4ec3657), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158375936 unmapped: 22208512 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158392320 unmapped: 22192128 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.073101997s of 10.037608147s, submitted: 70
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 306 ms_handle_reset con 0x55b81b2b4000 session 0x55b81d4f2540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158392320 unmapped: 22192128 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 306 ms_handle_reset con 0x55b81b7d5000 session 0x55b81b887dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158408704 unmapped: 22175744 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158408704 unmapped: 22175744 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2335478 data_alloc: 251658240 data_used: 28237955
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158408704 unmapped: 22175744 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 307 heartbeat osd_stat(store_statfs(0x4f76a4000/0x0/0x4ffc00000, data 0x348b845/0x3646000, compress 0x0/0x0/0x0, omap 0x4cc4f, meta 0x4ec33b1), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158425088 unmapped: 22159360 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 307 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81b86f6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 307 ms_handle_reset con 0x55b81ceb5800 session 0x55b81df2ca80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158433280 unmapped: 22151168 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 307 ms_handle_reset con 0x55b81dfe7000 session 0x55b81b8bf880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158441472 unmapped: 22142976 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 308 ms_handle_reset con 0x55b81b7d5000 session 0x55b81df69dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 308 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81cef0c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 308 ms_handle_reset con 0x55b81ceb5800 session 0x55b81df69a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158474240 unmapped: 22110208 heap: 180584448 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 309 ms_handle_reset con 0x55b81b2b4000 session 0x55b81d4f2a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2351429 data_alloc: 251658240 data_used: 28238698
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 309 heartbeat osd_stat(store_statfs(0x4f769c000/0x0/0x4ffc00000, data 0x348f027/0x364e000, compress 0x0/0x0/0x0, omap 0x4da20, meta 0x4ec25e0), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 166903808 unmapped: 17883136 heap: 184786944 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 309 ms_handle_reset con 0x55b81dfe8000 session 0x55b81caedc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 42876928 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.840897560s of 10.072703362s, submitted: 73
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 309 ms_handle_reset con 0x55b81b2b4000 session 0x55b81ac04c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 171294720 unmapped: 30285824 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 309 ms_handle_reset con 0x55b81ceb5800 session 0x55b81caece00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 309 heartbeat osd_stat(store_statfs(0x4f269b000/0x0/0x4ffc00000, data 0x8490ba5/0x864f000, compress 0x0/0x0/0x0, omap 0x4e052, meta 0x4ec1fae), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 162988032 unmapped: 38592512 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 310 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81da38000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 161865728 unmapped: 39714816 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 310 ms_handle_reset con 0x55b81ae88000 session 0x55b81a771dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 310 ms_handle_reset con 0x55b81b7d5000 session 0x55b81b7abc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3211625 data_alloc: 251658240 data_used: 31156545
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 166166528 unmapped: 35414016 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81e0a3800 session 0x55b81a8016c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 162086912 unmapped: 39493632 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81b442c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 311 heartbeat osd_stat(store_statfs(0x4eb093000/0x0/0x4ffc00000, data 0xfa9555d/0xfc57000, compress 0x0/0x0/0x0, omap 0x4eb77, meta 0x4ec1489), peers [0,2] op hist [0,0,0,0,0,0,0,1,0,1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 166518784 unmapped: 35061760 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81ae88000 session 0x55b81d4f2000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81b2b4000 session 0x55b81aa70fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81ceb5800 session 0x55b81d4f2700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 171065344 unmapped: 30515200 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81ae88000 session 0x55b81d1376c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 167092224 unmapped: 34488320 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4004094 data_alloc: 251658240 data_used: 31050014
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81e0e1800 session 0x55b81b38b340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81d0adc00 session 0x55b81df696c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163020800 unmapped: 38559744 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81dfab340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 311 ms_handle_reset con 0x55b81e0a3800 session 0x55b81cef1c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 312 ms_handle_reset con 0x55b81b2b4000 session 0x55b81df68380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163151872 unmapped: 38428672 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.410922527s of 10.001376152s, submitted: 236
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 312 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81d1d8e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 312 ms_handle_reset con 0x55b81ae88000 session 0x55b81b8bf6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 312 ms_handle_reset con 0x55b81d0adc00 session 0x55b81b4421c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163258368 unmapped: 38322176 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 312 ms_handle_reset con 0x55b81e0e1800 session 0x55b81b7aa380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 312 heartbeat osd_stat(store_statfs(0x4f7791000/0x0/0x4ffc00000, data 0x3399f1a/0x355a000, compress 0x0/0x0/0x0, omap 0x4fbe5, meta 0x4ec041b), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 312 ms_handle_reset con 0x55b81ae88000 session 0x55b81b38a8c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 313 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81ac04c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 313 ms_handle_reset con 0x55b81e0e1800 session 0x55b81de1a700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 313 ms_handle_reset con 0x55b81b2b4000 session 0x55b81b443180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160243712 unmapped: 41336832 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 313 ms_handle_reset con 0x55b81d0adc00 session 0x55b81da38c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 313 ms_handle_reset con 0x55b81ae88000 session 0x55b81da38000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 314 ms_handle_reset con 0x55b81d0adc00 session 0x55b81d371180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 314 heartbeat osd_stat(store_statfs(0x4f7ac4000/0x0/0x4ffc00000, data 0x2b57c0f/0x2d17000, compress 0x0/0x0/0x0, omap 0x50583, meta 0x4ebfa7d), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 314 ms_handle_reset con 0x55b81b2b4000 session 0x55b81b8bf180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160268288 unmapped: 41312256 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2383543 data_alloc: 251658240 data_used: 27831667
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160284672 unmapped: 41295872 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 314 handle_osd_map epochs [314,315], i have 314, src has [1,315]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 315 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81ac048c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 315 ms_handle_reset con 0x55b81e0e1800 session 0x55b81aa62380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160358400 unmapped: 41222144 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160358400 unmapped: 41222144 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 315 ms_handle_reset con 0x55b81e0e4c00 session 0x55b81aa62c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 315 ms_handle_reset con 0x55b81e0e5400 session 0x55b81d4f3c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 315 ms_handle_reset con 0x55b81e0e1800 session 0x55b81caedc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 315 ms_handle_reset con 0x55b81ae88000 session 0x55b81caec000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 148447232 unmapped: 53133312 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 148447232 unmapped: 53133312 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2199405 data_alloc: 234881024 data_used: 9970524
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 315 heartbeat osd_stat(store_statfs(0x4f915e000/0x0/0x4ffc00000, data 0x19cb312/0x1b8e000, compress 0x0/0x0/0x0, omap 0x514f5, meta 0x4ebeb0b), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 315 ms_handle_reset con 0x55b81b2b4000 session 0x55b81ac4ea80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 148201472 unmapped: 53379072 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 315 ms_handle_reset con 0x55b81ae88000 session 0x55b81ac4f880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81e0e5400 session 0x55b81df681c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81da2f880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81d0adc00 session 0x55b81a800c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81dfe6c00 session 0x55b81df68000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144801792 unmapped: 56778752 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144801792 unmapped: 56778752 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144801792 unmapped: 56778752 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f87b8000/0x0/0x4ffc00000, data 0x236ec09/0x2532000, compress 0x0/0x0/0x0, omap 0x51d16, meta 0x4ebe2ea), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144801792 unmapped: 56778752 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2262318 data_alloc: 218103808 data_used: 7344476
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144801792 unmapped: 56778752 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144801792 unmapped: 56778752 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144801792 unmapped: 56778752 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81ae88000 session 0x55b81dfaa380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81b8be380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81d0adc00 session 0x55b81b38bc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81e0db800 session 0x55b81d4f3a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.971589088s of 16.643096924s, submitted: 298
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81e0e5400 session 0x55b81d4f2a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81ae88000 session 0x55b81b86f340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81a8016c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81b8bd400 session 0x55b81aa63880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81d0adc00 session 0x55b81dfaaa80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144400384 unmapped: 57180160 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81e0db800 session 0x55b81da39340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f821a000/0x0/0x4ffc00000, data 0x290cc94/0x2ad2000, compress 0x0/0x0/0x0, omap 0x51fec, meta 0x4ebe014), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144400384 unmapped: 57180160 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81ae88000 session 0x55b81b8bec40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81ac4fdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f821a000/0x0/0x4ffc00000, data 0x290cccd/0x2ad2000, compress 0x0/0x0/0x0, omap 0x5202e, meta 0x4ebdfd2), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2297489 data_alloc: 218103808 data_used: 6820188
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 57171968 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81b8bd400 session 0x55b81b442a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 57171968 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 57171968 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 57171968 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81d0adc00 session 0x55b81b38ba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 57171968 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81e0e5400 session 0x55b81b8be000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2297489 data_alloc: 218103808 data_used: 6820188
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f821b000/0x0/0x4ffc00000, data 0x290cc6b/0x2ad1000, compress 0x0/0x0/0x0, omap 0x521ba, meta 0x4ebde46), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 57171968 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 ms_handle_reset con 0x55b81ae88000 session 0x55b81d7156c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144769024 unmapped: 56811520 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144777216 unmapped: 56803328 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 56385536 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 56385536 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2331044 data_alloc: 234881024 data_used: 11585884
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 56385536 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f81f0000/0x0/0x4ffc00000, data 0x2936c8e/0x2afc000, compress 0x0/0x0/0x0, omap 0x521ba, meta 0x4ebde46), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 56385536 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 56385536 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 56385536 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f81f0000/0x0/0x4ffc00000, data 0x2936c8e/0x2afc000, compress 0x0/0x0/0x0, omap 0x521ba, meta 0x4ebde46), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145211392 unmapped: 56369152 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2331044 data_alloc: 234881024 data_used: 11585884
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145211392 unmapped: 56369152 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145211392 unmapped: 56369152 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.454809189s of 18.707763672s, submitted: 55
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f81f0000/0x0/0x4ffc00000, data 0x2936c8e/0x2afc000, compress 0x0/0x0/0x0, omap 0x521ba, meta 0x4ebde46), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147931136 unmapped: 53649408 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147505152 unmapped: 54075392 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f7d80000/0x0/0x4ffc00000, data 0x2da5c8e/0x2f6b000, compress 0x0/0x0/0x0, omap 0x521ba, meta 0x4ebde46), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147554304 unmapped: 54026240 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2380002 data_alloc: 234881024 data_used: 12450140
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147554304 unmapped: 54026240 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147554304 unmapped: 54026240 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147554304 unmapped: 54026240 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147554304 unmapped: 54026240 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147554304 unmapped: 54026240 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f7d5b000/0x0/0x4ffc00000, data 0x2dcac8e/0x2f90000, compress 0x0/0x0/0x0, omap 0x521ba, meta 0x4ebde46), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2380386 data_alloc: 234881024 data_used: 12454236
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147587072 unmapped: 53993472 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147587072 unmapped: 53993472 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.821453094s of 10.782118797s, submitted: 65
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147587072 unmapped: 53993472 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147587072 unmapped: 53993472 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f7d59000/0x0/0x4ffc00000, data 0x2dcdc8e/0x2f93000, compress 0x0/0x0/0x0, omap 0x521ba, meta 0x4ebde46), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147849216 unmapped: 53731328 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 317 ms_handle_reset con 0x55b81a12d800 session 0x55b81d1d8e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 317 ms_handle_reset con 0x55b81a12c400 session 0x55b81b86f6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2388659 data_alloc: 234881024 data_used: 12470636
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 317 handle_osd_map epochs [317,318], i have 317, src has [1,318]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147881984 unmapped: 53698560 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 318 ms_handle_reset con 0x55b81bb32400 session 0x55b81dfaba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147881984 unmapped: 53698560 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147947520 unmapped: 53633024 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 319 ms_handle_reset con 0x55b81d49c400 session 0x55b81de1bc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 319 heartbeat osd_stat(store_statfs(0x4f7d0d000/0x0/0x4ffc00000, data 0x2e13089/0x2fdf000, compress 0x0/0x0/0x0, omap 0x52ca2, meta 0x4ebd35e), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147955712 unmapped: 53624832 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 319 ms_handle_reset con 0x55b81a12c400 session 0x55b81df68380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147980288 unmapped: 53600256 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 320 ms_handle_reset con 0x55b81a12d800 session 0x55b81da2e540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2398955 data_alloc: 234881024 data_used: 12471221
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 320 ms_handle_reset con 0x55b81ae88000 session 0x55b81ac4e380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 147988480 unmapped: 53592064 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 321 ms_handle_reset con 0x55b81bb32400 session 0x55b81de1bdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 321 ms_handle_reset con 0x55b81d49d000 session 0x55b81b442c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 148127744 unmapped: 53452800 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 148127744 unmapped: 53452800 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.738044739s of 10.135429382s, submitted: 87
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 322 ms_handle_reset con 0x55b81a12c400 session 0x55b81b38ae00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 322 heartbeat osd_stat(store_statfs(0x4f7d07000/0x0/0x4ffc00000, data 0x2e167b5/0x2fe3000, compress 0x0/0x0/0x0, omap 0x53ad3, meta 0x4ebc52d), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 148160512 unmapped: 53420032 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 322 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x2e183a5/0x2fe6000, compress 0x0/0x0/0x0, omap 0x53acf, meta 0x4ebc531), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 322 heartbeat osd_stat(store_statfs(0x4f7d02000/0x0/0x4ffc00000, data 0x2e183a5/0x2fe6000, compress 0x0/0x0/0x0, omap 0x53acf, meta 0x4ebc531), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 148193280 unmapped: 53387264 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 322 ms_handle_reset con 0x55b81d0adc00 session 0x55b81ac056c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 322 ms_handle_reset con 0x55b81a12d800 session 0x55b81da39880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 322 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81cef0c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 322 ms_handle_reset con 0x55b81b8bd400 session 0x55b81b4f2000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2402597 data_alloc: 234881024 data_used: 12471221
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145383424 unmapped: 56197120 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 322 ms_handle_reset con 0x55b81b8bd400 session 0x55b81b38ac40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 322 handle_osd_map epochs [322,323], i have 323, src has [1,323]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 323 heartbeat osd_stat(store_statfs(0x4f8760000/0x0/0x4ffc00000, data 0x23bad9f/0x2588000, compress 0x0/0x0/0x0, omap 0x5441e, meta 0x4ebbbe2), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145391616 unmapped: 56188928 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 323 ms_handle_reset con 0x55b81a12c400 session 0x55b81caed180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 323 ms_handle_reset con 0x55b81a12d800 session 0x55b81b443340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 323 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81da388c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145391616 unmapped: 56188928 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145391616 unmapped: 56188928 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145391616 unmapped: 56188928 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 323 heartbeat osd_stat(store_statfs(0x4f8762000/0x0/0x4ffc00000, data 0x23badd2/0x258a000, compress 0x0/0x0/0x0, omap 0x5441e, meta 0x4ebbbe2), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2303138 data_alloc: 218103808 data_used: 7093632
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145391616 unmapped: 56188928 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 323 ms_handle_reset con 0x55b81bb32400 session 0x55b81ac4fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145391616 unmapped: 56188928 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 323 ms_handle_reset con 0x55b81a12c400 session 0x55b81d4f2700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145424384 unmapped: 56156160 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.591893196s of 10.222195625s, submitted: 113
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 324 ms_handle_reset con 0x55b81a12d800 session 0x55b81b8bf340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 145424384 unmapped: 56156160 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 324 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81d1d88c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 325 ms_handle_reset con 0x55b81b8bd400 session 0x55b81d4f28c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 325 ms_handle_reset con 0x55b81d49d400 session 0x55b81ac2a700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 325 ms_handle_reset con 0x55b81a12c400 session 0x55b81ac4e1c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 325 heartbeat osd_stat(store_statfs(0x4f92fc000/0x0/0x4ffc00000, data 0x181b54e/0x19ec000, compress 0x0/0x0/0x0, omap 0x5503e, meta 0x4ebafc2), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144474112 unmapped: 57106432 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2242967 data_alloc: 218103808 data_used: 7093578
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144343040 unmapped: 57237504 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 325 heartbeat osd_stat(store_statfs(0x4f92fc000/0x0/0x4ffc00000, data 0x181b54e/0x19ec000, compress 0x0/0x0/0x0, omap 0x5503e, meta 0x4ebafc2), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 326 ms_handle_reset con 0x55b81a12d800 session 0x55b81caec700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 326 ms_handle_reset con 0x55b81d0adc00 session 0x55b81d715a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 326 ms_handle_reset con 0x55b81ae88000 session 0x55b81d4f36c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 326 ms_handle_reset con 0x55b81b8bcc00 session 0x55b81de1b6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144400384 unmapped: 57180160 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 326 heartbeat osd_stat(store_statfs(0x4f92fb000/0x0/0x4ffc00000, data 0x181d135/0x19ee000, compress 0x0/0x0/0x0, omap 0x557e3, meta 0x4eba81d), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144400384 unmapped: 57180160 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144400384 unmapped: 57180160 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 326 ms_handle_reset con 0x55b81a12c400 session 0x55b81caedc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 326 ms_handle_reset con 0x55b81a12d800 session 0x55b81aa62c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 57171968 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2241151 data_alloc: 218103808 data_used: 6832874
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 57171968 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 326 ms_handle_reset con 0x55b81ae88000 session 0x55b81ac2bdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81d0adc00 session 0x55b81d370fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81b8bd400 session 0x55b81ac4e540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81a12c400 session 0x55b81df696c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81a12d800 session 0x55b81b887a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 57827328 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81ae88000 session 0x55b81d370c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 143622144 unmapped: 57958400 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 heartbeat osd_stat(store_statfs(0x4f87b9000/0x0/0x4ffc00000, data 0x2361b51/0x2533000, compress 0x0/0x0/0x0, omap 0x55de0, meta 0x4eba220), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.735248566s of 10.000020027s, submitted: 105
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81d0adc00 session 0x55b81a771340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81b8b7800 session 0x55b81de1a700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 57819136 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81a12c400 session 0x55b81de1bc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 57819136 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 heartbeat osd_stat(store_statfs(0x4f87b9000/0x0/0x4ffc00000, data 0x2361b51/0x2533000, compress 0x0/0x0/0x0, omap 0x55de0, meta 0x4eba220), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2314965 data_alloc: 218103808 data_used: 6837143
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 57819136 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81a12d800 session 0x55b81b8bf180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 57819136 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81ae88000 session 0x55b81d7156c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81d0adc00 session 0x55b81d371180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81b8b7400 session 0x55b81b442a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81a12c400 session 0x55b81caec700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81a12d800 session 0x55b81b4f3dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81ae88000 session 0x55b81a800540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 ms_handle_reset con 0x55b81b8b7400 session 0x55b81b8bf880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 56647680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 heartbeat osd_stat(store_statfs(0x4f80d7000/0x0/0x4ffc00000, data 0x2a40be6/0x2c15000, compress 0x0/0x0/0x0, omap 0x55f72, meta 0x4eba08e), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 144932864 unmapped: 56647680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81e0a5000 session 0x55b81b86e700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 151887872 unmapped: 49692672 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2436351 data_alloc: 234881024 data_used: 18606487
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 151887872 unmapped: 49692672 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 heartbeat osd_stat(store_statfs(0x4f80d2000/0x0/0x4ffc00000, data 0x2a42782/0x2c18000, compress 0x0/0x0/0x0, omap 0x56088, meta 0x4eb9f78), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 151887872 unmapped: 49692672 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 151887872 unmapped: 49692672 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 151887872 unmapped: 49692672 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81a12c400 session 0x55b81aa63a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.050866127s of 11.244613647s, submitted: 45
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81a12d800 session 0x55b81ac05dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 156082176 unmapped: 45498368 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81ae88000 session 0x55b81b38a000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81d4b1000 session 0x55b81d231180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81b8b7400 session 0x55b81d4f3a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81d4b0400 session 0x55b81ac2a700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2610366 data_alloc: 234881024 data_used: 18606487
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152076288 unmapped: 49504256 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 heartbeat osd_stat(store_statfs(0x4f60d3000/0x0/0x4ffc00000, data 0x4a427e4/0x4c19000, compress 0x0/0x0/0x0, omap 0x56634, meta 0x4eb99cc), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152109056 unmapped: 49471488 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 heartbeat osd_stat(store_statfs(0x4f60d3000/0x0/0x4ffc00000, data 0x4a427e4/0x4c19000, compress 0x0/0x0/0x0, omap 0x56634, meta 0x4eb99cc), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 152109056 unmapped: 49471488 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160849920 unmapped: 40730624 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81a12c400 session 0x55b81ac05180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 161832960 unmapped: 39747584 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81a12d800 session 0x55b81ab5ce00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2702600 data_alloc: 234881024 data_used: 19549079
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160407552 unmapped: 41172992 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81ae88000 session 0x55b81aa62380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 heartbeat osd_stat(store_statfs(0x4f5339000/0x0/0x4ffc00000, data 0x57dc7e4/0x59b3000, compress 0x0/0x0/0x0, omap 0x56634, meta 0x4eb99cc), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81b8b7400 session 0x55b81d370700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160415744 unmapped: 41164800 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 160587776 unmapped: 40992768 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 166952960 unmapped: 34627584 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81ae88000 session 0x55b81b38b340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81a12c400 session 0x55b81dfaba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81a12d800 session 0x55b81de1a540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 166952960 unmapped: 34627584 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.234261513s of 10.919887543s, submitted: 137
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81d4b0400 session 0x55b81dfaa540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2747274 data_alloc: 234881024 data_used: 26745751
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 167264256 unmapped: 34316288 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81d4b1000 session 0x55b81ac4e540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81a12c400 session 0x55b81aa62c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 heartbeat osd_stat(store_statfs(0x4f5316000/0x0/0x4ffc00000, data 0x57fe7e4/0x59d5000, compress 0x0/0x0/0x0, omap 0x5671f, meta 0x4eb98e1), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 167436288 unmapped: 34144256 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81ae88000 session 0x55b81caec380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81a12d800 session 0x55b81b38ba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81d4b0400 session 0x55b81de1aa80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 33832960 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 ms_handle_reset con 0x55b81d243c00 session 0x55b81da2e700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 167878656 unmapped: 33701888 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 329 ms_handle_reset con 0x55b81d243c00 session 0x55b81b38b880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 167878656 unmapped: 33701888 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2756034 data_alloc: 234881024 data_used: 26915700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 330 ms_handle_reset con 0x55b81a12d800 session 0x55b81d370380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 330 heartbeat osd_stat(store_statfs(0x4f52ee000/0x0/0x4ffc00000, data 0x5824381/0x59fc000, compress 0x0/0x0/0x0, omap 0x568fb, meta 0x4eb9705), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 170696704 unmapped: 30883840 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 330 ms_handle_reset con 0x55b81ae88000 session 0x55b81ac2b180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 330 ms_handle_reset con 0x55b81d4b0400 session 0x55b81caedc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 331 ms_handle_reset con 0x55b81d242800 session 0x55b81aa71500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 169697280 unmapped: 31883264 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 331 ms_handle_reset con 0x55b81d242800 session 0x55b81caec000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 169902080 unmapped: 31678464 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 331 handle_osd_map epochs [331,332], i have 331, src has [1,332]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 332 ms_handle_reset con 0x55b81a12d800 session 0x55b81aa70c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 169918464 unmapped: 31662080 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 332 ms_handle_reset con 0x55b81ae88000 session 0x55b81cb93c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 332 ms_handle_reset con 0x55b81d4b0400 session 0x55b81df2ca80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 333 ms_handle_reset con 0x55b81d243c00 session 0x55b81d4f3dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 333 ms_handle_reset con 0x55b81dfe7c00 session 0x55b81da2fdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 168656896 unmapped: 32923648 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.799221992s of 10.074870110s, submitted: 176
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 333 ms_handle_reset con 0x55b81a12d800 session 0x55b81a801880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2715384 data_alloc: 234881024 data_used: 24393076
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 333 handle_osd_map epochs [333,334], i have 333, src has [1,334]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 334 ms_handle_reset con 0x55b81ae88000 session 0x55b81aa636c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 334 handle_osd_map epochs [334,335], i have 334, src has [1,335]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 168665088 unmapped: 32915456 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81d242800 session 0x55b81d4f2540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 335 heartbeat osd_stat(store_statfs(0x4f59ae000/0x0/0x4ffc00000, data 0x515c8c8/0x533a000, compress 0x0/0x0/0x0, omap 0x58535, meta 0x4eb7acb), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 168697856 unmapped: 32882688 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81a12c400 session 0x55b81ab5ce00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81b8b6c00 session 0x55b81b38b500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81d0adc00 session 0x55b81aa71180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 159563776 unmapped: 42016768 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81a12d800 session 0x55b81de1ae00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81ae88000 session 0x55b81de1a700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81d242800 session 0x55b81d4f3340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81d242800 session 0x55b81dfaaa80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 335 ms_handle_reset con 0x55b81dfe6000 session 0x55b81dfaa380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 157982720 unmapped: 43597824 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173514752 unmapped: 28065792 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2582725 data_alloc: 234881024 data_used: 11664209
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 166223872 unmapped: 35356672 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 336 heartbeat osd_stat(store_statfs(0x4f563c000/0x0/0x4ffc00000, data 0x43238f7/0x4500000, compress 0x0/0x0/0x0, omap 0x58e30, meta 0x60571d0), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 336 ms_handle_reset con 0x55b81e0a3800 session 0x55b81b8bf6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 166535168 unmapped: 35045376 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 166535168 unmapped: 35045376 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 336 handle_osd_map epochs [336,337], i have 336, src has [1,337]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 337 ms_handle_reset con 0x55b81de2d000 session 0x55b81ac2ae00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163782656 unmapped: 37797888 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 337 ms_handle_reset con 0x55b81dc13400 session 0x55b81de1ae00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 337 ms_handle_reset con 0x55b81dc12800 session 0x55b81ac2b180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163782656 unmapped: 37797888 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.958495140s of 10.005329132s, submitted: 242
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 337 ms_handle_reset con 0x55b81d242800 session 0x55b81ac4e8c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 337 heartbeat osd_stat(store_statfs(0x4f4bfa000/0x0/0x4ffc00000, data 0x4d70f6a/0x4f50000, compress 0x0/0x0/0x0, omap 0x597c1, meta 0x605683f), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2650452 data_alloc: 234881024 data_used: 11737937
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 337 ms_handle_reset con 0x55b81de2d000 session 0x55b81cef1dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163782656 unmapped: 37797888 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 338 heartbeat osd_stat(store_statfs(0x4f4bf8000/0x0/0x4ffc00000, data 0x4d7101c/0x4f52000, compress 0x0/0x0/0x0, omap 0x59c86, meta 0x605637a), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 338 ms_handle_reset con 0x55b81dfe6000 session 0x55b81da38540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163807232 unmapped: 37773312 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 338 ms_handle_reset con 0x55b81e0a3800 session 0x55b81dfaba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163815424 unmapped: 37765120 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81e0a3800 session 0x55b81df2ca80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 339 heartbeat osd_stat(store_statfs(0x4f4bf4000/0x0/0x4ffc00000, data 0x4d745f9/0x4f56000, compress 0x0/0x0/0x0, omap 0x5a957, meta 0x60556a9), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81dc12800 session 0x55b81d4f2700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81d242800 session 0x55b81ac041c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163831808 unmapped: 37748736 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81de2d000 session 0x55b81d230c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81d243000 session 0x55b81d231180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81d242000 session 0x55b81d136000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81d242800 session 0x55b81ac4fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81dc12800 session 0x55b81df68e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163913728 unmapped: 37666816 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81de2d000 session 0x55b81cb93c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2646478 data_alloc: 234881024 data_used: 11645777
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81e0a3800 session 0x55b81d4f2540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 339 ms_handle_reset con 0x55b81d242000 session 0x55b81b38b500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163913728 unmapped: 37666816 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 339 heartbeat osd_stat(store_statfs(0x4f4c1b000/0x0/0x4ffc00000, data 0x4d505ea/0x4f31000, compress 0x0/0x0/0x0, omap 0x5ab49, meta 0x60554b7), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163913728 unmapped: 37666816 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 339 handle_osd_map epochs [339,340], i have 339, src has [1,340]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 340 ms_handle_reset con 0x55b81de2d000 session 0x55b81da38000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 165101568 unmapped: 36478976 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 340 ms_handle_reset con 0x55b81dc13800 session 0x55b81ac05dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 340 ms_handle_reset con 0x55b81dfe6000 session 0x55b81dddda40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 165109760 unmapped: 36470784 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 340 heartbeat osd_stat(store_statfs(0x4f4c14000/0x0/0x4ffc00000, data 0x4d52266/0x4f36000, compress 0x0/0x0/0x0, omap 0x5b920, meta 0x60546e0), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164716544 unmapped: 36864000 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 340 ms_handle_reset con 0x55b81dc12c00 session 0x55b81df68380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.531381607s of 10.002674103s, submitted: 119
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 340 ms_handle_reset con 0x55b81d242000 session 0x55b81ac4f6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2715559 data_alloc: 234881024 data_used: 21496246
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 340 handle_osd_map epochs [340,341], i have 341, src has [1,341]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164675584 unmapped: 36904960 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 341 ms_handle_reset con 0x55b81de2d000 session 0x55b81df69a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 342 ms_handle_reset con 0x55b81dfe6000 session 0x55b81b86f6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164683776 unmapped: 36896768 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 342 ms_handle_reset con 0x55b81dc13c00 session 0x55b81de1bc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 342 ms_handle_reset con 0x55b81dc12000 session 0x55b81caecc40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 36880384 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 342 heartbeat osd_stat(store_statfs(0x4f4c0e000/0x0/0x4ffc00000, data 0x4d55811/0x4f3a000, compress 0x0/0x0/0x0, omap 0x5c5fb, meta 0x6053a05), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 342 ms_handle_reset con 0x55b81d242000 session 0x55b81d094c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164782080 unmapped: 36798464 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 342 ms_handle_reset con 0x55b81dc13c00 session 0x55b81d370380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 342 heartbeat osd_stat(store_statfs(0x4f4c0e000/0x0/0x4ffc00000, data 0x4d55811/0x4f3a000, compress 0x0/0x0/0x0, omap 0x5c5fb, meta 0x6053a05), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164790272 unmapped: 36790272 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718561 data_alloc: 234881024 data_used: 21496148
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164790272 unmapped: 36790272 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172425216 unmapped: 29155328 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 342 heartbeat osd_stat(store_statfs(0x4f4a75000/0x0/0x4ffc00000, data 0x4ef37af/0x50d7000, compress 0x0/0x0/0x0, omap 0x5c367, meta 0x6053c99), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172523520 unmapped: 29057024 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172580864 unmapped: 28999680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172580864 unmapped: 28999680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 342 heartbeat osd_stat(store_statfs(0x4f44ef000/0x0/0x4ffc00000, data 0x54797af/0x565d000, compress 0x0/0x0/0x0, omap 0x5c367, meta 0x6053c99), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2778079 data_alloc: 234881024 data_used: 22271316
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.351642609s of 10.747139931s, submitted: 192
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172580864 unmapped: 28999680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172580864 unmapped: 28999680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172580864 unmapped: 28999680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172580864 unmapped: 28999680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 343 heartbeat osd_stat(store_statfs(0x4f44ea000/0x0/0x4ffc00000, data 0x547b22e/0x5660000, compress 0x0/0x0/0x0, omap 0x5c501, meta 0x6053aff), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 343 ms_handle_reset con 0x55b81de2d000 session 0x55b81df636c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172580864 unmapped: 28999680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 343 ms_handle_reset con 0x55b81dfe6000 session 0x55b81ac2b340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2783380 data_alloc: 234881024 data_used: 22271316
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172589056 unmapped: 28991488 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 343 ms_handle_reset con 0x55b81b889c00 session 0x55b81d4f28c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172777472 unmapped: 28803072 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 344 ms_handle_reset con 0x55b81b889c00 session 0x55b81b8bf180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172785664 unmapped: 28794880 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 345 ms_handle_reset con 0x55b81d242000 session 0x55b81b886000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172785664 unmapped: 28794880 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 345 ms_handle_reset con 0x55b81d1f8000 session 0x55b81ac048c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172785664 unmapped: 28794880 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 346 heartbeat osd_stat(store_statfs(0x4f44e0000/0x0/0x4ffc00000, data 0x54809d8/0x566a000, compress 0x0/0x0/0x0, omap 0x5d077, meta 0x6052f89), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 346 ms_handle_reset con 0x55b81dc13c00 session 0x55b81ac05dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2796399 data_alloc: 234881024 data_used: 22382420
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 28786688 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.081931114s of 10.227938652s, submitted: 53
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 346 ms_handle_reset con 0x55b81de2d000 session 0x55b81cb93c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 28770304 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172810240 unmapped: 28770304 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 346 ms_handle_reset con 0x55b81d1f8000 session 0x55b81a7708c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 347 ms_handle_reset con 0x55b81d242000 session 0x55b81df68fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 347 heartbeat osd_stat(store_statfs(0x4f44df000/0x0/0x4ffc00000, data 0x5483566/0x566d000, compress 0x0/0x0/0x0, omap 0x5d7cd, meta 0x6052833), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172826624 unmapped: 28753920 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 348 ms_handle_reset con 0x55b81dc13c00 session 0x55b81d715c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 348 ms_handle_reset con 0x55b81b889c00 session 0x55b81da38540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172834816 unmapped: 28745728 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 348 heartbeat osd_stat(store_statfs(0x4f44d5000/0x0/0x4ffc00000, data 0x5486d9a/0x5675000, compress 0x0/0x0/0x0, omap 0x5df33, meta 0x60520cd), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2808395 data_alloc: 234881024 data_used: 22382420
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172834816 unmapped: 28745728 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 348 heartbeat osd_stat(store_statfs(0x4f44d5000/0x0/0x4ffc00000, data 0x5486d9a/0x5675000, compress 0x0/0x0/0x0, omap 0x5df33, meta 0x60520cd), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172834816 unmapped: 28745728 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 348 handle_osd_map epochs [348,349], i have 348, src has [1,349]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 349 heartbeat osd_stat(store_statfs(0x4f44d8000/0x0/0x4ffc00000, data 0x5486d38/0x5674000, compress 0x0/0x0/0x0, omap 0x5e0f8, meta 0x6051f08), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 349 ms_handle_reset con 0x55b81dfe6000 session 0x55b81caeda40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172900352 unmapped: 28680192 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173015040 unmapped: 28565504 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 349 ms_handle_reset con 0x55b81dfe6000 session 0x55b81df056c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173015040 unmapped: 28565504 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2825658 data_alloc: 234881024 data_used: 23587328
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 350 heartbeat osd_stat(store_statfs(0x4f44d2000/0x0/0x4ffc00000, data 0x548898a/0x5678000, compress 0x0/0x0/0x0, omap 0x5e471, meta 0x6051b8f), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173023232 unmapped: 28557312 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.445651054s of 10.584459305s, submitted: 81
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 350 ms_handle_reset con 0x55b81d1f8000 session 0x55b81b442e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173023232 unmapped: 28557312 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 350 handle_osd_map epochs [350,351], i have 350, src has [1,351]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 351 ms_handle_reset con 0x55b81d242000 session 0x55b81de1b6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 351 ms_handle_reset con 0x55b81de50400 session 0x55b81b886e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 351 heartbeat osd_stat(store_statfs(0x4f44cd000/0x0/0x4ffc00000, data 0x548a47b/0x567d000, compress 0x0/0x0/0x0, omap 0x5ec0f, meta 0x60513f1), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173088768 unmapped: 28491776 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 351 handle_osd_map epochs [351,352], i have 351, src has [1,352]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 351 handle_osd_map epochs [352,352], i have 352, src has [1,352]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 352 ms_handle_reset con 0x55b81dc13c00 session 0x55b81aa71180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 352 ms_handle_reset con 0x55b81de4f400 session 0x55b81d715880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 352 ms_handle_reset con 0x55b81b889c00 session 0x55b81ac04e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173113344 unmapped: 28467200 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 352 ms_handle_reset con 0x55b81d242000 session 0x55b81ac2aa80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173113344 unmapped: 28467200 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 353 ms_handle_reset con 0x55b81de50400 session 0x55b81dfaa000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2851826 data_alloc: 234881024 data_used: 23583477
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172924928 unmapped: 28655616 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 354 ms_handle_reset con 0x55b81dfe6000 session 0x55b81b886a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 354 ms_handle_reset con 0x55b81b889c00 session 0x55b81d370c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 354 ms_handle_reset con 0x55b81d1f8000 session 0x55b81da2f880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172949504 unmapped: 28631040 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 354 heartbeat osd_stat(store_statfs(0x4f44bb000/0x0/0x4ffc00000, data 0x5491a85/0x568f000, compress 0x0/0x0/0x0, omap 0x5fa6a, meta 0x6050596), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172949504 unmapped: 28631040 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 354 heartbeat osd_stat(store_statfs(0x4f44bb000/0x0/0x4ffc00000, data 0x5491a85/0x568f000, compress 0x0/0x0/0x0, omap 0x5fa6a, meta 0x6050596), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 354 ms_handle_reset con 0x55b81de4f400 session 0x55b81b4f3180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174055424 unmapped: 27525120 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 354 handle_osd_map epochs [354,355], i have 355, src has [1,355]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 355 ms_handle_reset con 0x55b81dfe9800 session 0x55b81b887500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173973504 unmapped: 27607040 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 356 ms_handle_reset con 0x55b81de50400 session 0x55b81ac4e540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 356 ms_handle_reset con 0x55b81b889c00 session 0x55b81a8008c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 356 ms_handle_reset con 0x55b81d242000 session 0x55b81aa63c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2891557 data_alloc: 234881024 data_used: 23587589
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174014464 unmapped: 27566080 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 356 heartbeat osd_stat(store_statfs(0x4f44ad000/0x0/0x4ffc00000, data 0x56c5772/0x569b000, compress 0x0/0x0/0x0, omap 0x6002a, meta 0x604ffd6), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 356 ms_handle_reset con 0x55b81de4f400 session 0x55b81b8bea80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.216485977s of 10.411639214s, submitted: 68
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174014464 unmapped: 27566080 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 356 ms_handle_reset con 0x55b81dfe9800 session 0x55b81da39dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 356 ms_handle_reset con 0x55b81de50400 session 0x55b81a800700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 357 ms_handle_reset con 0x55b81b889c00 session 0x55b81ab5d180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174071808 unmapped: 27508736 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 357 handle_osd_map epochs [357,358], i have 357, src has [1,358]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 358 ms_handle_reset con 0x55b81d242000 session 0x55b81caec000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 358 ms_handle_reset con 0x55b81de4f400 session 0x55b81d4f2380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 358 ms_handle_reset con 0x55b81d1f8000 session 0x55b81b86e8c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174080000 unmapped: 27500544 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174088192 unmapped: 27492352 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2899770 data_alloc: 234881024 data_used: 23587687
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174112768 unmapped: 27467776 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 359 ms_handle_reset con 0x55b81de50400 session 0x55b8190f3c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 359 ms_handle_reset con 0x55b81b889c00 session 0x55b81ac04c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 359 ms_handle_reset con 0x55b81d1f8000 session 0x55b81b38a000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174120960 unmapped: 27459584 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 359 heartbeat osd_stat(store_statfs(0x4f44a7000/0x0/0x4ffc00000, data 0x56caa38/0x56a3000, compress 0x0/0x0/0x0, omap 0x60eac, meta 0x604f154), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 359 ms_handle_reset con 0x55b81de4f400 session 0x55b81d715a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 360 ms_handle_reset con 0x55b81d242000 session 0x55b81ced4700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 360 ms_handle_reset con 0x55b81dfe9800 session 0x55b81d4f3a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174170112 unmapped: 27410432 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 360 ms_handle_reset con 0x55b81b889c00 session 0x55b81b7aba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 360 ms_handle_reset con 0x55b81d1f8000 session 0x55b81d4f28c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173875200 unmapped: 27705344 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 360 heartbeat osd_stat(store_statfs(0x4f447d000/0x0/0x4ffc00000, data 0x56f6137/0x56cf000, compress 0x0/0x0/0x0, omap 0x613d2, meta 0x604ec2e), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173957120 unmapped: 27623424 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 361 ms_handle_reset con 0x55b81d6db800 session 0x55b81df68c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 361 ms_handle_reset con 0x55b81ceb5c00 session 0x55b81d714700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2908949 data_alloc: 234881024 data_used: 23683431
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 361 handle_osd_map epochs [361,362], i have 361, src has [1,362]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174145536 unmapped: 27435008 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 362 ms_handle_reset con 0x55b81b428800 session 0x55b81de1a700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 362 heartbeat osd_stat(store_statfs(0x4f4476000/0x0/0x4ffc00000, data 0x56f93b4/0x56d2000, compress 0x0/0x0/0x0, omap 0x61efc, meta 0x604e104), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.309225082s of 10.023455620s, submitted: 168
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174161920 unmapped: 27418624 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174194688 unmapped: 27385856 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174268416 unmapped: 27312128 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 365 ms_handle_reset con 0x55b81b8bd800 session 0x55b81d370000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 365 ms_handle_reset con 0x55b81d0acc00 session 0x55b81aa63340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 365 ms_handle_reset con 0x55b81b889c00 session 0x55b81df696c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 365 heartbeat osd_stat(store_statfs(0x4f446e000/0x0/0x4ffc00000, data 0x56fe6d2/0x56d8000, compress 0x0/0x0/0x0, omap 0x62857, meta 0x604d7a9), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 366 ms_handle_reset con 0x55b81b428800 session 0x55b81d4f3500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174317568 unmapped: 27262976 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 366 ms_handle_reset con 0x55b81ceb5c00 session 0x55b81d4f3dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2938222 data_alloc: 234881024 data_used: 23818488
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 366 heartbeat osd_stat(store_statfs(0x4f4455000/0x0/0x4ffc00000, data 0x57472ee/0x56f5000, compress 0x0/0x0/0x0, omap 0x62de4, meta 0x604d21c), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174317568 unmapped: 27262976 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 367 heartbeat osd_stat(store_statfs(0x4f4456000/0x0/0x4ffc00000, data 0x574728c/0x56f4000, compress 0x0/0x0/0x0, omap 0x62e6a, meta 0x604d196), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174325760 unmapped: 27254784 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 368 ms_handle_reset con 0x55b81b428800 session 0x55b81b8bf500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 368 ms_handle_reset con 0x55b81b889c00 session 0x55b81ac2b880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175423488 unmapped: 26157056 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 369 ms_handle_reset con 0x55b81b8bd800 session 0x55b81b86fdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 369 ms_handle_reset con 0x55b81d0acc00 session 0x55b81ac2b6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175439872 unmapped: 26140672 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 178634752 unmapped: 22945792 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 369 ms_handle_reset con 0x55b81d1f8000 session 0x55b81dfaa1c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 369 ms_handle_reset con 0x55b81b428800 session 0x55b81b7aa380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2713414 data_alloc: 234881024 data_used: 18809864
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172720128 unmapped: 28860416 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 370 heartbeat osd_stat(store_statfs(0x4f6b49000/0x0/0x4ffc00000, data 0x305100c/0x3001000, compress 0x0/0x0/0x0, omap 0x646ed, meta 0x604b913), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172720128 unmapped: 28860416 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172720128 unmapped: 28860416 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172720128 unmapped: 28860416 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172720128 unmapped: 28860416 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.742566109s of 13.166366577s, submitted: 235
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2725704 data_alloc: 234881024 data_used: 19219464
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 370 ms_handle_reset con 0x55b81b889c00 session 0x55b81b886e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 370 ms_handle_reset con 0x55b81b8bd800 session 0x55b81d0956c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173252608 unmapped: 28327936 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 370 heartbeat osd_stat(store_statfs(0x4f6adb000/0x0/0x4ffc00000, data 0x30be08e/0x3071000, compress 0x0/0x0/0x0, omap 0x647f9, meta 0x604b807), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 370 handle_osd_map epochs [371,371], i have 371, src has [1,371]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172777472 unmapped: 28803072 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81d0acc00 session 0x55b81df68540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81d6db800 session 0x55b81b7aaa80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172777472 unmapped: 28803072 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b428800 session 0x55b81da2f340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b889c00 session 0x55b81ac2b180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b8bd800 session 0x55b81da38540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172777472 unmapped: 28803072 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81d0acc00 session 0x55b81df636c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b429000 session 0x55b81ac4e8c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b428800 session 0x55b81df68380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172916736 unmapped: 28663808 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b889c00 session 0x55b81b4f2a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2731675 data_alloc: 234881024 data_used: 20661237
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174964736 unmapped: 26615808 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b8bd800 session 0x55b81b86fdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174964736 unmapped: 26615808 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f6ad6000/0x0/0x4ffc00000, data 0x30c1b0d/0x3076000, compress 0x0/0x0/0x0, omap 0x64f35, meta 0x604b0cb), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81d0acc00 session 0x55b81b7aba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174972928 unmapped: 26607616 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81e0db000 session 0x55b81b886e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b428800 session 0x55b81da2f340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175104000 unmapped: 26476544 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b889c00 session 0x55b81caecfc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175104000 unmapped: 26476544 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.099312782s of 10.017537117s, submitted: 66
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b8bd800 session 0x55b81aa71500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2729329 data_alloc: 234881024 data_used: 20661237
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175112192 unmapped: 26468352 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175112192 unmapped: 26468352 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f6ad9000/0x0/0x4ffc00000, data 0x30c1a8b/0x3073000, compress 0x0/0x0/0x0, omap 0x65574, meta 0x604aa8c), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175112192 unmapped: 26468352 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81d0acc00 session 0x55b81a771880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175112192 unmapped: 26468352 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175611904 unmapped: 25968640 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f6ad9000/0x0/0x4ffc00000, data 0x30c1a8b/0x3073000, compress 0x0/0x0/0x0, omap 0x65574, meta 0x604aa8c), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2736838 data_alloc: 234881024 data_used: 22393845
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175611904 unmapped: 25968640 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f6ad9000/0x0/0x4ffc00000, data 0x30c1a8b/0x3073000, compress 0x0/0x0/0x0, omap 0x65574, meta 0x604aa8c), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81d94fc00 session 0x55b81b7aa700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 25903104 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175677440 unmapped: 25903104 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81d242000 session 0x55b81ac2a000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81de4f400 session 0x55b81b8861c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f6ad9000/0x0/0x4ffc00000, data 0x30c1a8b/0x3073000, compress 0x0/0x0/0x0, omap 0x657a2, meta 0x604a85e), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 ms_handle_reset con 0x55b81b889c00 session 0x55b81b4421c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175702016 unmapped: 25878528 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 372 ms_handle_reset con 0x55b81b428800 session 0x55b81b86e8c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175759360 unmapped: 25821184 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 372 ms_handle_reset con 0x55b81b8bd800 session 0x55b81de1a700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2729987 data_alloc: 234881024 data_used: 22264821
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.530011177s of 10.641107559s, submitted: 55
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 372 ms_handle_reset con 0x55b81b428800 session 0x55b81b8bf500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 372 ms_handle_reset con 0x55b81d242000 session 0x55b81b7aaa80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 372 ms_handle_reset con 0x55b81b889c00 session 0x55b81b443880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175775744 unmapped: 25804800 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175775744 unmapped: 25804800 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 373 ms_handle_reset con 0x55b81de4f400 session 0x55b81ac2afc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175792128 unmapped: 25788416 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 373 ms_handle_reset con 0x55b81d0acc00 session 0x55b8190f3c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 373 heartbeat osd_stat(store_statfs(0x4f6ebc000/0x0/0x4ffc00000, data 0x2cda1c3/0x2c8e000, compress 0x0/0x0/0x0, omap 0x66841, meta 0x60497bf), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175792128 unmapped: 25788416 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 374 ms_handle_reset con 0x55b81d242000 session 0x55b81df62c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 374 ms_handle_reset con 0x55b81b889c00 session 0x55b81b38a700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 374 ms_handle_reset con 0x55b81b428800 session 0x55b81caeda40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173539328 unmapped: 28041216 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 374 ms_handle_reset con 0x55b81d242800 session 0x55b81d1d88c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 374 ms_handle_reset con 0x55b81dc12800 session 0x55b81ac4f880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2682980 data_alloc: 234881024 data_used: 19340179
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 374 ms_handle_reset con 0x55b81b889c00 session 0x55b81ac05dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173604864 unmapped: 27975680 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81b428800 session 0x55b81ac04540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 heartbeat osd_stat(store_statfs(0x4f6ed7000/0x0/0x4ffc00000, data 0x2a65db3/0x2c75000, compress 0x0/0x0/0x0, omap 0x674c0, meta 0x6048b40), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173637632 unmapped: 27942912 heap: 201580544 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81d242000 session 0x55b81aa70c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81d242800 session 0x55b81b8be380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81de4f400 session 0x55b81b38bdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81b428800 session 0x55b81de1ae00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173260800 unmapped: 32522240 heap: 205783040 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81d242000 session 0x55b81d714380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81d242800 session 0x55b81ac2b6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81b889c00 session 0x55b81d095340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81cf18000 session 0x55b81ac4e540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164962304 unmapped: 45023232 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81b428800 session 0x55b81b8bf180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81b889c00 session 0x55b81b442e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164691968 unmapped: 45293568 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2632346 data_alloc: 218103808 data_used: 6454163
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81d242000 session 0x55b81b86e8c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 heartbeat osd_stat(store_statfs(0x4f650b000/0x0/0x4ffc00000, data 0x3431909/0x3641000, compress 0x0/0x0/0x0, omap 0x68669, meta 0x6047997), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.423376083s of 10.308360100s, submitted: 193
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81d242800 session 0x55b81d393880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164691968 unmapped: 45293568 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 heartbeat osd_stat(store_statfs(0x4f650b000/0x0/0x4ffc00000, data 0x3431909/0x3641000, compress 0x0/0x0/0x0, omap 0x68669, meta 0x6047997), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164691968 unmapped: 45293568 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81de8a000 session 0x55b81a800000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 ms_handle_reset con 0x55b81b889c00 session 0x55b81b886a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 376 ms_handle_reset con 0x55b81d242000 session 0x55b81d4f3dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 376 ms_handle_reset con 0x55b81d13c800 session 0x55b8190f3340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 376 ms_handle_reset con 0x55b81b428800 session 0x55b81caeda40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163561472 unmapped: 46424064 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 376 ms_handle_reset con 0x55b81d242800 session 0x55b81ac4fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 376 ms_handle_reset con 0x55b81b889c00 session 0x55b81df2ca80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163569664 unmapped: 46415872 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 376 handle_osd_map epochs [376,377], i have 377, src has [1,377]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 377 ms_handle_reset con 0x55b81d13c800 session 0x55b81d3716c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 377 ms_handle_reset con 0x55b81b428800 session 0x55b81dfabc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 377 heartbeat osd_stat(store_statfs(0x4f6508000/0x0/0x4ffc00000, data 0x3433684/0x3642000, compress 0x0/0x0/0x0, omap 0x68d39, meta 0x60472c7), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163602432 unmapped: 46383104 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2636069 data_alloc: 218103808 data_used: 6454133
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 378 ms_handle_reset con 0x55b81d242000 session 0x55b81b442540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163618816 unmapped: 46366720 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 378 ms_handle_reset con 0x55b81b889800 session 0x55b81b8bf500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163618816 unmapped: 46366720 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 378 ms_handle_reset con 0x55b81b428800 session 0x55b81d4f3c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163627008 unmapped: 46358528 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 379 ms_handle_reset con 0x55b81b889c00 session 0x55b81d715dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163627008 unmapped: 46358528 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 379 ms_handle_reset con 0x55b81d13c800 session 0x55b81a771880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 379 ms_handle_reset con 0x55b81d242000 session 0x55b81d715880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 379 ms_handle_reset con 0x55b81b8b7c00 session 0x55b81b443880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 162848768 unmapped: 47136768 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 379 heartbeat osd_stat(store_statfs(0x4f64fd000/0x0/0x4ffc00000, data 0x34389b0/0x364d000, compress 0x0/0x0/0x0, omap 0x69d6c, meta 0x6046294), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 379 ms_handle_reset con 0x55b81d13c800 session 0x55b81de1b180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 379 ms_handle_reset con 0x55b81b889c00 session 0x55b81d095dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2647078 data_alloc: 218103808 data_used: 6847349
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.696597099s of 10.054694176s, submitted: 176
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 162848768 unmapped: 47136768 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 380 ms_handle_reset con 0x55b81d242000 session 0x55b81df68540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 380 heartbeat osd_stat(store_statfs(0x4f64fd000/0x0/0x4ffc00000, data 0x34389b0/0x364d000, compress 0x0/0x0/0x0, omap 0x69df2, meta 0x604620e), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81dfe3000 session 0x55b81d095c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163905536 unmapped: 46080000 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 163905536 unmapped: 46080000 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81b2b4400 session 0x55b81a801a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81e0a3400 session 0x55b81a800a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81b889c00 session 0x55b81de1ae00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81d13c800 session 0x55b81df68c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81d242000 session 0x55b81d715340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164773888 unmapped: 45211648 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81dfe3000 session 0x55b81df696c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81b889c00 session 0x55b81d714540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 381 ms_handle_reset con 0x55b81d13c800 session 0x55b81df68fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164782080 unmapped: 45203456 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f5f78000/0x0/0x4ffc00000, data 0x39bb075/0x3bd4000, compress 0x0/0x0/0x0, omap 0x6a6ed, meta 0x6045913), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2739680 data_alloc: 234881024 data_used: 14774304
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 382 ms_handle_reset con 0x55b81d242000 session 0x55b81ac04380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 382 ms_handle_reset con 0x55b81e0a3400 session 0x55b81b4f3500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164790272 unmapped: 45195264 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164790272 unmapped: 45195264 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 382 ms_handle_reset con 0x55b81de8b400 session 0x55b81d095a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 382 ms_handle_reset con 0x55b81b889c00 session 0x55b81ac041c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 383 ms_handle_reset con 0x55b81d13c800 session 0x55b81b4f3a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164798464 unmapped: 45187072 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 383 ms_handle_reset con 0x55b81de8b400 session 0x55b81b8861c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164839424 unmapped: 45146112 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 383 ms_handle_reset con 0x55b81d242000 session 0x55b81de1ae00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 383 ms_handle_reset con 0x55b81de8a800 session 0x55b81cef01c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 383 ms_handle_reset con 0x55b81b889c00 session 0x55b81cef1500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164839424 unmapped: 45146112 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 384 ms_handle_reset con 0x55b81e0a3400 session 0x55b81a801a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2746360 data_alloc: 234881024 data_used: 14774206
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 384 ms_handle_reset con 0x55b81d13c800 session 0x55b81df2ca80
Jan 31 00:14:20 np0005603435 ceph-mon[75307]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 31 00:14:20 np0005603435 ceph-mon[75307]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2439346662' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.840722084s of 10.014692307s, submitted: 90
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 384 ms_handle_reset con 0x55b81d242000 session 0x55b81caeda40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 164839424 unmapped: 45146112 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f5f6f000/0x0/0x4ffc00000, data 0x39c0210/0x3bdb000, compress 0x0/0x0/0x0, omap 0x6afe2, meta 0x604501e), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 170393600 unmapped: 39591936 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 384 ms_handle_reset con 0x55b81de8b400 session 0x55b81d095340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 168394752 unmapped: 41590784 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 169025536 unmapped: 40960000 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f53b7000/0x0/0x4ffc00000, data 0x4579220/0x4795000, compress 0x0/0x0/0x0, omap 0x6b0ee, meta 0x6044f12), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 384 ms_handle_reset con 0x55b81b889c00 session 0x55b81b4421c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f53b7000/0x0/0x4ffc00000, data 0x4579220/0x4795000, compress 0x0/0x0/0x0, omap 0x6b0ee, meta 0x6044f12), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 385 ms_handle_reset con 0x55b81de8b400 session 0x55b81d095180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 169328640 unmapped: 40656896 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2824410 data_alloc: 234881024 data_used: 14857166
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 169639936 unmapped: 40345600 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 386 heartbeat osd_stat(store_statfs(0x4f538b000/0x0/0x4ffc00000, data 0x45a0857/0x47bf000, compress 0x0/0x0/0x0, omap 0x6b797, meta 0x6044869), peers [0,2] op hist [1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 386 ms_handle_reset con 0x55b81cf1a000 session 0x55b81d1d81c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 386 ms_handle_reset con 0x55b81e0a3400 session 0x55b81dddc380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 386 heartbeat osd_stat(store_statfs(0x4f538b000/0x0/0x4ffc00000, data 0x45a0857/0x47bf000, compress 0x0/0x0/0x0, omap 0x6b797, meta 0x6044869), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172089344 unmapped: 37896192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 386 ms_handle_reset con 0x55b81dd5b800 session 0x55b81b442a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 386 ms_handle_reset con 0x55b81b889c00 session 0x55b81aa63180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172089344 unmapped: 37896192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 386 ms_handle_reset con 0x55b81cf1a000 session 0x55b81d4f2540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172089344 unmapped: 37896192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 386 ms_handle_reset con 0x55b81e0a3400 session 0x55b81b7aa380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 386 heartbeat osd_stat(store_statfs(0x4f538a000/0x0/0x4ffc00000, data 0x45a08b9/0x47c0000, compress 0x0/0x0/0x0, omap 0x6bb11, meta 0x60444ef), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172089344 unmapped: 37896192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 387 ms_handle_reset con 0x55b81de2d800 session 0x55b81a771500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 387 ms_handle_reset con 0x55b81de8b400 session 0x55b81da38c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2857856 data_alloc: 234881024 data_used: 19390000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172089344 unmapped: 37896192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.529183388s of 10.785122871s, submitted: 102
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 387 ms_handle_reset con 0x55b81de8b400 session 0x55b81d0941c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172089344 unmapped: 37896192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172113920 unmapped: 37871616 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 388 ms_handle_reset con 0x55b81b889c00 session 0x55b81d094a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 388 ms_handle_reset con 0x55b81b428800 session 0x55b81de1b880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 388 ms_handle_reset con 0x55b81cf1a000 session 0x55b81b887dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172171264 unmapped: 37814272 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 388 ms_handle_reset con 0x55b81de2d800 session 0x55b81df68000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 388 ms_handle_reset con 0x55b81de2d800 session 0x55b81d094380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173367296 unmapped: 36618240 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 388 ms_handle_reset con 0x55b81b428800 session 0x55b81cef1340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 388 heartbeat osd_stat(store_statfs(0x4f5385000/0x0/0x4ffc00000, data 0x45a4102/0x47c7000, compress 0x0/0x0/0x0, omap 0x6c8ae, meta 0x6043752), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 388 ms_handle_reset con 0x55b81b889c00 session 0x55b81d094380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886268 data_alloc: 234881024 data_used: 19543322
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 388 handle_osd_map epochs [388,389], i have 389, src has [1,389]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173383680 unmapped: 36601856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 389 ms_handle_reset con 0x55b81cf1a000 session 0x55b81a771500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 389 ms_handle_reset con 0x55b81de8b400 session 0x55b81a801a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173449216 unmapped: 36536320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173449216 unmapped: 36536320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173449216 unmapped: 36536320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173449216 unmapped: 36536320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2890210 data_alloc: 234881024 data_used: 19539128
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f5233000/0x0/0x4ffc00000, data 0x46e9b1f/0x490d000, compress 0x0/0x0/0x0, omap 0x6cc07, meta 0x60433f9), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173449216 unmapped: 36536320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.922306061s of 10.085712433s, submitted: 109
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173449216 unmapped: 36536320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f5233000/0x0/0x4ffc00000, data 0x46e9b1f/0x490d000, compress 0x0/0x0/0x0, omap 0x6cc07, meta 0x60433f9), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 173449216 unmapped: 36536320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172785664 unmapped: 37199872 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81de8b400 session 0x55b81d393880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81d13c800 session 0x55b81caed340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81d242000 session 0x55b81dddd6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172785664 unmapped: 37199872 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2884517 data_alloc: 234881024 data_used: 19434699
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81b428800 session 0x55b81de1b6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172793856 unmapped: 37191680 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81b889c00 session 0x55b81df68000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81d13c800 session 0x55b81dfaa8c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81b428800 session 0x55b81ac2a000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81d242000 session 0x55b81b7aa380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81de8b400 session 0x55b81b8876c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172630016 unmapped: 37355520 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f525e000/0x0/0x4ffc00000, data 0x46c770d/0x48ee000, compress 0x0/0x0/0x0, omap 0x6da29, meta 0x60425d7), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81cf1a000 session 0x55b81caedc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172646400 unmapped: 37339136 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f525e000/0x0/0x4ffc00000, data 0x46c76bb/0x48ec000, compress 0x0/0x0/0x0, omap 0x6d920, meta 0x60426e0), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 390 ms_handle_reset con 0x55b81d13c800 session 0x55b81ac04e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172670976 unmapped: 37314560 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 391 ms_handle_reset con 0x55b81b428800 session 0x55b81ac4e540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 391 heartbeat osd_stat(store_statfs(0x4f525b000/0x0/0x4ffc00000, data 0x46c92ab/0x48ef000, compress 0x0/0x0/0x0, omap 0x6e2c3, meta 0x6041d3d), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 172670976 unmapped: 37314560 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 392 ms_handle_reset con 0x55b81d242000 session 0x55b81b442540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 392 ms_handle_reset con 0x55b81de8b400 session 0x55b81ac04fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2891661 data_alloc: 234881024 data_used: 19434699
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 392 ms_handle_reset con 0x55b81de2d800 session 0x55b81b443880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 392 ms_handle_reset con 0x55b81b428800 session 0x55b81da2f6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174030848 unmapped: 35954688 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174030848 unmapped: 35954688 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 392 ms_handle_reset con 0x55b81de8b400 session 0x55b81b86fdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.500554085s of 10.814825058s, submitted: 107
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 392 ms_handle_reset con 0x55b81de88800 session 0x55b81b8bf6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 392 ms_handle_reset con 0x55b81e0a3400 session 0x55b81dfaa000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 392 ms_handle_reset con 0x55b81ceb5800 session 0x55b81ac2afc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f522f000/0x0/0x4ffc00000, data 0x46ef296/0x4919000, compress 0x0/0x0/0x0, omap 0x6e3c2, meta 0x6041c3e), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174055424 unmapped: 35930112 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174055424 unmapped: 35930112 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174071808 unmapped: 35913728 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 392 handle_osd_map epochs [392,393], i have 393, src has [1,393]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 393 ms_handle_reset con 0x55b81b428800 session 0x55b81ac4e380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905841 data_alloc: 234881024 data_used: 19491531
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175120384 unmapped: 34865152 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 394 ms_handle_reset con 0x55b81ceb5800 session 0x55b81aa63340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 394 ms_handle_reset con 0x55b81de88800 session 0x55b81aa62700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 394 ms_handle_reset con 0x55b81de8b400 session 0x55b81b38ba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175439872 unmapped: 34545664 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 394 heartbeat osd_stat(store_statfs(0x4f5203000/0x0/0x4ffc00000, data 0x4716951/0x4945000, compress 0x0/0x0/0x0, omap 0x6f43b, meta 0x6040bc5), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175439872 unmapped: 34545664 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 394 ms_handle_reset con 0x55b81de50000 session 0x55b81ac2bc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175480832 unmapped: 34504704 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 395 ms_handle_reset con 0x55b81de50000 session 0x55b81cef1880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175497216 unmapped: 34488320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2916218 data_alloc: 234881024 data_used: 19553995
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175497216 unmapped: 34488320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 395 ms_handle_reset con 0x55b81b428800 session 0x55b81da2e000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175497216 unmapped: 34488320 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 395 ms_handle_reset con 0x55b81de88800 session 0x55b81dddddc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 395 ms_handle_reset con 0x55b81ceb5800 session 0x55b81da38fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 395 ms_handle_reset con 0x55b81de8b400 session 0x55b81a7708c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175538176 unmapped: 34447360 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 395 heartbeat osd_stat(store_statfs(0x4f5204000/0x0/0x4ffc00000, data 0x47184ed/0x4948000, compress 0x0/0x0/0x0, omap 0x6f9ea, meta 0x6040616), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.477434158s of 11.568736076s, submitted: 57
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175865856 unmapped: 34119680 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 396 ms_handle_reset con 0x55b81de8b400 session 0x55b81aa63340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176201728 unmapped: 33783808 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935080 data_alloc: 234881024 data_used: 20958923
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 396 ms_handle_reset con 0x55b81ceb5800 session 0x55b81d231180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 396 ms_handle_reset con 0x55b81b428800 session 0x55b81d370000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176300032 unmapped: 33685504 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 396 ms_handle_reset con 0x55b81de50000 session 0x55b81cef0700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 396 ms_handle_reset con 0x55b81de88800 session 0x55b81cef0e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176316416 unmapped: 33669120 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176431104 unmapped: 33554432 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 396 ms_handle_reset con 0x55b81de88800 session 0x55b81ac2afc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f51fe000/0x0/0x4ffc00000, data 0x471a099/0x494c000, compress 0x0/0x0/0x0, omap 0x7000b, meta 0x603fff5), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 397 ms_handle_reset con 0x55b81b428800 session 0x55b81dddddc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176570368 unmapped: 33415168 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 397 ms_handle_reset con 0x55b81ceb5800 session 0x55b81ac05880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 397 ms_handle_reset con 0x55b81de50000 session 0x55b81b887a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176750592 unmapped: 33234944 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2942481 data_alloc: 234881024 data_used: 21061323
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 398 ms_handle_reset con 0x55b81de8b400 session 0x55b81d371c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f51fe000/0x0/0x4ffc00000, data 0x471bc79/0x494e000, compress 0x0/0x0/0x0, omap 0x70461, meta 0x603fb9f), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176660480 unmapped: 33325056 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 398 ms_handle_reset con 0x55b81b428800 session 0x55b81b4f3500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 398 ms_handle_reset con 0x55b81ceb5800 session 0x55b81cef0000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176709632 unmapped: 33275904 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176709632 unmapped: 33275904 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177201152 unmapped: 32784384 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177201152 unmapped: 32784384 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.396218300s of 11.526197433s, submitted: 78
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951565 data_alloc: 234881024 data_used: 21058930
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f51eb000/0x0/0x4ffc00000, data 0x472b2fa/0x495f000, compress 0x0/0x0/0x0, omap 0x70b1b, meta 0x603f4e5), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 32768000 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f51eb000/0x0/0x4ffc00000, data 0x472b2fa/0x495f000, compress 0x0/0x0/0x0, omap 0x70b1b, meta 0x603f4e5), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 32768000 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 399 ms_handle_reset con 0x55b81de50000 session 0x55b81dc06c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 32768000 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f51eb000/0x0/0x4ffc00000, data 0x472b2fa/0x495f000, compress 0x0/0x0/0x0, omap 0x70ba1, meta 0x603f45f), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 399 ms_handle_reset con 0x55b81de88800 session 0x55b81df056c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 32768000 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f51ed000/0x0/0x4ffc00000, data 0x472b2fa/0x495f000, compress 0x0/0x0/0x0, omap 0x70c27, meta 0x603f3d9), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177217536 unmapped: 32768000 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2950149 data_alloc: 234881024 data_used: 21038450
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 400 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81b86fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177373184 unmapped: 32612352 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 400 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81b38ba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 400 ms_handle_reset con 0x55b81b428800 session 0x55b81d715500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177373184 unmapped: 32612352 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 401 ms_handle_reset con 0x55b81ceb5800 session 0x55b81ac04540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 401 ms_handle_reset con 0x55b81de50000 session 0x55b81b886e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 32571392 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 401 ms_handle_reset con 0x55b81de88800 session 0x55b81dfab880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 32571392 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 401 ms_handle_reset con 0x55b81de88800 session 0x55b81b4f3c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 401 heartbeat osd_stat(store_statfs(0x4f51e4000/0x0/0x4ffc00000, data 0x472fa86/0x4966000, compress 0x0/0x0/0x0, omap 0x71354, meta 0x603ecac), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 32571392 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 401 handle_osd_map epochs [401,402], i have 402, src has [1,402]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.950020790s of 10.004167557s, submitted: 51
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2957895 data_alloc: 234881024 data_used: 21038450
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177471488 unmapped: 32514048 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177479680 unmapped: 32505856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177479680 unmapped: 32505856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 402 ms_handle_reset con 0x55b81b428800 session 0x55b81df68700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177479680 unmapped: 32505856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 402 ms_handle_reset con 0x55b81d13c800 session 0x55b81a800000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 402 ms_handle_reset con 0x55b81d242000 session 0x55b81d715dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 402 ms_handle_reset con 0x55b81ceb5800 session 0x55b81aa62700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f51e3000/0x0/0x4ffc00000, data 0x4731505/0x4969000, compress 0x0/0x0/0x0, omap 0x715e4, meta 0x603ea1c), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177504256 unmapped: 32481280 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2948682 data_alloc: 234881024 data_used: 20938098
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177512448 unmapped: 32473088 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 403 ms_handle_reset con 0x55b81b428800 session 0x55b81b7aa380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f5203000/0x0/0x4ffc00000, data 0x470f06e/0x4946000, compress 0x0/0x0/0x0, omap 0x71ada, meta 0x603e526), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 404 ms_handle_reset con 0x55b81d13c800 session 0x55b81ac04e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 404 ms_handle_reset con 0x55b81ceb5800 session 0x55b81d094700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177537024 unmapped: 32448512 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177553408 unmapped: 32432128 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 404 ms_handle_reset con 0x55b81d242000 session 0x55b81b8861c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177602560 unmapped: 32382976 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 404 ms_handle_reset con 0x55b81de50000 session 0x55b81da38e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 404 ms_handle_reset con 0x55b81de88800 session 0x55b81d1d8000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 405 ms_handle_reset con 0x55b81b428800 session 0x55b81ac04e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175333376 unmapped: 34652160 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 405 ms_handle_reset con 0x55b81d13c800 session 0x55b81df68700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 405 heartbeat osd_stat(store_statfs(0x4f79ba000/0x0/0x4ffc00000, data 0x1f5810e/0x2192000, compress 0x0/0x0/0x0, omap 0x71c5e, meta 0x603e3a2), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2701024 data_alloc: 234881024 data_used: 11815759
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 405 heartbeat osd_stat(store_statfs(0x4f79b5000/0x0/0x4ffc00000, data 0x1f59caa/0x2195000, compress 0x0/0x0/0x0, omap 0x72211, meta 0x603ddef), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.373512268s of 10.458980560s, submitted: 51
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 405 ms_handle_reset con 0x55b81dfe2800 session 0x55b81da38540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 405 ms_handle_reset con 0x55b81d242000 session 0x55b81caed880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 405 ms_handle_reset con 0x55b81ceb5800 session 0x55b81b4f3c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175333376 unmapped: 34652160 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 406 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81de1b500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 406 ms_handle_reset con 0x55b81b428800 session 0x55b81b86fdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174817280 unmapped: 35168256 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 407 ms_handle_reset con 0x55b81d242000 session 0x55b81b86fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 174817280 unmapped: 35168256 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 407 ms_handle_reset con 0x55b81de88800 session 0x55b81d370fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175874048 unmapped: 34111488 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 408 ms_handle_reset con 0x55b81b428800 session 0x55b81ac2a380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 408 heartbeat osd_stat(store_statfs(0x4f79a3000/0x0/0x4ffc00000, data 0x2084124/0x21a7000, compress 0x0/0x0/0x0, omap 0x72e05, meta 0x603d1fb), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 409 ms_handle_reset con 0x55b81ceb5800 session 0x55b81b4f2380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175882240 unmapped: 34103296 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 409 ms_handle_reset con 0x55b81d242000 session 0x55b81b38a380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 409 ms_handle_reset con 0x55b81d13c800 session 0x55b81a800000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2741744 data_alloc: 234881024 data_used: 11815971
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175808512 unmapped: 34177024 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 410 ms_handle_reset con 0x55b81dfe2800 session 0x55b81da2e000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 410 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81d4f2c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 410 ms_handle_reset con 0x55b81b428800 session 0x55b81d370000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 410 ms_handle_reset con 0x55b81ceb5800 session 0x55b81dfaa700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175857664 unmapped: 34127872 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 410 ms_handle_reset con 0x55b81d13c800 session 0x55b81df04380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175882240 unmapped: 34103296 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175882240 unmapped: 34103296 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 410 handle_osd_map epochs [410,411], i have 411, src has [1,411]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175890432 unmapped: 34095104 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 411 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81df04a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 412 ms_handle_reset con 0x55b81b428800 session 0x55b81d094540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 412 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d4f3a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 412 ms_handle_reset con 0x55b81d242000 session 0x55b81b8bec40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2753400 data_alloc: 234881024 data_used: 11816442
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 412 heartbeat osd_stat(store_statfs(0x4f7994000/0x0/0x4ffc00000, data 0x208b79c/0x21b4000, compress 0x0/0x0/0x0, omap 0x73c82, meta 0x603c37e), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175898624 unmapped: 34086912 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 175898624 unmapped: 34086912 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.135331154s of 11.393978119s, submitted: 145
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 412 ms_handle_reset con 0x55b81ceb5800 session 0x55b81dddda40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176095232 unmapped: 33890304 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 413 ms_handle_reset con 0x55b81d13c800 session 0x55b81ac04540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 413 ms_handle_reset con 0x55b81b428800 session 0x55b81ac2b880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 33873920 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 413 heartbeat osd_stat(store_statfs(0x4f7970000/0x0/0x4ffc00000, data 0x20b12eb/0x21da000, compress 0x0/0x0/0x0, omap 0x7424c, meta 0x603bdb4), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 413 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b443880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 33873920 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 413 heartbeat osd_stat(store_statfs(0x4f7972000/0x0/0x4ffc00000, data 0x20b12eb/0x21da000, compress 0x0/0x0/0x0, omap 0x7452d, meta 0x603bad3), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2761308 data_alloc: 234881024 data_used: 11854248
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 33873920 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 414 ms_handle_reset con 0x55b81d242000 session 0x55b81dddc000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176119808 unmapped: 33865728 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 415 ms_handle_reset con 0x55b81ceb0000 session 0x55b81aa63500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 415 ms_handle_reset con 0x55b81ceb5800 session 0x55b81dddc700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176291840 unmapped: 33693696 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 416 ms_handle_reset con 0x55b81b428800 session 0x55b81ac2ba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 416 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d0941c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176349184 unmapped: 33636352 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 416 ms_handle_reset con 0x55b81ceb0000 session 0x55b81d714540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176349184 unmapped: 33636352 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2775137 data_alloc: 234881024 data_used: 11854248
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176349184 unmapped: 33636352 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 417 heartbeat osd_stat(store_statfs(0x4f7963000/0x0/0x4ffc00000, data 0x20b85da/0x21e7000, compress 0x0/0x0/0x0, omap 0x753fc, meta 0x603ac04), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 417 ms_handle_reset con 0x55b81e0df400 session 0x55b81b887180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176357376 unmapped: 33628160 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.341950417s of 10.519907951s, submitted: 127
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 418 ms_handle_reset con 0x55b81ceb1800 session 0x55b81dddc540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 418 ms_handle_reset con 0x55b81b428800 session 0x55b81dddc000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 418 ms_handle_reset con 0x55b81d242000 session 0x55b81b7ab6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176365568 unmapped: 33619968 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 419 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81cef1340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 419 ms_handle_reset con 0x55b81ceb0000 session 0x55b81d094a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177455104 unmapped: 32530432 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 420 ms_handle_reset con 0x55b81e0df400 session 0x55b81b442a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177537024 unmapped: 32448512 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2803476 data_alloc: 234881024 data_used: 12704412
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177537024 unmapped: 32448512 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 420 heartbeat osd_stat(store_statfs(0x4f789a000/0x0/0x4ffc00000, data 0x217d438/0x22ae000, compress 0x0/0x0/0x0, omap 0x75c52, meta 0x603a3ae), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 421 ms_handle_reset con 0x55b81b428800 session 0x55b81ced4380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 421 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81a771340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177602560 unmapped: 32382976 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177602560 unmapped: 32382976 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 422 ms_handle_reset con 0x55b81ceb0000 session 0x55b81df68a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 178503680 unmapped: 31481856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176619520 unmapped: 33366016 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 424 ms_handle_reset con 0x55b81d242000 session 0x55b81ac2bdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 424 ms_handle_reset con 0x55b81d42c800 session 0x55b81b442380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f77be000/0x0/0x4ffc00000, data 0x2255b1f/0x238a000, compress 0x0/0x0/0x0, omap 0x76637, meta 0x60399c9), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2819126 data_alloc: 234881024 data_used: 12719528
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176775168 unmapped: 33210368 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176775168 unmapped: 33210368 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176775168 unmapped: 33210368 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176783360 unmapped: 33202176 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.699217796s of 11.934890747s, submitted: 125
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f77be000/0x0/0x4ffc00000, data 0x2255b1f/0x238a000, compress 0x0/0x0/0x0, omap 0x766bd, meta 0x6039943), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176791552 unmapped: 33193984 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2818702 data_alloc: 234881024 data_used: 12720141
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176791552 unmapped: 33193984 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f77c1000/0x0/0x4ffc00000, data 0x2256b1f/0x238b000, compress 0x0/0x0/0x0, omap 0x766bd, meta 0x6039943), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 424 ms_handle_reset con 0x55b81b428800 session 0x55b81b887500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176791552 unmapped: 33193984 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176791552 unmapped: 33193984 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 424 ms_handle_reset con 0x55b81ceb0000 session 0x55b81d4f3a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176791552 unmapped: 33193984 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f77c0000/0x0/0x4ffc00000, data 0x2256b2f/0x238c000, compress 0x0/0x0/0x0, omap 0x76743, meta 0x60398bd), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 425 ms_handle_reset con 0x55b81e0e0000 session 0x55b81de1bdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176816128 unmapped: 33169408 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 426 ms_handle_reset con 0x55b81d242000 session 0x55b81d370000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f77bb000/0x0/0x4ffc00000, data 0x2258703/0x238f000, compress 0x0/0x0/0x0, omap 0x76d18, meta 0x60392e8), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 426 ms_handle_reset con 0x55b81e0e7c00 session 0x55b81b38a380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 426 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81dddd340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2830626 data_alloc: 234881024 data_used: 12720141
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f77bb000/0x0/0x4ffc00000, data 0x2258703/0x238f000, compress 0x0/0x0/0x0, omap 0x76d18, meta 0x60392e8), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176824320 unmapped: 33161216 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 427 ms_handle_reset con 0x55b81b428800 session 0x55b81a801dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176840704 unmapped: 33144832 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 427 ms_handle_reset con 0x55b81ceb0000 session 0x55b81da2e700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 428 ms_handle_reset con 0x55b81d242000 session 0x55b81d4f2e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 176865280 unmapped: 33120256 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 428 ms_handle_reset con 0x55b81e0e0000 session 0x55b81ac04540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 428 ms_handle_reset con 0x55b81e0e0000 session 0x55b81dfaa540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177700864 unmapped: 32284672 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f7284000/0x0/0x4ffc00000, data 0x2781afb/0x28bb000, compress 0x0/0x0/0x0, omap 0x7768b, meta 0x6038975), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177700864 unmapped: 32284672 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2870132 data_alloc: 234881024 data_used: 13142029
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177700864 unmapped: 32284672 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177700864 unmapped: 32284672 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 32276480 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 32276480 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f7284000/0x0/0x4ffc00000, data 0x2781afb/0x28bb000, compress 0x0/0x0/0x0, omap 0x7768b, meta 0x6038975), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 32276480 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.560708046s of 15.713781357s, submitted: 82
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2871786 data_alloc: 234881024 data_used: 13252621
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177725440 unmapped: 32260096 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 429 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81cef1880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 429 ms_handle_reset con 0x55b81de2c400 session 0x55b81d095880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 429 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d29d340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177676288 unmapped: 32309248 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177676288 unmapped: 32309248 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177676288 unmapped: 32309248 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 429 ms_handle_reset con 0x55b81ceb0000 session 0x55b81b4f3340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 429 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b8bfdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177676288 unmapped: 32309248 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 429 heartbeat osd_stat(store_statfs(0x4f7448000/0x0/0x4ffc00000, data 0x25cb51d/0x2703000, compress 0x0/0x0/0x0, omap 0x77c9d, meta 0x6038363), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2878262 data_alloc: 234881024 data_used: 18131418
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 429 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81a770380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177676288 unmapped: 32309248 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 429 ms_handle_reset con 0x55b81de2c400 session 0x55b81b4f3a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177684480 unmapped: 32301056 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 430 ms_handle_reset con 0x55b81e0a3400 session 0x55b81d095500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 430 ms_handle_reset con 0x55b81d6da000 session 0x55b81d715a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177684480 unmapped: 32301056 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 430 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d4f2c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177692672 unmapped: 32292864 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 430 ms_handle_reset con 0x55b81d6da000 session 0x55b81a801dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 177692672 unmapped: 32292864 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.865704536s of 10.005159378s, submitted: 100
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2862510 data_alloc: 234881024 data_used: 16830840
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 431 heartbeat osd_stat(store_statfs(0x4f746e000/0x0/0x4ffc00000, data 0x248411d/0x26de000, compress 0x0/0x0/0x0, omap 0x78867, meta 0x6037799), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180035584 unmapped: 29949952 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 432 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81d1d88c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181805056 unmapped: 28180480 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 432 ms_handle_reset con 0x55b81de2c400 session 0x55b81ac2ba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 432 ms_handle_reset con 0x55b81e0a3400 session 0x55b81b86fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 432 ms_handle_reset con 0x55b81e0e0000 session 0x55b81ac05500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180133888 unmapped: 29851648 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 433 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d1d8e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181329920 unmapped: 28655616 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 433 ms_handle_reset con 0x55b81d6da000 session 0x55b81b4f2000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 433 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81caec700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180232192 unmapped: 29753344 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 433 heartbeat osd_stat(store_statfs(0x4f753b000/0x0/0x4ffc00000, data 0x23ab308/0x2607000, compress 0x0/0x0/0x0, omap 0x7923e, meta 0x6036dc2), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2830035 data_alloc: 234881024 data_used: 12837224
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 433 ms_handle_reset con 0x55b81de2c400 session 0x55b81b4f3500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 179757056 unmapped: 30228480 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 179757056 unmapped: 30228480 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 434 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81ac4fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 179773440 unmapped: 30212096 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 434 ms_handle_reset con 0x55b81d6da000 session 0x55b81dfaac40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180060160 unmapped: 29925376 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 435 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81dddd880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 435 ms_handle_reset con 0x55b81de2c400 session 0x55b81aa63340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181116928 unmapped: 28868608 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f7518000/0x0/0x4ffc00000, data 0x23d3ab0/0x2632000, compress 0x0/0x0/0x0, omap 0x7a61f, meta 0x60359e1), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.550426483s of 10.001684189s, submitted: 216
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 436 ms_handle_reset con 0x55b81e0e0000 session 0x55b81b8be700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 436 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81df68a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2836476 data_alloc: 234881024 data_used: 12837837
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181125120 unmapped: 28860416 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 436 ms_handle_reset con 0x55b81d6da000 session 0x55b81df68c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f7513000/0x0/0x4ffc00000, data 0x23d554b/0x2635000, compress 0x0/0x0/0x0, omap 0x7a63f, meta 0x60359c1), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181125120 unmapped: 28860416 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181125120 unmapped: 28860416 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 437 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81a800000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181133312 unmapped: 28852224 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 437 ms_handle_reset con 0x55b81d242000 session 0x55b81d0941c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 438 ms_handle_reset con 0x55b81d2ad400 session 0x55b81df04a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 438 ms_handle_reset con 0x55b81dc13400 session 0x55b81aa62540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 438 heartbeat osd_stat(store_statfs(0x4f7511000/0x0/0x4ffc00000, data 0x23d7165/0x2639000, compress 0x0/0x0/0x0, omap 0x7acb8, meta 0x6035348), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181149696 unmapped: 28835840 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2844987 data_alloc: 234881024 data_used: 12838520
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 438 ms_handle_reset con 0x55b81d242000 session 0x55b81b8876c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 438 ms_handle_reset con 0x55b81d6da000 session 0x55b81aa62700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181149696 unmapped: 28835840 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 439 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81dc07340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181157888 unmapped: 28827648 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 439 ms_handle_reset con 0x55b81dfebc00 session 0x55b81d29cc40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 439 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81dc06c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 439 ms_handle_reset con 0x55b81d242000 session 0x55b81b8861c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 439 ms_handle_reset con 0x55b81d6da000 session 0x55b81caed340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 439 heartbeat osd_stat(store_statfs(0x4f74f0000/0x0/0x4ffc00000, data 0x23ef90d/0x2654000, compress 0x0/0x0/0x0, omap 0x7b2f2, meta 0x6034d0e), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 439 ms_handle_reset con 0x55b81dc13400 session 0x55b81ac04a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 440 ms_handle_reset con 0x55b81d6dbc00 session 0x55b81b4f3c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 440 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81ac056c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181297152 unmapped: 28688384 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 440 ms_handle_reset con 0x55b81d242000 session 0x55b81b887dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 440 ms_handle_reset con 0x55b81d6da000 session 0x55b81df04a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 441 ms_handle_reset con 0x55b81dc13400 session 0x55b81d4f2e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181518336 unmapped: 28467200 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 441 ms_handle_reset con 0x55b81d1f9000 session 0x55b81d29d340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 442 ms_handle_reset con 0x55b81dfebc00 session 0x55b81b4f3340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 442 ms_handle_reset con 0x55b81d242000 session 0x55b81b4f2fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 442 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81dfab6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181526528 unmapped: 28459008 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 442 ms_handle_reset con 0x55b81d6da000 session 0x55b81ac2a8c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.975564003s of 10.198469162s, submitted: 141
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2876734 data_alloc: 234881024 data_used: 12854806
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.7 total, 600.0 interval#012Cumulative writes: 26K writes, 94K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 26K writes, 9437 syncs, 2.83 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 38K keys, 12K commit groups, 1.0 writes per commit group, ingest: 28.12 MB, 0.05 MB/s#012Interval WAL: 12K writes, 5299 syncs, 2.32 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 443 ms_handle_reset con 0x55b81dc13400 session 0x55b81d4f3c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 443 ms_handle_reset con 0x55b81d6da000 session 0x55b81da2e000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 443 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b38b340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181575680 unmapped: 28409856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 443 ms_handle_reset con 0x55b81dfebc00 session 0x55b81b887180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81d242000 session 0x55b81ac04380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181583872 unmapped: 28401664 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81d6d9000 session 0x55b81b8bf6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81de1b500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81d242000 session 0x55b81ac04e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81de2c400 session 0x55b81d715c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181592064 unmapped: 28393472 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81d6da000 session 0x55b81d1d8fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f74a3000/0x0/0x4ffc00000, data 0x2439477/0x26a2000, compress 0x0/0x0/0x0, omap 0x7e161, meta 0x6031e9f), peers [0,2] op hist [1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81dfebc00 session 0x55b81b86fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181616640 unmapped: 28368896 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81d6da000 session 0x55b81a801c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81d242000 session 0x55b81cef0e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 444 ms_handle_reset con 0x55b81de2c400 session 0x55b81dfaac40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 444 handle_osd_map epochs [444,445], i have 444, src has [1,445]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 445 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b887880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181624832 unmapped: 28360704 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2883650 data_alloc: 234881024 data_used: 12851295
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 445 handle_osd_map epochs [445,446], i have 445, src has [1,446]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 446 ms_handle_reset con 0x55b81e0e7400 session 0x55b81cef1880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 446 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b4f3c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181690368 unmapped: 28295168 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 447 ms_handle_reset con 0x55b81d242000 session 0x55b81caed6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 447 ms_handle_reset con 0x55b81d6da000 session 0x55b81d715a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181624832 unmapped: 28360704 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 447 ms_handle_reset con 0x55b81de2c400 session 0x55b81b86e700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 447 ms_handle_reset con 0x55b81d0ad000 session 0x55b81df62c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f749a000/0x0/0x4ffc00000, data 0x24447dd/0x26b0000, compress 0x0/0x0/0x0, omap 0x7f2af, meta 0x6030d51), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 447 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b443dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181633024 unmapped: 28352512 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182190080 unmapped: 27795456 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f749a000/0x0/0x4ffc00000, data 0x24447dd/0x26b0000, compress 0x0/0x0/0x0, omap 0x7f863, meta 0x603079d), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181575680 unmapped: 28409856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.876866341s of 10.194605827s, submitted: 192
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2898547 data_alloc: 234881024 data_used: 12905741
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 448 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81ac2a700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181575680 unmapped: 28409856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181575680 unmapped: 28409856 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 448 ms_handle_reset con 0x55b81d6da000 session 0x55b81b4f2380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 448 ms_handle_reset con 0x55b81d242000 session 0x55b81ced4380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181288960 unmapped: 28696576 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181297152 unmapped: 28688384 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f7405000/0x0/0x4ffc00000, data 0x24d629c/0x2743000, compress 0x0/0x0/0x0, omap 0x7fb8b, meta 0x6030475), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181297152 unmapped: 28688384 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2898123 data_alloc: 234881024 data_used: 12905839
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181305344 unmapped: 28680192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181305344 unmapped: 28680192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181305344 unmapped: 28680192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181305344 unmapped: 28680192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 448 ms_handle_reset con 0x55b81d2ac000 session 0x55b81d714700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 448 ms_handle_reset con 0x55b81de2c400 session 0x55b81b7aa380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181305344 unmapped: 28680192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f7403000/0x0/0x4ffc00000, data 0x24dc29c/0x2749000, compress 0x0/0x0/0x0, omap 0x7fc13, meta 0x60303ed), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2898395 data_alloc: 234881024 data_used: 13057391
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181305344 unmapped: 28680192 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.211813927s of 11.236264229s, submitted: 16
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 448 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81de1bdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 448 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81a800c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181321728 unmapped: 28663808 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 448 ms_handle_reset con 0x55b81d6da000 session 0x55b81caed6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181329920 unmapped: 28655616 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f7401000/0x0/0x4ffc00000, data 0x24dc30e/0x274b000, compress 0x0/0x0/0x0, omap 0x80139, meta 0x602fec7), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 449 ms_handle_reset con 0x55b81d34fc00 session 0x55b81cef1880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 449 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d1d88c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181346304 unmapped: 28639232 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 449 handle_osd_map epochs [449,450], i have 449, src has [1,450]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81d1f8000 session 0x55b81d4f3a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81d242000 session 0x55b81ac04540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81d6da000 session 0x55b81d4f21c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181354496 unmapped: 28631040 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81de2c400 session 0x55b81d137500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2913848 data_alloc: 234881024 data_used: 13057407
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181354496 unmapped: 28631040 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b38a380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81d242000 session 0x55b81b86f880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 186122240 unmapped: 23863296 heap: 209985536 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81caec700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81de2c400 session 0x55b81cb93c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 186785792 unmapped: 35807232 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81dc12000 session 0x55b81df05340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 187023360 unmapped: 35569664 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 heartbeat osd_stat(store_statfs(0x4eef69000/0x0/0x4ffc00000, data 0xa96fb0a/0xabe3000, compress 0x0/0x0/0x0, omap 0x80775, meta 0x602f88b), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b442540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182943744 unmapped: 39649280 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81a770000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81d242000 session 0x55b81da2f6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4076801 data_alloc: 234881024 data_used: 13106543
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 187187200 unmapped: 35405824 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.964659691s of 10.027614594s, submitted: 153
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81de2c400 session 0x55b81d370000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81d6da000 session 0x55b81de1a1c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81b7d4000 session 0x55b81d4f2e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81d1f8000 session 0x55b81dddc540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183328768 unmapped: 39264256 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81b8b7800 session 0x55b81dfaa000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81de1b500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183353344 unmapped: 39239680 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81d242000 session 0x55b81d094540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 heartbeat osd_stat(store_statfs(0x4e43f4000/0x0/0x4ffc00000, data 0x154e69d4/0x15756000, compress 0x0/0x0/0x0, omap 0x81399, meta 0x602ec67), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81df62c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 451 ms_handle_reset con 0x55b81b7d4000 session 0x55b81d1d8fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 451 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81caec380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183386112 unmapped: 39206912 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 451 ms_handle_reset con 0x55b81b8b7800 session 0x55b81b7ab180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 451 ms_handle_reset con 0x55b81de2c400 session 0x55b81d1d8e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 451 ms_handle_reset con 0x55b81d1f8000 session 0x55b81df04a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 451 ms_handle_reset con 0x55b81b7d4000 session 0x55b81b4f2380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183410688 unmapped: 39182336 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 452 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81aa71180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4548322 data_alloc: 234881024 data_used: 13106445
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183418880 unmapped: 39174144 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 452 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81d1d8fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 453 ms_handle_reset con 0x55b81b8b7800 session 0x55b81b4f3c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 453 heartbeat osd_stat(store_statfs(0x4e43eb000/0x0/0x4ffc00000, data 0x154631d0/0x156d5000, compress 0x0/0x0/0x0, omap 0x82195, meta 0x602de6b), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183214080 unmapped: 39378944 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 453 ms_handle_reset con 0x55b81b7d4000 session 0x55b81df05340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 454 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81b38a380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 454 ms_handle_reset con 0x55b81dfe3800 session 0x55b81d29cfc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183246848 unmapped: 39346176 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 455 ms_handle_reset con 0x55b81e0de800 session 0x55b81b86fdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 455 ms_handle_reset con 0x55b81d1f8000 session 0x55b81dddddc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 455 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d4f3880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 39321600 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 455 ms_handle_reset con 0x55b81b7d4000 session 0x55b81de1a1c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 455 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81b887340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183304192 unmapped: 39288832 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 455 ms_handle_reset con 0x55b81b428800 session 0x55b81dfaba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 456 ms_handle_reset con 0x55b81e0de800 session 0x55b81a771500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4554939 data_alloc: 234881024 data_used: 12857119
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 456 ms_handle_reset con 0x55b81b428800 session 0x55b81df62c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 179167232 unmapped: 43425792 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 456 handle_osd_map epochs [456,457], i have 456, src has [1,457]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 457 ms_handle_reset con 0x55b81b7d4000 session 0x55b81b38b340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 179167232 unmapped: 43425792 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.304566383s of 10.939286232s, submitted: 261
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 458 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81a800a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 458 heartbeat osd_stat(store_statfs(0x4e500b000/0x0/0x4ffc00000, data 0x148bfe3d/0x14b3d000, compress 0x0/0x0/0x0, omap 0x83cda, meta 0x602c326), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 458 ms_handle_reset con 0x55b81dfe3800 session 0x55b81ac04540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 179167232 unmapped: 43425792 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 179167232 unmapped: 43425792 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 179167232 unmapped: 43425792 heap: 222593024 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 458 ms_handle_reset con 0x55b81de4e400 session 0x55b81ac048c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4491741 data_alloc: 218103808 data_used: 6613794
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 459 heartbeat osd_stat(store_statfs(0x4e5009000/0x0/0x4ffc00000, data 0x148c1a67/0x14b41000, compress 0x0/0x0/0x0, omap 0x83ddd, meta 0x602c223), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 183648256 unmapped: 59949056 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 459 heartbeat osd_stat(store_statfs(0x4e0006000/0x0/0x4ffc00000, data 0x198c3502/0x19b44000, compress 0x0/0x0/0x0, omap 0x83f68, meta 0x602c098), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192520192 unmapped: 51077120 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 184451072 unmapped: 59146240 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 184918016 unmapped: 58679296 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 189423616 unmapped: 54173696 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 459 handle_osd_map epochs [459,460], i have 460, src has [1,460]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6115149 data_alloc: 218103808 data_used: 6614379
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 460 heartbeat osd_stat(store_statfs(0x4d2c08000/0x0/0x4ffc00000, data 0x26cc3502/0x26f44000, compress 0x0/0x0/0x0, omap 0x83f68, meta 0x602c098), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 194002944 unmapped: 49594368 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182689792 unmapped: 60907520 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 460 ms_handle_reset con 0x55b81de4e400 session 0x55b81ac4efc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 460 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81da38000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.793249130s of 10.365959167s, submitted: 120
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 460 ms_handle_reset con 0x55b81b428800 session 0x55b81b887500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182689792 unmapped: 60907520 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182722560 unmapped: 60874752 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 461 heartbeat osd_stat(store_statfs(0x4ce002000/0x0/0x4ffc00000, data 0x2b8c6aad/0x2bb48000, compress 0x0/0x0/0x0, omap 0x84731, meta 0x602b8cf), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182779904 unmapped: 60817408 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 462 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81ac04fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 462 heartbeat osd_stat(store_statfs(0x4ce000000/0x0/0x4ffc00000, data 0x2b8c868e/0x2bb4a000, compress 0x0/0x0/0x0, omap 0x84835, meta 0x602b7cb), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6445655 data_alloc: 218103808 data_used: 6614636
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182779904 unmapped: 60817408 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182779904 unmapped: 60817408 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 462 ms_handle_reset con 0x55b81b7d4000 session 0x55b81ac2b880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182788096 unmapped: 60809216 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182788096 unmapped: 60809216 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 462 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b7aa1c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 462 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81ac2a700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182755328 unmapped: 60841984 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 463 ms_handle_reset con 0x55b81b428800 session 0x55b81dddd180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 463 handle_osd_map epochs [463,464], i have 463, src has [1,464]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6455901 data_alloc: 218103808 data_used: 6877472
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 60792832 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 464 heartbeat osd_stat(store_statfs(0x4ce001000/0x0/0x4ffc00000, data 0x2b8c86f0/0x2bb4b000, compress 0x0/0x0/0x0, omap 0x84bf8, meta 0x602b408), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81e0de000 session 0x55b81b443180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81b429c00 session 0x55b81b442380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81de4fc00 session 0x55b81ac4fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 464 heartbeat osd_stat(store_statfs(0x4cdff5000/0x0/0x4ffc00000, data 0x2b8cbd99/0x2bb53000, compress 0x0/0x0/0x0, omap 0x8550c, meta 0x602aaf4), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182820864 unmapped: 60776448 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81b428800 session 0x55b81b8876c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d095500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 182853632 unmapped: 60743680 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81b887340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.774193764s of 10.909391403s, submitted: 74
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81e0de000 session 0x55b81b86fdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81b7d5400 session 0x55b81df68a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81aa62700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 184680448 unmapped: 58916864 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81dddcfc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 464 ms_handle_reset con 0x55b81e0de000 session 0x55b81ac2bdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 464 handle_osd_map epochs [464,465], i have 464, src has [1,465]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 465 ms_handle_reset con 0x55b81de4fc00 session 0x55b81d1d8380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 465 heartbeat osd_stat(store_statfs(0x4cd243000/0x0/0x4ffc00000, data 0x2c680dc1/0x2c909000, compress 0x0/0x0/0x0, omap 0x85bee, meta 0x602a412), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 465 ms_handle_reset con 0x55b81b428800 session 0x55b81b442380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 465 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b38a000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181059584 unmapped: 62537728 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 465 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81cef1880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6538854 data_alloc: 218103808 data_used: 6877393
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181059584 unmapped: 62537728 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 465 ms_handle_reset con 0x55b81e0de000 session 0x55b81d095880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181075968 unmapped: 62521344 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 466 heartbeat osd_stat(store_statfs(0x4cd241000/0x0/0x4ffc00000, data 0x2c682936/0x2c90b000, compress 0x0/0x0/0x0, omap 0x865a7, meta 0x6029a59), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 466 ms_handle_reset con 0x55b81de2c000 session 0x55b81b4f3340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180740096 unmapped: 62857216 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 467 heartbeat osd_stat(store_statfs(0x4cce67000/0x0/0x4ffc00000, data 0x2ca594ee/0x2cce3000, compress 0x0/0x0/0x0, omap 0x866ab, meta 0x6029955), peers [0,2] op hist [0,0,0,0,1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 467 ms_handle_reset con 0x55b81e0e6400 session 0x55b81d137500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 63381504 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 467 ms_handle_reset con 0x55b81b7d5400 session 0x55b81cef1180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180224000 unmapped: 63373312 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6578985 data_alloc: 218103808 data_used: 6877408
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180273152 unmapped: 63324160 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180314112 unmapped: 63283200 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180314112 unmapped: 63283200 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180314112 unmapped: 63283200 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 468 heartbeat osd_stat(store_statfs(0x4ccde0000/0x0/0x4ffc00000, data 0x2cadfb25/0x2cd6c000, compress 0x0/0x0/0x0, omap 0x86aff, meta 0x6029501), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180314112 unmapped: 63283200 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6580879 data_alloc: 218103808 data_used: 6877993
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180314112 unmapped: 63283200 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180314112 unmapped: 63283200 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180314112 unmapped: 63283200 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b428800 session 0x55b81d094700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81aa62380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b7d4c00 session 0x55b81d137c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b428800 session 0x55b81d29cc40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.697557449s of 15.368885994s, submitted: 219
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 181018624 unmapped: 62578688 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b8861c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b7d5400 session 0x55b81b7ab6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81e0e6400 session 0x55b81a771c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81e0de000 session 0x55b81d714540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b428800 session 0x55b81b442a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 468 heartbeat osd_stat(store_statfs(0x4ccddf000/0x0/0x4ffc00000, data 0x2cadfb35/0x2cd6d000, compress 0x0/0x0/0x0, omap 0x86aff, meta 0x6029501), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180543488 unmapped: 63053824 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6623407 data_alloc: 218103808 data_used: 6877993
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180543488 unmapped: 63053824 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180543488 unmapped: 63053824 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180543488 unmapped: 63053824 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180543488 unmapped: 63053824 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81da38c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 468 heartbeat osd_stat(store_statfs(0x4cb65c000/0x0/0x4ffc00000, data 0x2d0c2b35/0x2d350000, compress 0x0/0x0/0x0, omap 0x86aff, meta 0x71c9501), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180912128 unmapped: 62685184 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6629206 data_alloc: 218103808 data_used: 6880569
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180854784 unmapped: 62742528 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 468 ms_handle_reset con 0x55b81b7d5400 session 0x55b81ac4fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180871168 unmapped: 62726144 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 468 heartbeat osd_stat(store_statfs(0x4cb631000/0x0/0x4ffc00000, data 0x2d0ecb45/0x2d37b000, compress 0x0/0x0/0x0, omap 0x86dc7, meta 0x71c9239), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180871168 unmapped: 62726144 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180871168 unmapped: 62726144 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.936639786s of 10.162817955s, submitted: 23
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81d1f9400 session 0x55b81aa63500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180895744 unmapped: 62701568 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6664426 data_alloc: 234881024 data_used: 11914057
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81de8ac00 session 0x55b81df68000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180895744 unmapped: 62701568 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180895744 unmapped: 62701568 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180895744 unmapped: 62701568 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4cb62c000/0x0/0x4ffc00000, data 0x2d0ee6e1/0x2d37e000, compress 0x0/0x0/0x0, omap 0x87283, meta 0x71c8d7d), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180895744 unmapped: 62701568 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180895744 unmapped: 62701568 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6664449 data_alloc: 234881024 data_used: 11914057
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 180903936 unmapped: 62693376 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 189251584 unmapped: 54345728 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b428800 session 0x55b81d4f3880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b4f3c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b7d5400 session 0x55b81d136700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81d1f9400 session 0x55b81d4f21c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81de4f800 session 0x55b81df68700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b428800 session 0x55b81d4f2e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81caec700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b7d5400 session 0x55b81dfaa000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81d1f9400 session 0x55b81df04a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190251008 unmapped: 53346304 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca675000/0x0/0x4ffc00000, data 0x2e681753/0x2e32f000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca675000/0x0/0x4ffc00000, data 0x2e681753/0x2e32f000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190341120 unmapped: 53256192 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190341120 unmapped: 53256192 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca675000/0x0/0x4ffc00000, data 0x2e681753/0x2e32f000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6845398 data_alloc: 234881024 data_used: 13646169
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190341120 unmapped: 53256192 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca675000/0x0/0x4ffc00000, data 0x2e681753/0x2e32f000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81dfebc00 session 0x55b81df05a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190349312 unmapped: 53248000 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b428800 session 0x55b81d4a2700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190349312 unmapped: 53248000 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.703333855s of 14.257908821s, submitted: 191
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b4f2fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b7d5400 session 0x55b81a770380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190701568 unmapped: 52895744 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190554112 unmapped: 53043200 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6845786 data_alloc: 234881024 data_used: 13708633
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca650000/0x0/0x4ffc00000, data 0x2e6ad763/0x2e35c000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190676992 unmapped: 52920320 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190676992 unmapped: 52920320 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190676992 unmapped: 52920320 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190676992 unmapped: 52920320 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190676992 unmapped: 52920320 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6855558 data_alloc: 234881024 data_used: 15322457
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca650000/0x0/0x4ffc00000, data 0x2e6ad763/0x2e35c000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190685184 unmapped: 52912128 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca64f000/0x0/0x4ffc00000, data 0x2e6ae763/0x2e35d000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190685184 unmapped: 52912128 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190685184 unmapped: 52912128 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca64f000/0x0/0x4ffc00000, data 0x2e6ae763/0x2e35d000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190685184 unmapped: 52912128 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.987191200s of 11.000589371s, submitted: 5
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 194338816 unmapped: 49258496 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6914262 data_alloc: 234881024 data_used: 15728985
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 194347008 unmapped: 49250304 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 193937408 unmapped: 49659904 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 193937408 unmapped: 49659904 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4c9e36000/0x0/0x4ffc00000, data 0x2eeae763/0x2eb5d000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 193945600 unmapped: 49651712 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81e0e6400 session 0x55b81ced5500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81de2cc00 session 0x55b81df04700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81d34f000 session 0x55b81d370fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4c9e36000/0x0/0x4ffc00000, data 0x2eeae763/0x2eb5d000, compress 0x0/0x0/0x0, omap 0x877f5, meta 0x71c880b), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191627264 unmapped: 51970048 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b428800 session 0x55b81dc07340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6836900 data_alloc: 234881024 data_used: 10746185
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191635456 unmapped: 51961856 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191635456 unmapped: 51961856 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 heartbeat osd_stat(store_statfs(0x4ca45d000/0x0/0x4ffc00000, data 0x2e8a1753/0x2e54f000, compress 0x0/0x0/0x0, omap 0x8825e, meta 0x71c7da2), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191635456 unmapped: 51961856 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 ms_handle_reset con 0x55b81b7d5400 session 0x55b81b442540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 470 ms_handle_reset con 0x55b81de2cc00 session 0x55b81b7aa1c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 193019904 unmapped: 50577408 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 470 heartbeat osd_stat(store_statfs(0x4c9ddb000/0x0/0x4ffc00000, data 0x2f376351/0x2ebcf000, compress 0x0/0x0/0x0, omap 0x88869, meta 0x71c7797), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.350826263s of 10.048379898s, submitted: 197
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 471 ms_handle_reset con 0x55b81e0e6400 session 0x55b81b887a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 471 ms_handle_reset con 0x55b81e0e6400 session 0x55b81de1b500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 471 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81b4f3500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 193101824 unmapped: 50495488 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6936130 data_alloc: 234881024 data_used: 10742105
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192864256 unmapped: 50733056 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 472 ms_handle_reset con 0x55b81b428800 session 0x55b81a800c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 472 ms_handle_reset con 0x55b81d34f000 session 0x55b81b4f2a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192897024 unmapped: 50700288 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192905216 unmapped: 50692096 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 473 heartbeat osd_stat(store_statfs(0x4c9dd5000/0x0/0x4ffc00000, data 0x2f379add/0x2ebd5000, compress 0x0/0x0/0x0, omap 0x88d2f, meta 0x71c72d1), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192946176 unmapped: 50651136 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 473 ms_handle_reset con 0x55b81de2cc00 session 0x55b81caec8c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 473 ms_handle_reset con 0x55b81b7d5400 session 0x55b81d095dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192954368 unmapped: 50642944 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6832290 data_alloc: 234881024 data_used: 10742089
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192954368 unmapped: 50642944 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 473 ms_handle_reset con 0x55b81d1f9400 session 0x55b81dddc700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 473 ms_handle_reset con 0x55b81dfebc00 session 0x55b81caed340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 473 ms_handle_reset con 0x55b81b428800 session 0x55b81dddc1c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 473 ms_handle_reset con 0x55b81b7d2c00 session 0x55b81d393a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190947328 unmapped: 52649984 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 473 handle_osd_map epochs [473,474], i have 473, src has [1,474]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 474 ms_handle_reset con 0x55b81b428800 session 0x55b81d714700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190947328 unmapped: 52649984 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 52641792 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 474 heartbeat osd_stat(store_statfs(0x4cb232000/0x0/0x4ffc00000, data 0x2da7a255/0x2d72c000, compress 0x0/0x0/0x0, omap 0x89d98, meta 0x71c6268), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 474 ms_handle_reset con 0x55b81d1f9400 session 0x55b81d136000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 474 ms_handle_reset con 0x55b81b7d5400 session 0x55b81aa71340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 52641792 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 474 heartbeat osd_stat(store_statfs(0x4cb232000/0x0/0x4ffc00000, data 0x2da7a255/0x2d72c000, compress 0x0/0x0/0x0, omap 0x89e20, meta 0x71c61e0), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.172925949s of 11.021576881s, submitted: 110
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 474 ms_handle_reset con 0x55b81e0e6400 session 0x55b81dfab500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6758028 data_alloc: 218103808 data_used: 8437975
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 52641792 heap: 243597312 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 474 heartbeat osd_stat(store_statfs(0x4cb27f000/0x0/0x4ffc00000, data 0x2da7a265/0x2d72d000, compress 0x0/0x0/0x0, omap 0x8a084, meta 0x71c5f7c), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 228777984 unmapped: 27418624 heap: 256196608 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 220471296 unmapped: 39927808 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 195354624 unmapped: 65044480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 474 ms_handle_reset con 0x55b81e0e3c00 session 0x55b81d393c00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 474 heartbeat osd_stat(store_statfs(0x4c7a7f000/0x0/0x4ffc00000, data 0x3127a265/0x30f2d000, compress 0x0/0x0/0x0, omap 0x8a3f8, meta 0x71c5c08), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 474 ms_handle_reset con 0x55b81b428800 session 0x55b81b8bfdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 474 ms_handle_reset con 0x55b81b7d5400 session 0x55b81d4f36c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 199811072 unmapped: 60588032 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7472125 data_alloc: 218103808 data_used: 7979792
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 204398592 unmapped: 56000512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191979520 unmapped: 68419584 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200835072 unmapped: 59564032 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 198017024 unmapped: 62382080 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 475 heartbeat osd_stat(store_statfs(0x4b7a7c000/0x0/0x4ffc00000, data 0x4127bce4/0x40f30000, compress 0x0/0x0/0x0, omap 0x8a691, meta 0x71c596f), peers [0,2] op hist [0,0,0,0,0,0,1,1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202620928 unmapped: 57778176 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.388038635s of 10.005324364s, submitted: 118
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8614517 data_alloc: 218103808 data_used: 7980064
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 56393728 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 475 ms_handle_reset con 0x55b81d34f000 session 0x55b81ac4e700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 475 ms_handle_reset con 0x55b81dfebc00 session 0x55b81df636c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 475 ms_handle_reset con 0x55b81d1f9400 session 0x55b81df04700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 475 ms_handle_reset con 0x55b81b428800 session 0x55b81dddd340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 195633152 unmapped: 64765952 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 475 ms_handle_reset con 0x55b81b7d5400 session 0x55b81df62c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 475 ms_handle_reset con 0x55b81d34f000 session 0x55b81da39340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 197517312 unmapped: 62881792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 475 ms_handle_reset con 0x55b81dfebc00 session 0x55b81b38bc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 475 heartbeat osd_stat(store_statfs(0x4cb27d000/0x0/0x4ffc00000, data 0x2da7bcd4/0x2d72f000, compress 0x0/0x0/0x0, omap 0x8a97d, meta 0x71c5683), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 475 ms_handle_reset con 0x55b81e0e6400 session 0x55b81dddc380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 196599808 unmapped: 63799296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 475 handle_osd_map epochs [476,476], i have 475, src has [1,476]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 476 ms_handle_reset con 0x55b81b428800 session 0x55b81dfaba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 476 ms_handle_reset con 0x55b81b7d5400 session 0x55b81a770fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 196599808 unmapped: 63799296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 476 ms_handle_reset con 0x55b81d34f000 session 0x55b81d4f3500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6869122 data_alloc: 218103808 data_used: 7980064
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 196599808 unmapped: 63799296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 196599808 unmapped: 63799296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 196624384 unmapped: 63774720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 477 handle_osd_map epochs [477,478], i have 477, src has [1,478]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 478 ms_handle_reset con 0x55b81dfebc00 session 0x55b81d1d8fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 478 ms_handle_reset con 0x55b81cf1b000 session 0x55b81ac4fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 478 heartbeat osd_stat(store_statfs(0x4cb27e000/0x0/0x4ffc00000, data 0x2d494400/0x2d72c000, compress 0x0/0x0/0x0, omap 0x8b497, meta 0x71c4b69), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 196640768 unmapped: 63758336 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 478 heartbeat osd_stat(store_statfs(0x4cca95000/0x0/0x4ffc00000, data 0x2b8e3f2d/0x2bb7a000, compress 0x0/0x0/0x0, omap 0x8b621, meta 0x71c49df), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 196640768 unmapped: 63758336 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.156767845s of 10.049361229s, submitted: 202
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6668998 data_alloc: 218103808 data_used: 6428079
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 196935680 unmapped: 63463424 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 480 ms_handle_reset con 0x55b81b428800 session 0x55b81dddddc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 480 ms_handle_reset con 0x55b81d34f000 session 0x55b81b8bf6c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 480 ms_handle_reset con 0x55b81b7d5400 session 0x55b81df69500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 197427200 unmapped: 62971904 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 481 ms_handle_reset con 0x55b81dfebc00 session 0x55b81aa63340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 481 ms_handle_reset con 0x55b81e0a3000 session 0x55b81df68fc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3266773 data_alloc: 218103808 data_used: 6428079
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f6de3000/0x0/0x4ffc00000, data 0x192935d/0x1bc3000, compress 0x0/0x0/0x0, omap 0x8be45, meta 0x71c41bb), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f6de4000/0x0/0x4ffc00000, data 0x192ae48/0x1bc6000, compress 0x0/0x0/0x0, omap 0x8c5e3, meta 0x71c3a1d), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 483 ms_handle_reset con 0x55b81b428800 session 0x55b81df69340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3273005 data_alloc: 218103808 data_used: 6428079
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.765593529s of 11.252699852s, submitted: 254
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 483 handle_osd_map epochs [483,484], i have 483, src has [1,484]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 484 ms_handle_reset con 0x55b81b7d5400 session 0x55b81ac4ec40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 69353472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 484 ms_handle_reset con 0x55b81d34f000 session 0x55b81d136700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f6ddd000/0x0/0x4ffc00000, data 0x192e557/0x1bcf000, compress 0x0/0x0/0x0, omap 0x8c97f, meta 0x71c3681), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191070208 unmapped: 69328896 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 191070208 unmapped: 69328896 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 484 ms_handle_reset con 0x55b81dfebc00 session 0x55b81ac04540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 484 ms_handle_reset con 0x55b81ceb1000 session 0x55b81a800380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 484 ms_handle_reset con 0x55b81ceb3800 session 0x55b81b8bf880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 194576384 unmapped: 65822720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 484 ms_handle_reset con 0x55b81b428800 session 0x55b81caec1c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 484 ms_handle_reset con 0x55b81b7d5400 session 0x55b81dddc700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458333 data_alloc: 218103808 data_used: 6428079
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 485 ms_handle_reset con 0x55b81d34f000 session 0x55b81b7abdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192487424 unmapped: 67911680 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 485 ms_handle_reset con 0x55b81dfebc00 session 0x55b81ac4ea80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192487424 unmapped: 67911680 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192487424 unmapped: 67911680 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 485 ms_handle_reset con 0x55b81b428800 session 0x55b81dfaa700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f3a7d000/0x0/0x4ffc00000, data 0x3ae91b7/0x3d8d000, compress 0x0/0x0/0x0, omap 0x8cf92, meta 0x836306e), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 485 ms_handle_reset con 0x55b81b7d5400 session 0x55b81ac2bdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192487424 unmapped: 67911680 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 485 handle_osd_map epochs [486,486], i have 485, src has [1,486]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 486 ms_handle_reset con 0x55b81ceb3800 session 0x55b81b8868c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 486 ms_handle_reset con 0x55b81d34f000 session 0x55b81dfaa1c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192487424 unmapped: 67911680 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 486 heartbeat osd_stat(store_statfs(0x4f3a7b000/0x0/0x4ffc00000, data 0x3aead45/0x3d8f000, compress 0x0/0x0/0x0, omap 0x8d11b, meta 0x8362ee5), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464553 data_alloc: 218103808 data_used: 6428664
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192487424 unmapped: 67911680 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 486 ms_handle_reset con 0x55b81dfe9400 session 0x55b81da2f880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.714389801s of 10.164520264s, submitted: 90
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 486 handle_osd_map epochs [486,487], i have 486, src has [1,487]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192503808 unmapped: 67895296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 487 ms_handle_reset con 0x55b81b428800 session 0x55b81dfaa700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 487 ms_handle_reset con 0x55b81b7d5400 session 0x55b81aa63340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192520192 unmapped: 67878912 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 487 ms_handle_reset con 0x55b81ceb3800 session 0x55b81ac4fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192520192 unmapped: 67878912 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192520192 unmapped: 67878912 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468255 data_alloc: 218103808 data_used: 6428664
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192520192 unmapped: 67878912 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f3a78000/0x0/0x4ffc00000, data 0x3aee2fc/0x3d92000, compress 0x0/0x0/0x0, omap 0x8d985, meta 0x836267b), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192520192 unmapped: 67878912 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 488 ms_handle_reset con 0x55b81d2ad800 session 0x55b81d1d8e00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192520192 unmapped: 67878912 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192520192 unmapped: 67878912 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 488 handle_osd_map epochs [488,489], i have 488, src has [1,489]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192733184 unmapped: 67665920 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 489 ms_handle_reset con 0x55b81dfe2c00 session 0x55b81dc06c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3507105 data_alloc: 234881024 data_used: 11512925
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f3a74000/0x0/0x4ffc00000, data 0x3aefd8b/0x3d96000, compress 0x0/0x0/0x0, omap 0x8dde9, meta 0x8362217), peers [0,2] op hist [1])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 490 ms_handle_reset con 0x55b81b428800 session 0x55b81df048c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192724992 unmapped: 67674112 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 490 handle_osd_map epochs [490,491], i have 490, src has [1,491]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.309647560s of 10.379746437s, submitted: 58
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 491 ms_handle_reset con 0x55b81de52000 session 0x55b81a771500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192733184 unmapped: 67665920 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 491 heartbeat osd_stat(store_statfs(0x4f3a6c000/0x0/0x4ffc00000, data 0x3af34c3/0x3d9c000, compress 0x0/0x0/0x0, omap 0x8e610, meta 0x83619f0), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 491 handle_osd_map epochs [492,492], i have 491, src has [1,492]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 491 handle_osd_map epochs [491,492], i have 492, src has [1,492]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192733184 unmapped: 67665920 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192782336 unmapped: 67616768 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 492 handle_osd_map epochs [493,493], i have 492, src has [1,493]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 493 ms_handle_reset con 0x55b81b7d5400 session 0x55b81da2e000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192798720 unmapped: 67600384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 493 ms_handle_reset con 0x55b81ceb3800 session 0x55b81df62c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517610 data_alloc: 234881024 data_used: 11513197
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192798720 unmapped: 67600384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 192798720 unmapped: 67600384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f3a69000/0x0/0x4ffc00000, data 0x3af6caf/0x3da1000, compress 0x0/0x0/0x0, omap 0x8ec7c, meta 0x8361384), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 198950912 unmapped: 61448192 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f3a69000/0x0/0x4ffc00000, data 0x3af6caf/0x3da1000, compress 0x0/0x0/0x0, omap 0x8ec7c, meta 0x8361384), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206233600 unmapped: 54165504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 201719808 unmapped: 58679296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 493 ms_handle_reset con 0x55b81d2ad800 session 0x55b81da2fa40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594014 data_alloc: 234881024 data_used: 12892626
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 201719808 unmapped: 58679296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 201719808 unmapped: 58679296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f2f18000/0x0/0x4ffc00000, data 0x4647d21/0x48f4000, compress 0x0/0x0/0x0, omap 0x8ef68, meta 0x8361098), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 201719808 unmapped: 58679296 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.977608681s of 11.364136696s, submitted: 187
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 493 ms_handle_reset con 0x55b81b428800 session 0x55b81d370000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 493 handle_osd_map epochs [494,494], i have 493, src has [1,494]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 494 ms_handle_reset con 0x55b81b7d5400 session 0x55b81b4436c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200949760 unmapped: 59449344 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 494 handle_osd_map epochs [495,495], i have 494, src has [1,495]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 495 ms_handle_reset con 0x55b81ceb3800 session 0x55b81ac05dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 495 ms_handle_reset con 0x55b81d2ad800 session 0x55b81d4f3340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200957952 unmapped: 59441152 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3600822 data_alloc: 234881024 data_used: 12896836
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200957952 unmapped: 59441152 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 495 ms_handle_reset con 0x55b81de52000 session 0x55b81b4f3a40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 495 handle_osd_map epochs [496,496], i have 495, src has [1,496]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 495 handle_osd_map epochs [496,496], i have 496, src has [1,496]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 496 handle_osd_map epochs [496,497], i have 496, src has [1,497]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 497 ms_handle_reset con 0x55b81de52000 session 0x55b81d714380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200990720 unmapped: 59408384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 497 ms_handle_reset con 0x55b81b428800 session 0x55b81a770380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 497 heartbeat osd_stat(store_statfs(0x4f2f0a000/0x0/0x4ffc00000, data 0x464ec8b/0x48fe000, compress 0x0/0x0/0x0, omap 0x8ff35, meta 0x83600cb), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200990720 unmapped: 59408384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200990720 unmapped: 59408384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 497 ms_handle_reset con 0x55b81d34f000 session 0x55b81d29cc40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 497 ms_handle_reset con 0x55b81b7d5400 session 0x55b81d29c540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200990720 unmapped: 59408384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602544 data_alloc: 234881024 data_used: 12909623
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 497 heartbeat osd_stat(store_statfs(0x4f2f0e000/0x0/0x4ffc00000, data 0x464ec8b/0x48fe000, compress 0x0/0x0/0x0, omap 0x90345, meta 0x835fcbb), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200990720 unmapped: 59408384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200990720 unmapped: 59408384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200990720 unmapped: 59408384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.916290283s of 10.067088127s, submitted: 131
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 497 ms_handle_reset con 0x55b81ceb3800 session 0x55b8190f3340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 200990720 unmapped: 59408384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 497 handle_osd_map epochs [498,498], i have 497, src has [1,498]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 498 ms_handle_reset con 0x55b81b428800 session 0x55b81d4f3500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 498 handle_osd_map epochs [499,499], i have 498, src has [1,499]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 201007104 unmapped: 59392000 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 499 ms_handle_reset con 0x55b81b7d5400 session 0x55b81b4f3180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 499 ms_handle_reset con 0x55b81de52000 session 0x55b81b7aa380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616776 data_alloc: 234881024 data_used: 12906526
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 201007104 unmapped: 59392000 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 499 handle_osd_map epochs [500,500], i have 499, src has [1,500]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 500 ms_handle_reset con 0x55b81d2ad800 session 0x55b81aa62a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 500 heartbeat osd_stat(store_statfs(0x4f2eff000/0x0/0x4ffc00000, data 0x4653fe8/0x490b000, compress 0x0/0x0/0x0, omap 0x91116, meta 0x835eeea), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 500 heartbeat osd_stat(store_statfs(0x4f2eff000/0x0/0x4ffc00000, data 0x4653fe8/0x490b000, compress 0x0/0x0/0x0, omap 0x91116, meta 0x835eeea), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 201023488 unmapped: 59375616 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 500 handle_osd_map epochs [501,501], i have 500, src has [1,501]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 501 ms_handle_reset con 0x55b81b8b6c00 session 0x55b81b7ab340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 501 ms_handle_reset con 0x55b81d34f000 session 0x55b81da2e700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202072064 unmapped: 58327040 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 501 ms_handle_reset con 0x55b81b428800 session 0x55b81a771340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202072064 unmapped: 58327040 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 501 heartbeat osd_stat(store_statfs(0x4f2efc000/0x0/0x4ffc00000, data 0x4655bf6/0x4910000, compress 0x0/0x0/0x0, omap 0x91505, meta 0x835eafb), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202072064 unmapped: 58327040 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628821 data_alloc: 234881024 data_used: 12906526
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202072064 unmapped: 58327040 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 501 handle_osd_map epochs [502,502], i have 501, src has [1,502]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 502 ms_handle_reset con 0x55b81d2ad800 session 0x55b81b38a380
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202080256 unmapped: 58318848 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 502 handle_osd_map epochs [503,503], i have 502, src has [1,503]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 502 handle_osd_map epochs [502,503], i have 503, src has [1,503]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 503 ms_handle_reset con 0x55b81b7d5400 session 0x55b81a771880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202088448 unmapped: 58310656 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 503 ms_handle_reset con 0x55b81de52000 session 0x55b81b86fdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.190000534s of 10.289929390s, submitted: 72
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202088448 unmapped: 58310656 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 503 handle_osd_map epochs [504,504], i have 503, src has [1,504]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 504 ms_handle_reset con 0x55b81b428800 session 0x55b81b8bf340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 504 ms_handle_reset con 0x55b81b7d5400 session 0x55b81dddcfc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 504 ms_handle_reset con 0x55b81d2ad800 session 0x55b81ac2ba40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202104832 unmapped: 58294272 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 504 ms_handle_reset con 0x55b81d34f000 session 0x55b81dddc540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 504 heartbeat osd_stat(store_statfs(0x4f2ef1000/0x0/0x4ffc00000, data 0x465932e/0x4916000, compress 0x0/0x0/0x0, omap 0x91e19, meta 0x835e1e7), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 504 ms_handle_reset con 0x55b81de52000 session 0x55b81aa63500
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3647189 data_alloc: 234881024 data_used: 12906526
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 58556416 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 504 ms_handle_reset con 0x55b81b428800 session 0x55b81caedc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 58556416 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 58556416 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 504 ms_handle_reset con 0x55b81d34f000 session 0x55b81a800a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 504 handle_osd_map epochs [505,505], i have 504, src has [1,505]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 201850880 unmapped: 58548224 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 505 ms_handle_reset con 0x55b81e0de000 session 0x55b81ac4e540
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 505 heartbeat osd_stat(store_statfs(0x4f2ef1000/0x0/0x4ffc00000, data 0x465aedf/0x4919000, compress 0x0/0x0/0x0, omap 0x925af, meta 0x835da51), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 505 handle_osd_map epochs [506,506], i have 505, src has [1,506]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 506 ms_handle_reset con 0x55b81aca1800 session 0x55b81b4f2a80
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 506 ms_handle_reset con 0x55b81d2ad800 session 0x55b81da38000
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202055680 unmapped: 58343424 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-mgr[75599]: log_channel(audit) log [DBG] : from='client.19508 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 00:14:20 np0005603435 ceph-mgr[75599]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 00:14:20 np0005603435 ceph-95d2f419-0dd0-56f2-a094-353f8c7597ed-mgr-compute-0-wyngmr[75595]: 2026-01-31T05:14:20.981+0000 7f77961f6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3660826 data_alloc: 234881024 data_used: 12969518
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 506 handle_osd_map epochs [507,507], i have 506, src has [1,507]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 507 ms_handle_reset con 0x55b81aca1800 session 0x55b81cef0c40
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202055680 unmapped: 58343424 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 507 ms_handle_reset con 0x55b81b428800 session 0x55b81dfaa700
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202055680 unmapped: 58343424 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 507 heartbeat osd_stat(store_statfs(0x4f2ee9000/0x0/0x4ffc00000, data 0x466022f/0x4921000, compress 0x0/0x0/0x0, omap 0x9304b, meta 0x835cfb5), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 507 handle_osd_map epochs [508,508], i have 507, src has [1,508]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202072064 unmapped: 58327040 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 508 ms_handle_reset con 0x55b81d34f000 session 0x55b81ac05dc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202080256 unmapped: 58318848 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202080256 unmapped: 58318848 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3661476 data_alloc: 234881024 data_used: 12970115
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 508 handle_osd_map epochs [509,509], i have 508, src has [1,509]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.941471100s of 12.177360535s, submitted: 112
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202096640 unmapped: 58302464 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 509 handle_osd_map epochs [510,510], i have 509, src has [1,510]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 510 ms_handle_reset con 0x55b81e0de000 session 0x55b81ac4fc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202113024 unmapped: 58286080 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 510 heartbeat osd_stat(store_statfs(0x4f2ee1000/0x0/0x4ffc00000, data 0x4665629/0x4929000, compress 0x0/0x0/0x0, omap 0x93c77, meta 0x835c389), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 202129408 unmapped: 58269696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 510 handle_osd_map epochs [511,511], i have 510, src has [1,511]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 511 ms_handle_reset con 0x55b81d34f800 session 0x55b81d29d340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 511 ms_handle_reset con 0x55b81d34f800 session 0x55b81de1bc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 204374016 unmapped: 56025088 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 511 heartbeat osd_stat(store_statfs(0x4f2ee2000/0x0/0x4ffc00000, data 0x46655c7/0x4928000, compress 0x0/0x0/0x0, omap 0x93f89, meta 0x835c077), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 511 handle_osd_map epochs [512,512], i have 511, src has [1,512]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 204742656 unmapped: 55656448 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707654 data_alloc: 234881024 data_used: 17145873
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 204742656 unmapped: 55656448 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f2ade000/0x0/0x4ffc00000, data 0x4a68c28/0x4d2c000, compress 0x0/0x0/0x0, omap 0x94349, meta 0x835bcb7), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 204742656 unmapped: 55656448 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 204742656 unmapped: 55656448 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 204742656 unmapped: 55656448 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 512 handle_osd_map epochs [513,513], i have 512, src has [1,513]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205791232 unmapped: 54607872 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3711292 data_alloc: 234881024 data_used: 17145873
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205791232 unmapped: 54607872 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205791232 unmapped: 54607872 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2adb000/0x0/0x4ffc00000, data 0x4a6a6df/0x4d2f000, compress 0x0/0x0/0x0, omap 0x94aec, meta 0x835b514), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205791232 unmapped: 54607872 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205791232 unmapped: 54607872 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205791232 unmapped: 54607872 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2adb000/0x0/0x4ffc00000, data 0x4a6a6df/0x4d2f000, compress 0x0/0x0/0x0, omap 0x94aec, meta 0x835b514), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.626645088s of 14.831501961s, submitted: 146
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3709996 data_alloc: 234881024 data_used: 17145873
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2adb000/0x0/0x4ffc00000, data 0x4a6a6df/0x4d2f000, compress 0x0/0x0/0x0, omap 0x94aec, meta 0x835b514), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205840384 unmapped: 54558720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205840384 unmapped: 54558720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205840384 unmapped: 54558720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205840384 unmapped: 54558720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205840384 unmapped: 54558720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3709996 data_alloc: 234881024 data_used: 17145873
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205840384 unmapped: 54558720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2add000/0x0/0x4ffc00000, data 0x4a6a6df/0x4d2f000, compress 0x0/0x0/0x0, omap 0x94aec, meta 0x835b514), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205840384 unmapped: 54558720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205840384 unmapped: 54558720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205840384 unmapped: 54558720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205840384 unmapped: 54558720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3709996 data_alloc: 234881024 data_used: 17145873
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2add000/0x0/0x4ffc00000, data 0x4a6a6df/0x4d2f000, compress 0x0/0x0/0x0, omap 0x94aec, meta 0x835b514), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205840384 unmapped: 54558720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205840384 unmapped: 54558720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2add000/0x0/0x4ffc00000, data 0x4a6a6df/0x4d2f000, compress 0x0/0x0/0x0, omap 0x94aec, meta 0x835b514), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205840384 unmapped: 54558720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 205840384 unmapped: 54558720 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.731771469s of 14.774494171s, submitted: 4
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206184448 unmapped: 54214656 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 ms_handle_reset con 0x55b81b7d5400 session 0x55b81b7ab340
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2add000/0x0/0x4ffc00000, data 0x4a6a6df/0x4d2f000, compress 0x0/0x0/0x0, omap 0x94aec, meta 0x835b514), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3714524 data_alloc: 234881024 data_used: 17989649
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 ms_handle_reset con 0x55b81aca1800 session 0x55b81b38b180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 54206464 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 54206464 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 54206464 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 ms_handle_reset con 0x55b81b428800 session 0x55b81b8be8c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206200832 unmapped: 54198272 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 ms_handle_reset con 0x55b81d34f000 session 0x55b81d715880
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x943ad, meta 0x835bc53), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206209024 unmapped: 54190080 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692840 data_alloc: 234881024 data_used: 17989614
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x943ad, meta 0x835bc53), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206209024 unmapped: 54190080 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x943ad, meta 0x835bc53), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206209024 unmapped: 54190080 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 ms_handle_reset con 0x55b81aca1800 session 0x55b81b7ab180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 ms_handle_reset con 0x55b81b428800 session 0x55b81b4428c0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206217216 unmapped: 54181888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206225408 unmapped: 54173696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206225408 unmapped: 54173696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206225408 unmapped: 54173696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206225408 unmapped: 54173696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206225408 unmapped: 54173696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206225408 unmapped: 54173696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206225408 unmapped: 54173696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206225408 unmapped: 54173696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206225408 unmapped: 54173696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206225408 unmapped: 54173696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206225408 unmapped: 54173696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206225408 unmapped: 54173696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206225408 unmapped: 54173696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206225408 unmapped: 54173696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206225408 unmapped: 54173696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206233600 unmapped: 54165504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206233600 unmapped: 54165504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206233600 unmapped: 54165504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206233600 unmapped: 54165504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206233600 unmapped: 54165504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206233600 unmapped: 54165504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206233600 unmapped: 54165504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206233600 unmapped: 54165504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206233600 unmapped: 54165504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 ms_handle_reset con 0x55b81de53c00 session 0x55b81b8bfc00
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206233600 unmapped: 54165504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206233600 unmapped: 54165504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 ms_handle_reset con 0x55b81bb32000 session 0x55b81da2f180
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 ms_handle_reset con 0x55b81dfea800 session 0x55b81dfabdc0
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206241792 unmapped: 54157312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206241792 unmapped: 54157312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206241792 unmapped: 54157312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206241792 unmapped: 54157312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206241792 unmapped: 54157312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206241792 unmapped: 54157312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206241792 unmapped: 54157312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206241792 unmapped: 54157312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206241792 unmapped: 54157312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 54149120 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 54149120 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 54149120 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206249984 unmapped: 54149120 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 54132736 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: do_command 'config diff' '{prefix=config diff}'
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: do_command 'config show' '{prefix=config show}'
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 00:14:20 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206520320 unmapped: 53878784 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 206741504 unmapped: 53657600 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207020032 unmapped: 53379072 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: do_command 'log dump' '{prefix=log dump}'
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: do_command 'perf dump' '{prefix=perf dump}'
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: do_command 'perf schema' '{prefix=perf schema}'
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207093760 unmapped: 53305344 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207093760 unmapped: 53305344 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207093760 unmapped: 53305344 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 ms_handle_reset con 0x55b81dc12400 session 0x55b81d4f2fc0
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207093760 unmapped: 53305344 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207093760 unmapped: 53305344 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207093760 unmapped: 53305344 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207101952 unmapped: 53297152 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207101952 unmapped: 53297152 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207101952 unmapped: 53297152 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207110144 unmapped: 53288960 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207110144 unmapped: 53288960 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207110144 unmapped: 53288960 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207110144 unmapped: 53288960 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207110144 unmapped: 53288960 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207110144 unmapped: 53288960 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207110144 unmapped: 53288960 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 112.662727356s of 112.823501587s, submitted: 37
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207118336 unmapped: 53280768 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207118336 unmapped: 53280768 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207118336 unmapped: 53280768 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207118336 unmapped: 53280768 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207118336 unmapped: 53280768 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207118336 unmapped: 53280768 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207118336 unmapped: 53280768 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207118336 unmapped: 53280768 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207126528 unmapped: 53272576 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207126528 unmapped: 53272576 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207134720 unmapped: 53264384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207134720 unmapped: 53264384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207134720 unmapped: 53264384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207134720 unmapped: 53264384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207134720 unmapped: 53264384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207134720 unmapped: 53264384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207134720 unmapped: 53264384 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207142912 unmapped: 53256192 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207142912 unmapped: 53256192 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207142912 unmapped: 53256192 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207142912 unmapped: 53256192 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207142912 unmapped: 53256192 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207142912 unmapped: 53256192 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207151104 unmapped: 53248000 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207159296 unmapped: 53239808 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207159296 unmapped: 53239808 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207159296 unmapped: 53239808 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207167488 unmapped: 53231616 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207167488 unmapped: 53231616 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207167488 unmapped: 53231616 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207167488 unmapped: 53231616 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207167488 unmapped: 53231616 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207167488 unmapped: 53231616 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207175680 unmapped: 53223424 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207175680 unmapped: 53223424 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207175680 unmapped: 53223424 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207175680 unmapped: 53223424 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207175680 unmapped: 53223424 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207175680 unmapped: 53223424 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207175680 unmapped: 53223424 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207183872 unmapped: 53215232 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207192064 unmapped: 53207040 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207192064 unmapped: 53207040 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207192064 unmapped: 53207040 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207192064 unmapped: 53207040 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207192064 unmapped: 53207040 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207192064 unmapped: 53207040 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207192064 unmapped: 53207040 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207200256 unmapped: 53198848 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207200256 unmapped: 53198848 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207200256 unmapped: 53198848 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207208448 unmapped: 53190656 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207208448 unmapped: 53190656 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207208448 unmapped: 53190656 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207208448 unmapped: 53190656 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207208448 unmapped: 53190656 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207208448 unmapped: 53190656 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207216640 unmapped: 53182464 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207216640 unmapped: 53182464 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207216640 unmapped: 53182464 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207216640 unmapped: 53182464 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207216640 unmapped: 53182464 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207216640 unmapped: 53182464 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207216640 unmapped: 53182464 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207216640 unmapped: 53182464 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207233024 unmapped: 53166080 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207233024 unmapped: 53166080 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207233024 unmapped: 53166080 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207233024 unmapped: 53166080 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207233024 unmapped: 53166080 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207233024 unmapped: 53166080 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207233024 unmapped: 53166080 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207233024 unmapped: 53166080 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 53157888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 53157888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 53157888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 53157888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 53157888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 53157888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 53157888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 53157888 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 53149696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 53149696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 53149696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 53149696 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 53141504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 53141504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 53141504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 53141504 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 53133312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 53133312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 53133312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 53133312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 53133312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 53133312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 53133312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 53133312 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207273984 unmapped: 53125120 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207273984 unmapped: 53125120 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207273984 unmapped: 53125120 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207273984 unmapped: 53125120 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207273984 unmapped: 53125120 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207273984 unmapped: 53125120 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207273984 unmapped: 53125120 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207282176 unmapped: 53116928 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207282176 unmapped: 53116928 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207282176 unmapped: 53116928 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207282176 unmapped: 53116928 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207282176 unmapped: 53116928 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207298560 unmapped: 53100544 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207298560 unmapped: 53100544 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207298560 unmapped: 53100544 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207298560 unmapped: 53100544 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207306752 unmapped: 53092352 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207306752 unmapped: 53092352 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207306752 unmapped: 53092352 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207306752 unmapped: 53092352 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207306752 unmapped: 53092352 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207306752 unmapped: 53092352 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207306752 unmapped: 53092352 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207306752 unmapped: 53092352 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207306752 unmapped: 53092352 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207306752 unmapped: 53092352 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207314944 unmapped: 53084160 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207314944 unmapped: 53084160 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207314944 unmapped: 53084160 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207314944 unmapped: 53084160 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207314944 unmapped: 53084160 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207314944 unmapped: 53084160 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 53075968 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 53075968 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 53075968 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 53075968 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 53075968 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 53075968 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 53075968 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 53075968 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207339520 unmapped: 53059584 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207339520 unmapped: 53059584 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207339520 unmapped: 53059584 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207347712 unmapped: 53051392 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207347712 unmapped: 53051392 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207347712 unmapped: 53051392 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207347712 unmapped: 53051392 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207347712 unmapped: 53051392 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207347712 unmapped: 53051392 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207347712 unmapped: 53051392 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207347712 unmapped: 53051392 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207347712 unmapped: 53051392 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207347712 unmapped: 53051392 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207347712 unmapped: 53051392 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207347712 unmapped: 53051392 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207355904 unmapped: 53043200 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207355904 unmapped: 53043200 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207355904 unmapped: 53043200 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207355904 unmapped: 53043200 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207355904 unmapped: 53043200 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207355904 unmapped: 53043200 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207355904 unmapped: 53043200 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207364096 unmapped: 53035008 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207364096 unmapped: 53035008 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207364096 unmapped: 53035008 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 53026816 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 53026816 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 53026816 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 53026816 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 53026816 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 53018624 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 53018624 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 53018624 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 53018624 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 53018624 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 53018624 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 53018624 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 53018624 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207388672 unmapped: 53010432 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207388672 unmapped: 53010432 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207388672 unmapped: 53010432 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207388672 unmapped: 53010432 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207388672 unmapped: 53010432 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207388672 unmapped: 53010432 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207396864 unmapped: 53002240 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207396864 unmapped: 53002240 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207396864 unmapped: 53002240 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207396864 unmapped: 53002240 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207396864 unmapped: 53002240 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207405056 unmapped: 52994048 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207405056 unmapped: 52994048 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207405056 unmapped: 52994048 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207405056 unmapped: 52994048 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207405056 unmapped: 52994048 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207413248 unmapped: 52985856 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207413248 unmapped: 52985856 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207413248 unmapped: 52985856 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207421440 unmapped: 52977664 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207421440 unmapped: 52977664 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207421440 unmapped: 52977664 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207421440 unmapped: 52977664 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207421440 unmapped: 52977664 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207421440 unmapped: 52977664 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207421440 unmapped: 52977664 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207429632 unmapped: 52969472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207429632 unmapped: 52969472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207429632 unmapped: 52969472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207429632 unmapped: 52969472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207429632 unmapped: 52969472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207429632 unmapped: 52969472 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207437824 unmapped: 52961280 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207437824 unmapped: 52961280 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207446016 unmapped: 52953088 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207446016 unmapped: 52953088 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207446016 unmapped: 52953088 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207446016 unmapped: 52953088 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207446016 unmapped: 52953088 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207446016 unmapped: 52953088 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207446016 unmapped: 52953088 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207446016 unmapped: 52953088 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207446016 unmapped: 52953088 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207446016 unmapped: 52953088 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207446016 unmapped: 52953088 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207446016 unmapped: 52953088 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207446016 unmapped: 52953088 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207446016 unmapped: 52953088 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207454208 unmapped: 52944896 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207454208 unmapped: 52944896 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207454208 unmapped: 52944896 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.7 total, 600.0 interval#012Cumulative writes: 31K writes, 120K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s#012Cumulative WAL: 31K writes, 11K syncs, 2.73 writes per sync, written: 0.08 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5204 writes, 25K keys, 5204 commit groups, 1.0 writes per commit group, ingest: 16.02 MB, 0.03 MB/s#012Interval WAL: 5204 writes, 2245 syncs, 2.32 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207454208 unmapped: 52944896 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207454208 unmapped: 52944896 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207454208 unmapped: 52944896 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207454208 unmapped: 52944896 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207454208 unmapped: 52944896 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207454208 unmapped: 52944896 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207462400 unmapped: 52936704 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 52928512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 52928512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 52928512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 52928512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 52928512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 52928512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 52928512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 52928512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 52928512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 52928512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 52928512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 52928512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 52928512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 52928512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 52928512 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207478784 unmapped: 52920320 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207486976 unmapped: 52912128 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207486976 unmapped: 52912128 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207486976 unmapped: 52912128 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207486976 unmapped: 52912128 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207486976 unmapped: 52912128 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207486976 unmapped: 52912128 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207486976 unmapped: 52912128 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 52903936 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207511552 unmapped: 52887552 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207511552 unmapped: 52887552 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207511552 unmapped: 52887552 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207511552 unmapped: 52887552 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207511552 unmapped: 52887552 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207511552 unmapped: 52887552 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207511552 unmapped: 52887552 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207511552 unmapped: 52887552 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207511552 unmapped: 52887552 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207511552 unmapped: 52887552 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207511552 unmapped: 52887552 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207511552 unmapped: 52887552 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207511552 unmapped: 52887552 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207511552 unmapped: 52887552 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207519744 unmapped: 52879360 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207519744 unmapped: 52879360 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207519744 unmapped: 52879360 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207519744 unmapped: 52879360 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207519744 unmapped: 52879360 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207519744 unmapped: 52879360 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207519744 unmapped: 52879360 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 295.891906738s of 295.928131104s, submitted: 22
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207519744 unmapped: 52879360 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207519744 unmapped: 52879360 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 52830208 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 52830208 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 52830208 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 52830208 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 52830208 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 52830208 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 52830208 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 52830208 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207577088 unmapped: 52822016 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207577088 unmapped: 52822016 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207577088 unmapped: 52822016 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207577088 unmapped: 52822016 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207577088 unmapped: 52822016 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207577088 unmapped: 52822016 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207577088 unmapped: 52822016 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207577088 unmapped: 52822016 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207585280 unmapped: 52813824 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207585280 unmapped: 52813824 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207585280 unmapped: 52813824 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207585280 unmapped: 52813824 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207585280 unmapped: 52813824 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207585280 unmapped: 52813824 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207585280 unmapped: 52813824 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207593472 unmapped: 52805632 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207593472 unmapped: 52805632 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207593472 unmapped: 52805632 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207593472 unmapped: 52805632 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207593472 unmapped: 52805632 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207593472 unmapped: 52805632 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207593472 unmapped: 52805632 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207593472 unmapped: 52805632 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207593472 unmapped: 52805632 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207593472 unmapped: 52805632 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207601664 unmapped: 52797440 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207609856 unmapped: 52789248 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207609856 unmapped: 52789248 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207609856 unmapped: 52789248 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207609856 unmapped: 52789248 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207609856 unmapped: 52789248 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207609856 unmapped: 52789248 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207609856 unmapped: 52789248 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207609856 unmapped: 52789248 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207609856 unmapped: 52789248 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207609856 unmapped: 52789248 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207618048 unmapped: 52781056 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207618048 unmapped: 52781056 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207618048 unmapped: 52781056 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207618048 unmapped: 52781056 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207618048 unmapped: 52781056 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207618048 unmapped: 52781056 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207618048 unmapped: 52781056 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207618048 unmapped: 52781056 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207618048 unmapped: 52781056 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207618048 unmapped: 52781056 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207618048 unmapped: 52781056 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207626240 unmapped: 52772864 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207634432 unmapped: 52764672 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207634432 unmapped: 52764672 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207634432 unmapped: 52764672 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207634432 unmapped: 52764672 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207634432 unmapped: 52764672 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207634432 unmapped: 52764672 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207634432 unmapped: 52764672 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207642624 unmapped: 52756480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207642624 unmapped: 52756480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207642624 unmapped: 52756480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207642624 unmapped: 52756480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207642624 unmapped: 52756480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207642624 unmapped: 52756480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207642624 unmapped: 52756480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207642624 unmapped: 52756480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207642624 unmapped: 52756480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207642624 unmapped: 52756480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207642624 unmapped: 52756480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207642624 unmapped: 52756480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207642624 unmapped: 52756480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207642624 unmapped: 52756480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207642624 unmapped: 52756480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207642624 unmapped: 52756480 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207650816 unmapped: 52748288 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207659008 unmapped: 52740096 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207659008 unmapped: 52740096 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207659008 unmapped: 52740096 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207659008 unmapped: 52740096 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207659008 unmapped: 52740096 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207659008 unmapped: 52740096 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207659008 unmapped: 52740096 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207659008 unmapped: 52740096 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207667200 unmapped: 52731904 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207667200 unmapped: 52731904 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207667200 unmapped: 52731904 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207667200 unmapped: 52731904 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207667200 unmapped: 52731904 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207667200 unmapped: 52731904 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 52723712 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 52723712 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 52723712 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 52723712 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 52723712 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207683584 unmapped: 52715520 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207683584 unmapped: 52715520 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207683584 unmapped: 52715520 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207691776 unmapped: 52707328 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207691776 unmapped: 52707328 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207691776 unmapped: 52707328 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207691776 unmapped: 52707328 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207691776 unmapped: 52707328 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207691776 unmapped: 52707328 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207691776 unmapped: 52707328 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207691776 unmapped: 52707328 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207699968 unmapped: 52699136 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207699968 unmapped: 52699136 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207699968 unmapped: 52699136 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207699968 unmapped: 52699136 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207708160 unmapped: 52690944 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207708160 unmapped: 52690944 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207708160 unmapped: 52690944 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207708160 unmapped: 52690944 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207708160 unmapped: 52690944 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207708160 unmapped: 52690944 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207716352 unmapped: 52682752 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207724544 unmapped: 52674560 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207724544 unmapped: 52674560 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207724544 unmapped: 52674560 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207724544 unmapped: 52674560 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207724544 unmapped: 52674560 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207732736 unmapped: 52666368 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207732736 unmapped: 52666368 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207732736 unmapped: 52666368 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207732736 unmapped: 52666368 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207732736 unmapped: 52666368 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207732736 unmapped: 52666368 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207732736 unmapped: 52666368 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207732736 unmapped: 52666368 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207732736 unmapped: 52666368 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207732736 unmapped: 52666368 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207749120 unmapped: 52649984 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207757312 unmapped: 52641792 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207773696 unmapped: 52625408 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207781888 unmapped: 52617216 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207781888 unmapped: 52617216 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207781888 unmapped: 52617216 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207781888 unmapped: 52617216 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207781888 unmapped: 52617216 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207781888 unmapped: 52617216 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207781888 unmapped: 52617216 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207790080 unmapped: 52609024 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207790080 unmapped: 52609024 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207790080 unmapped: 52609024 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207790080 unmapped: 52609024 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207790080 unmapped: 52609024 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207790080 unmapped: 52609024 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207790080 unmapped: 52609024 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207790080 unmapped: 52609024 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207798272 unmapped: 52600832 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207798272 unmapped: 52600832 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207798272 unmapped: 52600832 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207806464 unmapped: 52592640 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207806464 unmapped: 52592640 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207806464 unmapped: 52592640 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207806464 unmapped: 52592640 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207806464 unmapped: 52592640 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692968 data_alloc: 234881024 data_used: 17989712
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207814656 unmapped: 52584448 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207814656 unmapped: 52584448 heap: 260399104 old mem: 2845415832 new mem: 2845415832
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f2ede000/0x0/0x4ffc00000, data 0x466a6bc/0x492e000, compress 0x0/0x0/0x0, omap 0x944b9, meta 0x835bb47), peers [0,2] op hist [])
Jan 31 00:14:21 np0005603435 ceph-osd[86873]: prioritycache tune_memory target: 4294967296 mapped: 207822848 unmapped: 52576256 heap: 260399104 old mem: 2845415832 new mem: 2845415832
